Thoughts on Google IO 2017

Screen Shot 2017-05-20 at 3.43.40 PM.png

I spent this entire week on the West Coast attending the North America GDG Managers’ Summit and the I/O events. I am still processing some of the conversations from the Managers’ Summit and how to use them to improve GDG Philadelphia, which I help run, so I’ll leave that to a future blog post. This post is about the I/O event only.

The list of announcements, both big and small, is a mile long and has been well covered by other publications. My own gist of the announcements is here (feel free to send me a pull request if you want to add anything). Here are some thoughts on just I/O this year:

AI All the Things

Google’s internal push to pepper their products with features only possible using AI is clearly bearing fruit. The features range from pure utility, like enhanced copy and paste on Android, to flagship capabilities like Google Lens, which brings object recognition in photos and videos to Google Photos and the Assistant. I am particularly excited by the TensorFlow Lite project, and programming for AI is something I am going to learn this year.

Immersive Computing (VR / AR / MR / xR)

People seem to love coming up with new terminology in this space. Google buckets the VR/AR technologies under “Immersive Computing”. They are doing some really interesting things here, and I am glad to see them continue to push the state of the art. I was particularly impressed by Project Seurat, which uses algorithms to let developers mimic complex, even movie-quality, 3D models with much simpler geometry.

On the Tango / Augmented Reality side, Google’s Visual Positioning System (VPS) truly impresses as well. In fact, in one conversation, a Googler mentioned that the Google Maps team was heavily involved in VPS development.

There were also some great demos of AR capture and reconstruction using the upcoming Asus ZenFone AR. The big question is: when does a Google Pixel get a depth sensor and Tango support?

Actions on Google

Google’s new Actions platform, which lets you build skills for the Google Assistant on Google Home, Android and iPhone, was very interesting. The tooling basically consists of 3 components:

  • The Actions on Google console that lets you manage your …um… actions
  • The API.AI tier that your actions will probably need to handle natural language input
  • Chatbase, Google’s analytics platform for chatbots that lets you observe your bots’ growth and engagement over time

I liked the system, and it seems pretty trivial to make a simple chatbot… I mean, Action. They also announced a competition for the platform, so get ready to see a lot of new ways to order pizza 😉

Android Dev

Android SDK + Firebase

Google continues to push Firebase as an essential part of Android development. Google’s cloud services have been catching up to AWS’ for a while, and Firebase seems to be a great alternative to AWS Mobile. AWS’ tools are not friendly to a mobile developer, while the Firebase tools feel much more approachable. The addition of services like Performance Monitoring makes Firebase an even more essential part of the Android developer’s toolkit.

Google Play Developer Console Updates

I haven’t pushed anything to the Google Play Store since Picscribe in 2013. The publisher tools back then were functional and did a decent job, I thought, but the latest updates to the publisher experience are fantastic. More tools to run A/B tests, greater visibility into the top reasons for crashes, pre-release testing, etc. will let developers really optimize their apps right from the store.

Kotlin is an official second language for Android development

I am mostly ambivalent about Kotlin (😱). I had no particular issues with using Java for Android development, except maybe an occasional gripe about not being able to pass functions around. I do welcome Kotlin’s less verbose syntax, but I dread a repeat of what happened when Swift was introduced to the iOS ecosystem, where the focus seemed to shift from cool apps to various academic discussions (if I hear about monads one more time…).

Also, Swift’s rapid evolution meant that code examples and Stack Overflow answers stopped working within a few months. Let’s hope this is less of an issue on the Android side.

And of course, a new developer moving to Android now needs to know not only Java but Kotlin as well, since codebases will be a mix of the two.

On the flip side, the copy-Java-and-paste-as-Kotlin feature in Android Studio is pretty dope.

Cloud Functions: The rise of Lambdas

With so much functionality exposed as services by either Google or Amazon, developers can power their apps with very little backend code. That said, this creates the need for some kind of glue layer that connects all these components together. Firebase’s Cloud Functions and Amazon’s Lambdas serve this need. The workflow for Amazon Lambdas is still slightly awkward, but Firebase’s feels a lot better.
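As an illustration of what that glue layer looks like, here is a sketch of the kind of logic you might deploy as a Cloud Function. I’ve kept the actual logic as a plain function so it’s easy to test; the `/messages` path, the message shape, and the `sendPush` helper in the comment are all hypothetical:

```javascript
// Sketch of the "glue" logic a Cloud Function might run. In a real Firebase
// project you would wire it up with the firebase-functions SDK, along the
// lines of:
//
//   const functions = require('firebase-functions');
//   exports.onNewMessage = functions.database.ref('/messages/{id}')
//       .onWrite(event => sendPush(notifyText(event.data.val())));
//
// (The /messages path and sendPush helper are hypothetical.)

// Turn a newly written chat message into a short notification string.
function notifyText(message) {
  const from = (message && message.from) || 'someone';
  const body = (message && message.text) || '';
  const preview = body.length > 40 ? body.slice(0, 40) + '…' : body;
  return from + ': ' + preview;
}
```

The appeal is that this is the *only* code you own: storage, triggering and scaling are all the platform’s problem.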

Final thoughts

There were a lot of cool technologies on show at I/O, and it was awesome. The other amazing part was just meeting old friends from across the world and even making some new ones.

I will also say this: this was one of the BEST organized events I have ever attended, and kudos to Google for pulling it off. The session reservation system worked well, there was ample shade, food and drink, and they even got the weather to be nice for 3 days 😉

Till next year!

 

I won one of Philadelphia Business Journal’s 10 “Tech Disruptors for 2017” awards

They say don’t bury the lede, but I just gave it away in the title ;). The Philadelphia Business Journal comes up with a list of 10 Tech Disruptors every year who are “blazing new trails and inspiring others in the technology community”. I am one of the 10 for this year, and in the extremely smart company of local CEOs, CTOs and founders.

phl

Thanks for the honor @PHLBizJournal. It’s great to see your name in the paper (well, for the right reasons 😉 )

The return of the QR code

Over the last few years, I have found myself defending QR codes in different conversations. While huge in the rest of the world, QR codes were never embraced in the West. Aesthetics was one complaint I heard multiple times (“They are so ugly”), but QR codes solved a real problem: bridging the offline world with the online one.

For whatever reason, neither Apple nor Google devices ship with a default QR code reader. Apple’s default camera app has some image recognition built in, which lets you scan iTunes gift cards, but neither Apple nor (more surprisingly) Google has shown any interest in QR codes.

But QR codes have snuck back into our society over the last few years. Some of these aren’t normal QR codes and maybe deserve their own label (scan codes?), but the idea remains the same: a graphic that codifies text that a scanner (camera) can read from a distance.

  • Snapchat popularized the idea with their Snapcodes that let users add other Snapchat users as friends.
  • Twitter, Kik, Facebook Messenger and Google Allo followed and now scanning a code to initiate a connection is starting to become normal.

Screen Shot 2017-05-13 at 11.50.37 AM

Today at F8, Facebook’s big developer event, they announced that Messenger will now support their own scan-code, which they call Parametric Codes, and which you’ll be able to use for all sorts of things, from friending to payments (offline payments via scan-codes are a big deal in China, where Messenger is taking a lot of its feature development cues from).

As happy as I am to see the return of these codes, the proprietary nature of each of them is a bit of a bummer. Hopefully, though, they will make the idea of scanning a code to connect with the real world more mainstream.

Update:

The Y Combinator blog has a very interesting article on the rise of WeChat, and this section on QR codes is especially interesting:

WeChat’s elevation of the QR code as a link from the offline became the lynchpin for China’s online-to-offline boom in 2015. Previously, to engage with a service or brand, a user would have to search or enter a website address. WeChat’s Pony Ma says of QR codes, “it is a label of abundant online information attached to the offline world”. This logic explains why WeChat chose to promote QR codes in the first place. QR codes never took off in the U.S. for three key reasons: (1) the #1 phone and the #1 social app didn’t allow you to scan QR codes. (2) Because of this, people had to download dedicated scanner apps, and then the QR code would take them to a mobile website, which is arguably more cumbersome than simply typing in the URL or searching for the brand on social media. (3) Early use cases focused on low-value, marketing related content and at times was merely spam. So, even though QR codes would’ve been U.S. marketers’ dream, it was a few steps too far to be useful.

With the established adoption of QR codes, WeChat launched “Mini Programs” as an extension of WeChat Official Accounts designed to enable users to access services in a more frictionless way just like the web browser did

Adding React Native to an existing Android App

After my recent adventures with ReactJS, I have been thinking of playing around a bit with React Native. I have looked at it before and even gave a talk on an earlier version of it at an AndroidPhilly event. Instead of starting a new React Native app, I figured it might be a good idea to try it on an Android app I am already developing (more on that later). Instagram recently wrote an interesting article on adding React Native to their existing app, so I was glad there was a migration path for native apps that didn’t involve starting a React Native app from scratch.

Let’s go

laptop

The React Native site has a fairly good walkthrough on adding RN to a native app. I did as instructed (I think) and mostly things went OK. And then the expected issues began.

So long Jack

Hitting compile in Android Studio seemed to work for a bit, and then it would just hang. Not getting anything useful there, I tried to compile the app from the command line using

./gradlew installDebug

What I saw was that the process would get stuck at a “compile with Jack” step. Jack was a new toolchain Google was building for Android development, and it was the way to use some Java 8 features on current devices. However, this month Google announced they were moving away from Jack, and fortunately my app barely used it, so I decided to remove Jack from the app altogether. That worked: Android Studio compiled and packaged the app, and I got it running on the emulator! It looked like…
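For reference, removing Jack just meant deleting the jackOptions block from the app-level build.gradle (or flipping it off). A sketch of what that block looks like:

```groovy
android {
    defaultConfig {
        // Deleting this block entirely, or setting enabled false,
        // turns the Jack toolchain off
        jackOptions {
            enabled false
        }
    }
}
```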

js

Wherefore art the JS Bundle

This seemed to be a simple network issue, with the emulator not seeing the locally running server. Having dealt with this while trying to get the emulator to see a locally running Rails API, I figured it was something simple. React Native comes with a built-in settings activity that you can call by rage-shaking the device or executing

adb shell input keyevent 82

which brings up the options menu that you can then use to launch the Settings Activity.

Wrong Settings

…except that doing that launched my app’s preferences screen. Turns out React Native was just looking for a file named preferences.xml and launching that. It just so happens my app’s preferences screen was also defined by an XML file with the same name, hence the conflict. Renaming my app’s preferences.xml got around that.

Crash

I changed the IP address on React Native’s debug screen and went back to the main activity, only to have the app crash. Looking at Logcat, I got this weird error:

Caused by: java.lang.IllegalAccessError: 
Method 'void android.support.v4.net.ConnectivityManagerCompat.()' 
is inaccessible to class 'com.facebook.react.modules.netinfo.NetInfoModule' 
(declaration of 'com.facebook.react.modules.netinfo.NetInfoModule' 
appears in /data/app/*****.debug-2/base.apk)
     at com.facebook.react.modules.netinfo.NetInfoModule.(NetInfoModule.java:55)
     at com.facebook.react.shell.MainReactPackage.createNativeModules(MainReactPackage.java:67)
     at com.facebook.react.ReactInstanceManagerImpl.processPackage(ReactInstanceManagerImpl.java:793)
     at com.facebook.react.ReactInstanceManagerImpl.createReactContext(ReactInstanceManagerImpl.java:730)
     at com.facebook.react.ReactInstanceManagerImpl.access$600(ReactInstanceManagerImpl.java:91)
     at com.facebook.react.ReactInstanceManagerImpl$ReactContextInitAsyncTask.doInBackground(ReactInstanceManagerImpl.java:184)

Oh weird. Going by this thread, it looked like a recent API change in Play Services was causing the issue. But hold on: according to the conversation, the issue was fixed in RN v22. What the heck was I using?

Listing dependencies with gradlew

I didn’t know this, but you can actually list your exact dependency tree by calling

./gradlew app:dependencies

which gives you a nice dependency tree that looks like

Screen Shot 2017-03-29 at 1.05.06 AM

Since the dependency declared in my build.gradle was just

compile "com.facebook.react:react-native:+"

it could be satisfied by any version of the RN artifact that Gradle found on Maven Central. One of the steps in setting up the project was supposed to point Gradle at the local folder in node_modules that held the latest version (0.43 at the time of writing), while the only version on Maven Central was ancient (0.20). Turns out (after about an hour of pulling my hair out) that there was an error in the file path I had used to identify the node_modules location, and setting it right fixed the final issue.
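For anyone hitting the same wall, the relevant bit is the extra repository entry in the root build.gradle that the RN setup has you add. The exact relative path depends on where your node_modules lives relative to the Android project (mine was off by a directory), so treat this as a sketch:

```groovy
allprojects {
    repositories {
        // Resolve com.facebook.react:react-native from the local npm
        // install instead of the ancient artifact on Maven Central
        maven {
            url "$rootDir/../node_modules/react-native/android"
        }
        mavenCentral()
    }
}
```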

Voila!

4 hours of dev or so, and my app finally launched its hello-world React Native screen alongside the native activities. Not too bad; I was expecting worse, given all the new tools in play. Let’s hope this is worth it.


Tip:

Btw, unlike developing React for the web (using Create-React-App), the console that you use to launch the dev server does not show the log messages. To see your app logs, use

react-native log-android

 

Emergent Ethics for Artificial Intelligence

AI systems today present not only a new set of technical challenges but ethical ones as well. The one I have seen mentioned most often involves a self-driving car that is about to crash and has to choose between hitting children on the street, a pedestrian on the footpath, or a wall (killing the occupant). As the MIT Technology Review article titled “Why Self-Driving Cars Must Be Programmed to Kill” phrases it:

How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?

It’s hard to imagine anyone wanting to solve this problem, since it doesn’t have any good solutions. But not making a decision here would be a decision in itself.

At SXSW today, I saw an interesting presentation that sparked an idea. The session, titled “Humans and Robots in a Free-for-All Discussion”, had two robots discussing different ideas with each other and a human. A video of the session is embedded below:

 

The idea of robots talking to each other had a brief moment of internet popularity earlier, when two bots on hacked Google Home devices chatted with each other on a Twitch live stream.

What is interesting is that the bots were programmed with just facts and allowed to come to their own conclusions. The photo from the presentation below shows how the system took in bare facts and then, by using supporting or negating statements, could reach a conclusion by itself.

The idea is intriguing. Could this be how cars will learn ethics? No human would ever verbally put a price on human life, yet through our actions a lot of us do all the time.

Could ethics in AI not be something we code but allow to emerge based on facts that we train the model on?

 

 

Serving multiple ReactJS apps with Express

I am currently working on a web portal for showcasing our team’s different projects, white papers and reports. Given the limited scope of the project, we thought it might be a good idea to try a full JavaScript stack of MongoDB, Express and ReactJS.

Since the site needed both a user-facing portal and an admin portal, we started building both in parallel as separate projects. The goal was to keep the admin portal independent of the actual site itself. The pages on the site are also more or less completely defined by JSON schemas, so the portal could be reused in the future if we ever wanted to use it for a different project or release it as open source.

Facebook’s Create-React-App library made for a pretty painless start to both projects.

Proxying calls to the API

The first issue I ran into was accessing Express, which was running on a different port (3000) than Create-React-App’s development server (3001). Since the browser considered these two different sites, browser security wouldn’t let me make API calls to the server from the client. Figuring it was a problem for later, I just allowed the dev server to handle CORS requests using something like:

app.use(function(req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    next();
});

This worked for all GET requests but completely failed when I tried to POST JSON from the client. Turns out that when sending JSON, the browser pre-flights the POST request with an OPTIONS request, which I was not handling at all. As the MDN docs put it:

The [CORS] specification mandates that browsers "preflight" 
the request, soliciting supported methods from the server with an 
HTTP OPTIONS request method, and then, upon "approval" from the server, 
sending the actual request with the actual HTTP request method.
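Handling the preflight by hand would have looked something like this middleware sketch (Express-style `(req, res, next)` signature assumed; the important part is answering the OPTIONS request yourself instead of letting it fall through to the routes):

```javascript
// CORS middleware that also answers the preflight OPTIONS request.
function corsMiddleware(req, res, next) {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.header('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    // "Approve" the preflight and stop; the real POST follows separately.
    res.sendStatus(204);
    return;
  }
  next();
}

// app.use(corsMiddleware);
```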

Turns out, all the headache was for nothing. Create-React-App lets you proxy API calls to a dev server by adding a proxy line to your package.json:

"proxy": "http://localhost:3000/",

Packaging React Apps for different server paths

Another issue was that Create-React-App by default assumes the final built app will be served from the root of the server. It took a bit of digging to find that it does allow you to set the folder path for your app on the server, using the homepage field in package.json:

"homepage": "http://localhost:3000/admin",

Allowing Express to serve multiple React Apps

Finally, as we tried to deploy the front-end and the admin app, we ran into another issue. Since Express needed to serve the front-end from the server root, the admin app from the /admin/ path, and also a bunch of API responses from /api URLs, it took a bit of trial and error to get the pathing for the resources right. Our final response paths for the Express app look like this:

// API endpoints defined first
app.get("/api/projects", (req, res) => {})
app.post("/api/project", (req, res) => {})

// Admin paths
app.use('/admin/', express.static(path.join(__dirname, 'admin')))
app.get('/admin/*', function (req, res) {
    res.sendFile(path.join(__dirname, 'admin', 'index.html'));
});

// Site path
app.use('/', express.static(path.join(__dirname, 'front-end')))
app.get('/*', function (req, res) {
    res.sendFile(path.join(__dirname, 'front-end', 'index.html'));
});

This is still a work in progress and I am sure a lot more learnings will come, but right now, with the first version of the site deployed to our dev server, I am gonna get a well-earned beer 🙂

 

Remembering Yahoo

Today’s news of the post-acquisition Yahoo being called Altaba made me think of the Yahoo I have known (and for a while loved) for the last 20+ years.

While AOL may have been America’s portal, Yahoo was ours in India. I was a happy enough Yahoo Mail, Yahoo Groups and Yahoo News user, but what I was really addicted to was Yahoo Chat and Yahoo Messenger. Yahoo Chat was fantastic. The basic functionality was good enough, but I got really addicted to their IMVironments: mini Flash-based experiences that you could play in while chatting.

I spent hours in different chat rooms and even made 2 of my closest friends in a Yahoo India chat room. The first of those friends was a girl from Romania named Anca. We talked a lot about our countries, and that made me fairly knowledgeable about Romania, a country I never really thought about otherwise. When I went to grad school at Rutgers, I ended up sitting next to a very quiet guy during one of the international student orientation sessions. He opened up when he mentioned he was from Romania and we could connect on that. That guy, Nicu, ended up being my best friend in grad school and is single-handedly responsible for me not dropping out when I struggled with writing code. I had never written code before, having majored in Electronics in undergrad, and suddenly doing graduate-level and research-level C++ programming put me way out of my depth. Nicu, who had by then worked for a couple of years as a software engineer in Germany, sat with me for HOURS teaching me programming concepts and helping me with my assignments.

So in a way I might owe Yahoo for my entire career 😉

As I became a web developer, Yahoo was one of the technology companies I envied. They were always building amazing pieces of technology. Some pieces of technology I was fascinated by and worked with included:

  • YUI, which was THE user interface library for web experiences
  • YSlow, which I used a LOT when I was developing web apps
  • Yahoo Widgets, the Konfabulator app that Yahoo bought, that I wrote a couple of widgets for
  • BrowserPlus, which apparently I was a fan of at some point (who remembers?!)
  • Yahoo Cocktails, a JavaScript-based platform that powered their LiveStand iPad app
  • YQL, a query language that let you query web APIs like SQL tables

In fact, I even went to a Yahoo Hack Day in NY a while back and my friend Gabo and I ended up building an AIR app for YQL queries (we didn’t win though 😩 )

This list doesn’t include other amazing tech that came out of Yahoo like Hadoop etc that I never interacted with.

I also interacted with Yahoo a fair bit professionally: when Comcast and Yahoo struck an ad deal in 2007, I was responsible for integrating their video ad system (which didn’t exist yet) into The Fan video player, whose development I was leading. It is also one of my favorite engineering stories to tell, but that is a story for another time 😉

Thanks for the memories Yahoo.