The best standard is a killer application


I was reading a post recently by a friend, Mark, about the challenges of getting Wifi Direct working across multiple Android devices. As with most Android development, you are reasonably assured that your app will work on a wide variety of devices as long as you stay within the Android framework, but the moment you start working with hardware sensors, you get into wtf territory pretty quickly (talk to anyone who has built a camera-related app on Android).

The problem is probably that Google’s guidance to OEMs mandates a few APIs that they must implement but leaves most of the details to them. And even this only covers devices that Google can actually control. There are a lot of Android OEMs that have no relationship with Google (Xiaomi, Amazon, etc).

To get these guys rowing in the same direction, you don’t need better documentation or rules, you need killer apps built on top of them. In the case of cameras, for example, I imagine it would be hard for an OEM to ship a product without making sure Facebook and Instagram work on the device.

Wifi-Direct’s problem is that there isn’t yet a killer app built on top of it. I think Android Beam uses it, but no one really uses that feature (Beam itself was probably the wannabe killer app that would force NFC and Wifi-Direct adoption among Android OEMs).

It would be hard to imagine a startup or an independent developer building a killer app for it, essentially betting the bank on a fractured technology ecosystem. If Google were serious about it, it would have to come from them. But Google’s direction now is to look increasingly to the cloud to solve these problems, so I imagine Wifi-Direct will be left to the side in favor of something powered by WebRTC.

This thinking also needs to be applied in the IoT market, where there are so many new standards for device-to-device communication (AllJoyn, Thread, Brillo, etc). However, without a real killer app in that world, most of these seem to be on the same road as Wifi-Direct.

The best standards emerge from successful products.

Google Maps Timelines and my 2007 maps hack

I just saw a blog post from Google announcing the timelines feature in the new Maps app for Android. The feature extends the previously available (though often hard-to-find) location history view with photos from Google Photos.

I am really glad this exists now. I have wished for something like this for a very long time. I even started making my travel maps manually back in 2007 using Google Maps’ “My Maps” feature. In my case the photos came in from Flickr and were embedded with text in an iframe. I even started working on an app that would do this but lost interest halfway through (story of my life ;) )


The fact that it’s automatic is very convenient, though I wish I could add non-Google data to it. I am also surprised that these timelines don’t integrate with the Stories feature of Google Photos (formerly Google+ Photos).

Notes from the 2015 Quantified Self Conference


The 2015 QS conference was at Herbst Pavilion / Cowell Theater right on Pier 2 in San Francisco.

If you can’t measure it, you can’t improve it. – Peter Drucker

A few years ago I was introduced to a growing subculture in the U.S. that was really interested in numerically measuring a variety of aspects of their lives (steps, sleep, environmental conditions around where they lived, etc.) and in drawing correlations from this data to improve some aspect of their lives. This was before the current explosion of fitness devices, and collecting this data was a lot more difficult; yet these individuals went to enormous effort, often wearing clunky self-made devices, to get access to the numbers they were looking for. The behavior seemed, at least by conventional definitions, not normal.
Today, at least part of this behavior has seeped into conventional user behavior with the rise of measuring devices like Fitbit and smart watches, and the constant run of ads emphasizing the importance of tracking your own fitness. But the QS community continues to break new ground in identifying interesting metrics about themselves and finding creative ways to collect them.
A couple of weeks ago, the QS community had their annual conference in San Francisco and I was fortunate to be able to attend it as Comcast Labs, the group I work for, was one of the primary sponsors.
The QS community is an interesting group of people who come together around their common interest in measuring personal data. I had assumed it would be predominantly technologists or statisticians, but I met people from a variety of backgrounds, from artists to models to fitness instructors.
I feel I met three types of people there:
  • People doing it for pure curiosity, like measuring time spent on couches or watching TV, cataloging their travels or seeing if chat history could show when they fell in love with their now significant other.
  • People who were trying to deal with real or potential health issues, like building apps to collect data quantifying the effects of both disease and medication on their bodies.
  • People who were using their own data for artistic interpretations or visualizations.
The Data
What was interesting was the sheer number of data points people were tracking. A lot of this data used to require medical-grade equipment to measure but can now be measured with fair accuracy using off-the-shelf devices.
These included:
  • Heart Rate Variability
  • EDA (electrodermal activity), also known as EDR or GSR (galvanic skin response)
  • BMI
  • Blood pressure
  • Fat percentage
  • Macronutrient intake
  • Sugar intake
  • Hydration
  • Resting Metabolic Rate
  • Resting Heart Rate
  • Anaerobic threshold
  • VO2Max
  • METs
  • EEG
  • Breathing patterns
Not all metrics were biological. There were also examples of people tracking environmental and social metrics that could influence their lives like:
  • Electric and magnetic fields in their environment
  • Expenses
  • Email volume and how it related to their stress levels
Questions and Answers
Getting data is only one part of the equation. The real challenge is correlating this data to draw insights. There were a number of talks around people drawing such correlations, which were interesting because a few missed that old adage about correlation not being causation (did you hear the joke about data proving that the decline of pirates causes global warming? Plot pirate numbers against global temperatures and they correlate pretty closely). The other challenging aspect is the observer effect (often misattributed to Heisenberg): the very act of measuring something changes what you are measuring.
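The pirates joke is easy to reproduce with a few lines of code: any two series that merely trend over time will show a strong Pearson correlation. The numbers below are made up purely for illustration:

```javascript
// Pearson correlation coefficient of two equal-length series.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = (a) => a.reduce((sum, v) => sum + v, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    cov += dx * dy;
    vx += dx * dx;
    vy += dy * dy;
  }
  return cov / Math.sqrt(vx * vy);
}

// Made-up yearly figures: pirate numbers falling, temperatures rising.
const pirates = [45000, 20000, 15000, 5000, 400, 17];
const tempC = [14.25, 14.35, 14.45, 14.6, 14.75, 14.85];

console.log(pearson(pirates, tempC).toFixed(2)); // strongly negative, yet no causation
```

Two unrelated quantities, each trending over time, and the math happily reports a near-perfect relationship.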
Even so, the speakers at the event had some really interesting talks trying to answer a lot of questions via data, such as:
  • How does exercise affect metrics like blood pressure or mood?
  • How do different diets influence mood, health or different ailments?
  • How does zapping your head with controlled electric current affect your brain?
  • How does stress affect heart rate variability?
  • How much does it cost to eat healthy?
  • What happens to your expenses if you don’t have a place to live for a year?
You can see their entire lineup of talks here.
Most of the community uses Google spreadsheets to analyze their data, though there was also a talk by a developer at Tableau Software about how he used it to analyze his data. But the community is small enough that right now there aren’t many general-purpose, statistics-oriented tools built for them. At this point just getting data in a non-proprietary format like CSV is a challenge (most fitness-related services like Fitbit, Google Fit or HealthKit offer their data via APIs, but only a fraction of the audience here were software engineers). Other software tools like Beeminder, Zenobase, Gyroscope, Compass and RescueTime were also interesting to see. New wearables like Spire, which tracks breathing patterns, also piqued my interest, but having acquired one, I feel like a second wearable is just one (heh, maybe even two) too many.
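To give a sense of the gap: these APIs generally hand back JSON, and flattening it into CSV for a spreadsheet is a few lines of scripting, yet that is still a real barrier for the non-engineers in the room. A minimal sketch, assuming a hypothetical record shape (not any vendor’s actual format):

```javascript
// Flatten an array of uniform JSON records (e.g. daily summaries
// fetched from a fitness API) into a CSV string. The field names
// below are hypothetical, purely for illustration.
function toCsv(records) {
  if (records.length === 0) return "";
  const headers = Object.keys(records[0]);
  const escape = (v) => {
    const s = String(v);
    // Quote fields containing commas, quotes or newlines; double up quotes.
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const rows = records.map((r) => headers.map((h) => escape(r[h])).join(","));
  return [headers.join(","), ...rows].join("\n");
}

const days = [
  { date: "2015-06-18", steps: 9321, restingHeartRate: 61 },
  { date: "2015-06-19", steps: 12044, restingHeartRate: 59 },
];
console.log(toCsv(days));
```

Trivial for a programmer, but for everyone else the data stays locked behind the API.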

Spire is a wearable that clips to your waist, tracks your breathing patterns and guides you toward a calmer breathing pattern

In conclusion
The QS conference was quite an experience, especially since I wasn’t that aware of all the stuff happening there and everything felt new. While a lot of it felt a little “out there”, it was the same feeling I got three years ago when I first heard about these guys tracking their steps and hours of sleep. It’ll be interesting to see how much of this data measurement becomes commonplace in the coming years.

A better SXSW scheduling experience

I am currently at SXSW, my fourth trip to the event. As conferences go, I find SXSW pretty enjoyable and always come back with a couple of new ideas to play with. However, as much as I like the event, trying to manage your schedule in the SXSW app itself is pretty annoying.

At the top of the app as it exists currently, the days are listed as horizontally scrollable tabs. Tapping a particular day lists the events for that day in a vertical list, which is typical for this kind of application (though the SXSW app makes it worse by not even adding section headers marking the division between time slots). You can star individual sessions and view them in a separate “My Events” section of the app. This design has a couple of problems:
  • It’s hard to glance at how your schedule looks across time slots. Even the My Events section of the app doesn’t group your events by time.
  • Location is a big factor when choosing events. I will often pick an event I am less interested in over another if it is closer to where I am.
Given my irritations, I put together a quick web app for SXSW by scraping their schedules. Here is how my app looks:
Because switching the day of the event is an infrequent action, the app lets you do that using a drop-down chooser placed in the header. This frees the horizontal axis to break the day out into time slots. Within each time slot, the events are grouped by location. Favoriting an event moves it to the top and changes its background color so that you can quickly scan the events you have marked as interesting. At the moment, these favorites are saved locally in the browser’s localStorage and do not sync across devices.
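The logic described above is small enough to sketch. The field names here are hypothetical stand-ins for the scraped schedule data, and the storage object is abstracted so it could be the browser’s localStorage or anything else with the same getItem/setItem interface:

```javascript
// Group a flat list of sessions by time slot, then sort each slot so
// favorites float to the top and the rest are ordered by venue.
function groupBySlot(sessions, favoriteIds) {
  const slots = {};
  for (const s of sessions) {
    (slots[s.slot] = slots[s.slot] || []).push(s);
  }
  for (const slot of Object.keys(slots)) {
    slots[slot].sort((a, b) => {
      const favA = favoriteIds.has(a.id);
      const favB = favoriteIds.has(b.id);
      if (favA !== favB) return favA ? -1 : 1; // favorites first
      return a.venue.localeCompare(b.venue);   // then group by venue
    });
  }
  return slots;
}

// Toggle a favorite and persist the set. `storage` is anything with
// getItem/setItem, e.g. window.localStorage in the browser.
function toggleFavorite(storage, id) {
  const favs = new Set(JSON.parse(storage.getItem("favs") || "[]"));
  if (favs.has(id)) favs.delete(id); else favs.add(id);
  storage.setItem("favs", JSON.stringify([...favs]));
  return favs;
}
```

In the browser you would call `toggleFavorite(window.localStorage, id)` from the click handler and re-render the grouped list.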
The app was a quick experiment and of course needs a lot more improvement: global filters by event type and location, cross-device sync, etc. There are also some HTML glitches I need to fix, like the whole page scrolling slightly within the viewport on mobile. The code is all on Github if you’d like to add those features or fix bugs.
As always, thoughts and opinions are welcome.

UI Concept: Using Android’s Soft Keys for Screen Pinning

I have written before about how Android could use its software navigation buttons more effectively. The way Android 5.0 and 5.1 handle screen pinning seems like another one of those situations where it could leverage that capability.

For those unaware of the feature, Android 5.0 introduced screen pinning, which locks the device to a single application. The primary use case is to prevent a child, or someone you hand your phone to, from accidentally exiting an app that you want them to see or use. Since exiting the pinned app doesn’t require a PIN by default, the feature is designed less for security and more for preventing accidents. Exiting requires you to tap the “Back” and “Overview” buttons at the same time, a gesture people may forget (though the OS does bring up a message if you tap any of the navigation buttons while a screen is pinned). This is probably what prompted the more explicit how-to view explaining the exit action in Android 5.1.

However, this feature once again doesn’t take advantage of the Android soft keys. When a screen is pinned, why not change the graphics for Back and Overview to pin icons? This would also reinforce the fact that the phone is in a separate “pinned screen” mode.


As always, thoughts are welcome :)

WhatsApp Web App and the Rise of Remote User Interfaces

For the last couple of years I have been using WhatsApp to keep in touch with my friends and family in India. While it doesn’t seem to be very popular in the US, it’s amazing to see how almost everyone I know in India is on it. However, till this week, it remained solely a mobile app, which is fine in the whole “think mobile first” world, but does get annoying when you are sitting in front of a PC and still have to dig out your phone to respond to a message (or talk into your Android Wear watch ;) ). This week, WhatsApp finally launched a web client.

However, as I started reading up on it, it turned out to be a pretty bizarre web app. I read a message by the CEO of WhatsApp claiming Apple’s tech did not offer them the right hooks for their implementation (basically apps running in the background and some unique features of Android push notifications).

It turns out the “web app” was just a visual shell, with the messages being sent back and forth by the phone app itself. This is easy to verify by putting your phone in Airplane mode, which immediately brings up a warning notification in the web app.


The other interesting titbit is why the app was Chrome-only. What Chrome-only tech was WhatsApp relying on? The answer, after a bit of googling, turned out to be Chrome Push Notifications. Basically, their architecture seems very similar to apps like MightyText or Pushbullet that have started to bridge the Android phone and desktop Chrome experiences.

It’s a pretty interesting implementation. One theory on why WhatsApp went with it is their encryption system: rather than re-implement that in JavaScript in the browser, it’s just easier to send the message via the phone, if you can get the message to the phone locally from the browser. Making the desktop UI a dumb presentation layer could have a lot of advantages, since it reduces the number of clients you have to support.
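The relay idea is easy to model in the abstract. The sketch below is purely illustrative (a toy XOR “cipher” and made-up object shapes, nothing like WhatsApp’s real protocol): the browser shell never holds the key or a server connection; it only hands plaintext to the phone, which encrypts and forwards it.

```javascript
// A toy model of the remote-UI relay. The phone holds the key and the
// server connection; the browser is a dumb shell. The XOR "cipher" is
// a placeholder, purely for illustration.
function makePhone(secretKey, server) {
  const encrypt = (text) =>
    [...text].map((c) => String.fromCharCode(c.charCodeAt(0) ^ secretKey)).join("");
  return {
    online: true,
    relay(message) {
      if (!this.online) throw new Error("phone offline: web client is stuck");
      server.receive(encrypt(message)); // only ciphertext ever leaves the phone
    },
  };
}

// The "web app": renders UI and forwards input; it never sees the key
// and never talks to the chat servers directly.
function makeBrowserShell(phone) {
  return { send: (text) => phone.relay(text) };
}

const server = { inbox: [], receive(msg) { this.inbox.push(msg); } };
const phone = makePhone(42, server);
const shell = makeBrowserShell(phone);
shell.send("hello"); // reaches the server encrypted, via the phone
```

This also captures the Airplane mode behavior: flip `phone.online` to false and the shell’s send fails, just as the real web client stalls when the phone drops offline.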

It almost seems that we are now starting to move towards a world of remote UIs: apps running on one machine (usually your phone) but pushing their interfaces to another device that may be more contextually appropriate. Some other examples of this include:

  • The Apple Watch and CarPlay both run apps that do all the compute on the phone and present the visuals on the watch or the car’s dashboard display.
  • Android Wear and Android Auto require a little more computational capacity on the remote display; in both cases the display needs to be running Android as well, with only data moving back and forth. But the core idea remains the same.

While a few folks are crying foul about this implementation, I am kind of a fan. Besides avoiding multiple code bases for desktop and mobile, this setup restricts the number of devices you are signed in to. Since your phone is the one channel back to the servers, you authenticate in one place and just use the most appropriate screen around.

It’ll be interesting to see if more apps adopt this architecture. It’s unusual but seems pretty cool. It also might be the beginnings of Android’s answer to iOS’s Continuity feature.