GDG-Philly and Philly Cocoa’s “State of Mobile Union” Event

This week the GDG-Philadelphia group that I manage collaborated with the Philly Cocoa group to run our first in-person event since Covid. Turnout was much higher than I expected; I think everyone just really wanted to meet each other IRL again.

The developer meetups in the city unfortunately haven't fully re-emerged since Covid, and I am hopeful that events like this get more of them going again.

Thanks to Kotaro for leading so much of the organization effort for this event, and to Comcast Labs and the Lift Labs group for sponsoring the space and food.

And of course the speakers:

Looking forward to the next one đź‘‹

Some gotchas when using Firebase Dynamic Links

For the last couple of weeks I have been trying to add Firebase Dynamic Links to an app, and it took me way longer than I had originally planned. In this post, I want to share some of the lessons learned.

First: note that there are two kinds of links that you can use:

  1. Dynamic links that you generate in the Dynamic Links section of your Firebase project. These are the same for every user and are great for linking to sections of your app that are identical for everyone. They are quick to set up and probably worth trying before generating the second kind.
  2. Dynamic links generated by a user for another user in a client application. These are custom links relevant only to specific users, so they cannot be generated via the project dashboard.

In my case, I was trying to get #2 working and it proved to be a real bear.
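For context, generating one of these per-user links on the client looks roughly like the following. This is a minimal Kotlin sketch using the Firebase Dynamic Links builder API; the domain prefix, package names and URL are placeholders, not values from my project:

    import android.net.Uri
    import com.google.firebase.dynamiclinks.DynamicLink
    import com.google.firebase.dynamiclinks.FirebaseDynamicLinks

    fun createInviteLink(userId: String) {
        FirebaseDynamicLinks.getInstance().createDynamicLink()
            // The "real" link you eventually want opened, unique per user
            .setLink(Uri.parse("https://my.app.com/invite?from=$userId"))
            // Your *.page.link prefix from the Firebase console (placeholder)
            .setDomainUriPrefix("https://example.page.link")
            .setAndroidParameters(
                DynamicLink.AndroidParameters.Builder("com.example.app").build()
            )
            .setIosParameters(
                DynamicLink.IosParameters.Builder("com.example.app.ios").build()
            )
            .buildShortDynamicLink()
            .addOnSuccessListener { result ->
                val shortLink = result.shortLink // share this URL with the other user
            }
    }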

The problem is that when generating a unique URL, you are essentially doing a couple of handoffs. The first link is managed completely by Firebase (usually a *.page.link URL). When launched, it checks whether the app is installed on the device and redirects to the app-install page if not. If the app is installed, it redirects to the second link, the one you actually want to reach. That second link is often a web address on your own domain, which needs to be correctly configured for deep linking, or else the link will just open that webpage in a browser, which is probably not what you want.
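Before the gotchas, a quick note on the receiving end: your app has to ask the SDK for the link that launched it. Again, a minimal Kotlin sketch with placeholder routing logic:

    import android.app.Activity
    import com.google.firebase.dynamiclinks.FirebaseDynamicLinks

    // Call this from your deep-link Activity's onCreate
    fun Activity.handleDynamicLink() {
        FirebaseDynamicLinks.getInstance()
            .getDynamicLink(intent)
            .addOnSuccessListener { data ->
                // data is null when the app was launched normally
                val deepLink = data?.link // the second, "real" URL you embedded
                if (deepLink != null) {
                    // route to the right screen, e.g. by parsing path/query params
                }
            }
            .addOnFailureListener {
                // log and fall back to the normal startup flow
            }
    }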

Gotcha 1: [Android] Make sure you have the SHA256 signature saved in your Firebase project

For the longest time, I didn’t realize that I had only the SHA1 key saved in my project. Deep links don’t work without the SHA256 value for your project. (If you need it, running keytool -list -v against your keystore prints both fingerprints.) Thanks to this answer from StackOverflow.

Gotcha 2: [Android] Make sure your assetlinks.json file is correctly deployed

It took me a while to get this file correctly deployed (mostly my fault). I really should have read the documentation on verified site associations on Android more carefully. You can verify your assetlinks setup via this URL (just replace the domain and, if applicable, port values):

https://digitalassetlinks.googleapis.com/v1/statements:list?source.web.site=https://domain1:port&relation=delegate_permission/common.handle_all_urls

Also remember: if you are using the Google Play Store to sign and release your app, your assetlinks should refer to its key’s SHA256 signature. Conveniently, you can copy the assetlinks file from the developer console itself, under the Setup > App Integrity section.
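For reference, the assetlinks.json file itself is tiny. It is served from https://your-domain/.well-known/assetlinks.json and looks something like this (package name and fingerprint below are placeholders):

    [
      {
        "relation": ["delegate_permission/common.handle_all_urls"],
        "target": {
          "namespace": "android_app",
          "package_name": "com.example.app",
          "sha256_cert_fingerprints": [
            "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
          ]
        }
      }
    ]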

Gotcha 3: [Android] Make sure the “autoVerify” attribute in your intent-filter is set to true

Not sure how I missed this, but it took a long time to find:

<intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data
        android:scheme="https"
        android:host="my.app.com" />
</intent-filter>

iOS:

Surprisingly, as frustrating as getting the Android version to work was, the iOS integration was much simpler. Just following this video helped a lot!

Hope some of this info helps you if you are using Dynamic Links in your app.

Thoughts on Swift

For the last five months, I have been working on an iOS project using Swift. It’s been an interesting experience. While the language has some parts I like, I mostly feel disappointed by its complexity because, honestly, I don’t see how it helps me write apps faster.

A friend of mine described Swift as a “Mirror Programming Language”: everyone who looks at it sees what they want to see, which I find pretty true. I have had JavaScript developers say it looks very similar to JavaScript, Scala devs see it like Scala, Ruby devs see it like Ruby, and so on. To be fair, Swift probably took elements from all of the above, but it’s still interesting to hear the conversation.

My Hopes for Swift

I attended WWDC this year and was there during the Keynote when Apple announced Swift. The moment they showed a Swift program with variables defined with the “var” keyword, I got really excited. I am actually a fan of JavaScript, which I generally recommend as a first language for folks trying to get into programming (my only gripe has been the lack of a formal definition for “classes”, which seems to be coming in ECMAScript 6). Additionally, Apple introduced “Playgrounds”, an interactive workspace heavily inspired by Bret Victor’s work. Bret Victor is another of my heroes, one of the few developers who questions why programming today is still stuck in the text-and-compiler metaphor invented over 40 years ago. If you haven’t already, do watch the video below where he goes through some of his thinking.

Between a JS-like syntax and an interactive, playful workspace, I thought Apple had finally cracked it and democratized programming. On the flight back, I was sure that Swift would be the programming language I recommended to students going forward.

Working with Swift

Working in Swift these last few months, my opinion has changed a lot. The simplicity that (I thought) Swift promised never really materialized, and I find the language a lot more laborious to work with than even Objective-C, which I generally liked.

  • Optionals?: Swift introduced Optionals, which let you explicitly declare that certain variables may not hold data at all times. I am not sure what kind of bugs this is helping me avoid, and I am tired of unwrapping Optionals all over my code. Additionally, Optionals introduce a whole new slew of potential errors.
  • Type casting in Swift sucks. Swift does nothing automatically. Want to add an Int to a CGFloat? Well, make sure you convert your Int to a CGFloat yourself. This gets very annoying when you want to do things like manipulate view dimensions by multiplying by and adding/subtracting constants. I have reached a point where I only do one simple math operation per line.
  • Unexpected types: Why the hell does array[1..10] return a Slice object that you have to cast back to an Array?! If I am asking for part of a collection, just return it as the same data type.
  • Way too much: There is a lot of smart in Swift, and I am sure it attracts a certain kind of personality. Operator overloading, literal convertibles, and so on. Personally, I find very little of that really valuable.
  • Readability: Personally I ding a programming language for every meta character in code (?, !, etc.) as I think they generally hamper readability. Swift has a lot of that.
  • Xcode 6 is terrible: Xcode has gotten really bad. The SourceKit editor crashes all the time, errors make no sense, and quick fixes don’t actually fix (video below). It’s surprising how poorly Xcode stands up to other modern IDEs.

Swift has some nice parts too. Playgrounds and the REPL are actually useful for debugging small pieces of code, and the lack of header files is a blessing, but beyond that, I am not too excited by it. To me Swift is a disappointment: something I had hoped would open mobile development to a larger pool of people just getting into programming. Instead it’s another language that seems to have been developed by very smart people for very smart people. Nothing wrong with that, it’s just not what gets me excited.

I mostly agree with Marco on this when he said:

Swift looks interesting, but in all of Overcast’s development so far, I’ve never run into a problem that’s the language’s fault that Swift would have handled better. It appears to solve problems I don’t have, to gain small (and still theoretical) optimizations that I don’t need, at the expense of many Objective-C features I really like.

Further reading:

Standardizing Application End User Licenses

If you have installed an application lately, chances are that one of the first screens you ran into was an End User License Agreement (EULA). While it may not be as gargantuan as, say, the iTunes one, most people don’t bother reading it and just click accept. This numbness to the EULA screen has been bothering me lately. Even though I do it, I have very little idea what I am agreeing to. I assume it’s fair, hoping that if it went beyond what it needed to, I would have heard about it on the internet.

As an app developer, I also worry from the other side of the fence. I want users to know exactly what they are getting and to use my apps only if they are comfortable sharing information with them. Additionally, I am not a lawyer and don’t really want to write my own EULA. My apps usually do things very similar to other apps, and I just want my users to know that.

What’s interesting is that there is another domain where developers run into licensing often: using third-party libraries. There isn’t much confusion there anymore. A handful of popular licenses cover most cases: the MIT license, the Apache license and the GPL are the ones I run into most often, and I am careful to only use appropriately licensed libraries in my software. Yet no such named licenses seem to be shared across end-user products today.

I would love to see a few standard licenses appear that encompass certain rights and privileges. For example, an “Apache Standard App License” under which the app will not hold any user data but the developer is not responsible for damages to the device or injury to the user while using the app, or an “Apache Social App License” that says the app will store my friends’ data and photos for a certain period of time. These would also be great guidelines for independent developers on how they should hold and use end-user data.

Air for mobile’s weird touch implementation

Let me begin by saying that after playing with Adobe AIR for mobile for the last couple of weeks, I have been really pleasantly surprised by its general performance. I have done quite a bit of mobile development of late on both iOS and Android, and AIR might actually be a contender for my next project.

That said, I am kinda surprised by the touch event implementation. I am hoping someone will correct me if I am missing something, but here is what I am seeing so far.

In AIR, you can now choose to set a multitouch input mode to either intercept raw touch events (MultitouchInputMode.TOUCH_POINT) or have touch events mimic mouse events and receive separate gesture events from the Flash player when gestures are performed (MultitouchInputMode.GESTURE). Note: when a gesture begins, you stop getting mouse events until the gesture has completed.

My first problem is that you can pick one or the other, but not both. The former gives you raw touch events, so you can see when more than one touch is on the stage, but you have to write your own code to define what a gesture is. The latter gives you gestures, but you’ll never know when more than one finger is on the stage. What if you want to track touch points independently until a gesture begins? You are on your own there.

Additionally, there is no information on touch positions in the Gesture event. There is a localX and localY, which I presume mark the point between the two touches, but they seem to be the x and y of where the gesture began and barely change as the gesture progresses (I tried reading these values while panning halfway into a pinch, and the change in values wasn’t representative of how much my fingers moved).

Also, gestures like zoom and pan work independently but not together. If you are zooming (pinching) into an image with two fingers and then start moving both fingers in one direction without changing the distance between the touch points (a pan), you don’t get the pan gesture events. This is unlike the behavior of most apps that allow zoom and pan.

At this point I started looking at reading raw touch point data. Here is another implementation gap: AIR requires the developer to keep track of every touch point (identified by touchPointIDs) themselves. There is no data model you can query on the AIR player that gives you an array of current touch points. This is irritating and smells of poor API design decisions. Compare this with iOS’s API for touches, where I get the set of all touch objects every time touches begin or change:


- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
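Android’s MotionEvent offers the same kind of queryable model, for comparison; here is a minimal Kotlin sketch (the function name is mine, just for illustration):

    import android.view.MotionEvent

    // Every MotionEvent carries ALL currently active pointers, so you can
    // enumerate them without maintaining your own touch-point registry.
    fun logPointers(event: MotionEvent) {
        for (i in 0 until event.pointerCount) {
            val id = event.getPointerId(i) // stable id per finger across events
            println("pointer $id at (${event.getX(i)}, ${event.getY(i)})")
        }
    }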

Anyway, that’s as far as I have gotten so far. Maybe I am missing something, but it seems that if you are building something significant with AIR for mobile, you might need a custom non-Adobe gesture library.

Anyone know of a good one?

Location Check-Ins are the new Photos

Of late I have started using quite a few location-based services like Foursquare, Foodspotting, etc., and every day more interesting apps crop up. However, the current model for location-based apps requires me to check into each app individually. The problem is that when I am at someplace interesting enough to check in to, I am usually with people, and I can only take so much time peering into my phone before coming across as rude to the rest of the group.

Compare this with how I interact with photos today. I take a pic and can then choose among any number of apps on my device that do interesting things with it. Even better, I can manipulate the photos after the event where I took them.

I feel the whole model of checking into a location has matured to the point where it can graduate from the app level to the platform level. I would much rather bring up some native location app and check in on my phone. The check-in becomes an actual object, like pictures or music, and it is owned by me. It contains the location information as well as the time. Once checked in, I can “share” my check-in with any apps that can do something with it: forward it to Foursquare to inform all my friends I am there, to Foodspotting to discover what’s good to eat there, and to any other app that may need that info.

This model has a number of advantages to it:

  • I build up a location history that is no longer trapped inside one app or service.
  • I can choose to share my check-in with apps later (for example, if my only motivation was to get some virtual points for being there, I can check into that service the next day).
  • I can check into multiple services in poor-connectivity locations by pulling in map images etc. only once.

Of course there are certain challenges here:

  • Some services like Foursquare may want to know immediately when you have checked in, to present local offers etc. This could be handled by the platform broadcasting a system-level event that these apps listen for when a user checks in (see the sketch after this list).
  • Different services have different location data; for example, Foursquare has a lot of geo data while Foodspotting may have better data on restaurants, so there will need to be some standard model for annotating the location with other meta information while the user is locating himself on the map. For example, the map the user is zooming into could show pins from different services with different icons.
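To make that first point concrete, here is a purely hypothetical Kotlin sketch of what listening for such a platform-level check-in event could look like; the action name and extras are entirely made up for illustration:

    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent

    // Hypothetical: no platform broadcasts this action today
    const val ACTION_CHECKIN = "com.example.platform.action.USER_CHECKED_IN"

    class CheckinReceiver : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (intent.action != ACTION_CHECKIN) return
            val venue = intent.getStringExtra("venue_name")
            val lat = intent.getDoubleExtra("lat", 0.0)
            val lng = intent.getDoubleExtra("lng", 0.0)
            // a service like Foursquare could surface local offers here
        }
    }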


Hopefully platforms like Android and iOS add this capability as a core part of the OS, much like we use maps today. Till then, using more than one or two location services at any point is pretty difficult.

Thoughts on inter-app communication and Siri-izing Android apps

Coming from developing iOS applications, the concept that struck me as most interesting in Android app development was the way it handles inter-app communication. For the uninitiated, here is a simple way to understand how Android apps work:

  • Each Android app is composed of a few different parts: Activities, Services, ContentProviders and BroadcastReceivers.
  • An Activity represents a single screen with a user interface. Activities usually constitute one complete “action”, like signing up or updating a status.
  • Users transition between Activity screens while performing a task. This is made possible by the Activity sending asynchronous messages called Intents to the Android system.
  • Intents trigger the next appropriate Activity. So, for example, clicking on the share button sends a Share Intent to the system, which then pulls up the appropriate share Activity.
  • Activities from different installed applications can be mixed and matched to let the user complete a task. In the case of “Share”, for example, apps like Tweetdeck or Facebook register their own Share Activities so that the user may pick the one they actually care about.

The last point makes for very interesting scenarios. It means that app developers can delegate certain responsibilities to other apps the user may have installed. This is also what allows users to swap out the default apps for others. Prefer the new Firefox browser to the default Android one? Check the box that always resolves the browser intent (specifically the ACTION_VIEW intent) to Firefox, and you’ll never see the native browser again. While the core Android intents are decently documented, there are even efforts like OpenIntents.org that try to provide a central location for information on intents offered by third-party applications.
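As a concrete example, here is roughly what firing that Share intent looks like (a minimal Kotlin sketch; any installed app whose Activity declares a matching intent-filter shows up in the chooser):

    import android.app.Activity
    import android.content.Intent

    fun Activity.shareText(text: String) {
        val send = Intent(Intent.ACTION_SEND).apply {
            type = "text/plain" // receiving apps filter on MIME type
            putExtra(Intent.EXTRA_TEXT, text)
        }
        // The system resolves every Activity that handles ACTION_SEND and
        // lets the user pick one (Tweetdeck, Facebook, email, ...)
        startActivity(Intent.createChooser(send, "Share via"))
    }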

When I first saw the Apple Reminders app introduced at the iOS 5 keynote, it made no sense to me. Why build a mediocre reminders app when the iOS App Store is full of really well-done todo apps? It wasn’t until I saw Siri at the 4S launch keynote that it clicked: Reminders is the perfect use case for voice recognition, and Apple needed an app to respond to Siri’s “add reminder” trigger. Since then, I have seen a lot of developers get hopeful about when Apple will allow third-party apps to be Siri-enabled, but I have a hard time figuring out how exactly that would work. In Android I can imagine a central app dispatching custom Intents, like a system-wide “Reminder” intent or other custom events that multiple apps could wait on, but I don’t think any such concept exists in iOS today (hmm, the more I think about it, it could probably work using something like UIApplicationDelegate’s openURL method and a custom URL scheme).

The video below by the folks at Remember The Milk shows how to add events to their app via Siri, but if the instructions are anything to go by, it’s a pretty smart hack, though a hack nonetheless. I am also not sure whether your tasks get pushed to both RTM and Apple Reminders, and if they do, I doubt the “done” action is synchronized either.

Compare that with the second video here, done by me using Vlingo, an off-the-shelf Android personal assistant app (very similar to Siri and surprisingly decent). The video triggers the “Share” intent, at which point I can pick any app from my list of installed apps that responds to that intent. Note that I could also have checked the checkbox to always go to Remember The Milk (actually, my preferred todo app right now is Astrid), but it’s pretty cool to see that not only did the task successfully get added, but so did the day (“Tomorrow”), which is a separate form element in the “Add Task” Activity.

This kind of inter-app play makes for some very interesting possibilities. It’s unfortunate that Apple seems to be going in the exact opposite direction with apps on its platform. Apps on iOS are already pretty isolated from one another, but now those rules are also being imposed on desktop software: a place where they make even less sense.

On the flip side, there are also projects that want to extend the concept of intents to the entire web. If you haven’t seen it yet, check out WebIntents.org. I am pretty excited about this and hope it makes it. It looks like at least Chrome and Firefox are looking to support that kind of mechanism for inter-app communication.

Update/epilogue:
I first got really interested in inter-app communication when I was playing around with Macromedia Central back in the day. Central was Macromedia’s first try at Flash-based desktop apps and never really made it out of developer preview, but it had some great ideas on inter-app communication.

Here is a snippet from their whitepaper on Central:

Hey Adobe, can we get this back in a future version of AIR?

Presentation: 5 Ways iOS is better and worse than Flash

Embedded below is my presentation for tonight’s Philadelphia Flash and Flex User Groups.

Thanks to everyone who could make it. It was fantastic to see the local Flash community after way too long :).

The grand 2010 recap post

Wow, can’t believe it’s that time already.

2010 was a pretty great year for me. In November 2009 I moved to the User Experience team, hoping to be the voice of technology as new projects were conceived and features enhanced. So the year began with me learning the workings of the UX team, which was fascinating. The creative process is, not surprisingly, very different from the engineering one, and sitting in those sessions was ridiculously educational. CIM has some pretty fantastic Design and IA folks, and I got to learn quite a bit about concepts such as mental models, task-oriented design, user personas, etc. I also ended up reading a few books for my new role (of which About Face might be the best one; I recommend it strongly to anyone in the UX/UI domain) which I never would have done if I hadn’t moved to this team.

Suffice it to say, if the year needed to be summarized in a word, it would be “educational” 🙂

Around February, I also got involved in a prototype for the project that is now the Xfinity iPad app. As a UX prototyper, I was on the three-person team that built the demo shown at the NCTA event. After that I ended up working with the brand-new Advanced Engineering team as we rushed to get the final product out the door. My role on that team was not really UX but implementation. After the initial learning curve of Objective-C, I got pretty comfortable with it and realized that user interface frameworks and technologies are pretty similar even with syntactical differences. I wrote about the whole Xfinity app development experience here. In the last few months, I have returned to prototyping, but these days the prototypes have all been functional additions to the iPad app itself.

While I didn’t write as much code for it as for the iPad, I have also become extremely passionate about the Android platform. While less polished than its i-cousin, the deeper I look into the architecture, the more awesome it seems. I built a couple of apps for internal demos that ran on Android (in Java), and that was fun. I feel less proficient in the Android UI framework than in UIKit, just by virtue of time spent developing on it, but it’s something I hope to get better at next year. The Android world definitely lacks the sexy factor of the iDevices, and I am really hoping that changes with both upcoming OS updates and developer community maturity. I also played around with AIR for Android a little, and it seems pretty decent; I am working on a project using it now. The biggest thing it has going for it is not just the familiarity of ActionScript but also the tooling of the Flash IDE. As much flak as it gets, the Flash IDE is rather fantastic for laying out visual assets for an app. I really wish AIR for Android played nicer with the core Android framework, though there are ways of doing that, as mentioned in this post by Elad Elrom.

EspressoReader, my AIR app for consuming news (currently a Google Reader client), continues to evolve. Just building something like that has taught me so much about the way we consume information. It has also gotten me hooked on books about collective intelligence and text analysis. I will release a new version in the coming weeks that I am really excited about. So if you haven’t tried the app yet, give it a try by installing it from this link to the Adobe AIR Marketplace.

I ended up travelling for work quite a bit this year, attending some pretty fantastic conferences like the NCTA Cable Show, the Web 2.0 Summit and TechCrunch Disrupt. This is a change for me, as these were more about business and strategy than my usual fare of tech conferences. From my schedule in January, it looks like this will continue. Btw, I am heading over to CES, so if you are heading there as well, send me a holler 🙂

Finally, looking ahead, 2011 seems to be off to a great start. There are a lot of changes afoot which I’d love to share soon. So stay tuned 🙂

Xfinity TV App is finally in the App Store

It’s pretty awesome when the project you have been working on for so long is finally released. Last week, the Xfinity TV app finally became available on the Apple App Store, and it has been among the top 10 most downloaded iPad apps pretty much ever since. The app is free and pretty fantastic, turning your iPad, iPhone or iPod Touch into a virtual remote control for your Comcast Xfinity cable box.

If you haven’t used it yet, here is a video walkthrough of the app in all its glory:

Building the app was a huge learning experience for me. Before this project I knew very little about iOS and Objective-C, but working with some of the smartest developers on the platform got me learning iOS app development pretty fast. At this point, I have worked with so many UI technologies and frameworks that I was able to apply a lot of my learnings almost directly (with syntax differences, of course). For example, one feature I was responsible for was the OnDemand listings view in the app. To implement the view that shows the thousands of assets available on Xfinity, I built a virtualized list that recycles item renderers to manage memory effectively. The API and implementation of my component were heavily influenced by the Flex and OpenPyro List controls (I have previously discussed virtualized list implementation in ActionScript here).
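The shipped component was Objective-C, but the recycling idea itself is small. Here is a toy Kotlin sketch of the bookkeeping (not the actual implementation; all names are mine):

    // Only rows intersecting the viewport get a live renderer; renderers
    // scrolled out of view go back to a pool and are reused for new rows.
    class VirtualizedList<T>(
        private val items: List<T>,
        private val rowHeight: Int,
        private val viewportHeight: Int
    ) {
        private val pool = ArrayDeque<Renderer<T>>()          // off-screen renderers
        private val active = mutableMapOf<Int, Renderer<T>>() // row index -> renderer

        fun onScroll(scrollY: Int) {
            val first = scrollY / rowHeight
            val last = ((scrollY + viewportHeight) / rowHeight).coerceAtMost(items.lastIndex)
            // Recycle renderers that scrolled out of the visible range
            active.keys.filter { it < first || it > last }.forEach { i ->
                pool.addLast(active.remove(i)!!)
            }
            // Reuse (or create) renderers for newly visible rows
            for (i in first..last) {
                active.getOrPut(i) {
                    (pool.removeFirstOrNull() ?: Renderer()).also { it.bind(items[i], i * rowHeight) }
                }
            }
        }
    }

    class Renderer<T> {
        fun bind(item: T, y: Int) { /* position the row view at y, render item */ }
    }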

Btw, I have to say, if you are just getting started with iOS, the Stanford University video series on iPhone app development is a great place to start.

The Team

iPad Dev Team

Building the app was quite a fantastic (even if extremely hectic) experience, but I have to say the best part was working with an awesome team. The app definitely had us working late nights on multiple occasions, but that’s a lot more manageable when you work with people you actually enjoy hanging around. The picture below is of the app team, but behind the scenes there was a huge number of engineers across the country handling the network and set-top box updates that allowed the app to work.

The video below has Sean Brown (Sr. Director of the Advanced Applications Engineering team) talking about the development process of the app. The video occasionally cuts to the co-working space where the Engineering, User Experience and QA teams work together.

The Response

… has been fantastic, and that’s amazingly gratifying. I have embedded some of the tweets I found doing a Twitter search for “Xfinity Remote”, and it’s awesome to see how people react to it:


For more formal reviews, check out some of the posts mentioned below:

As awesome as the app is, there are a lot more features being added to it now. We recently demoed the PlayNow functionality, coming to the app in an upcoming release, at the Web 2.0 Summit. Embedded below is Neil Smit, President of Comcast, giving a walkthrough of that version at the event:

If you want to keep up with app updates, feel free to follow the XfinityTV app on Twitter. Also, if you are an Android user (like I am), rest assured that the Android version is well on its way :).