Creating Custom Layouts for Android

There are a lot of great articles on Android development on the web, but one area that doesn’t feel explored enough is creating custom layouts. To be fair, the Android framework does spoil you with a bunch of layouts that fit most conventionally designed interfaces. The problem comes when you are faced with a slightly unconventional UI. A basic knowledge of how layouts work can help you avoid creating a mess of nested layouts when a quick custom one would have sufficed. The design below, for example, was accomplished with one custom layout object.

I have to admit I am not a fan of some fundamental choices made on the UI architecture side. For one thing, layouts are Views (or more specifically ViewGroups) themselves: they not only position and size the elements they are responsible for, but also add them as children to themselves. This means that if you wanted to create an experience where you start with a grid of photos and, when the user clicks on one of them, lay the photos out in a row, you can’t really do that well, since the photos have to be unparented from one View (the GridLayout) and then added as children to another. And don’t hope for any animations in between. In the coming months I hope to create an open source project with some custom layouts that separate a view’s parent from its layout. (Adobe Flex went the same route between Flex 3 and Flex 4: Flex 3 had HBoxes, VBoxes, etc., but they were deprecated in Flex 4 in favor of Spark layouts that could be attached dynamically. If you are a Flex software engineer, the Android architecture will look very familiar.)

But this post explains the layout architecture as it is. So let’s begin. The attached code blocks are from a custom layout example I wrote: a very simplistic LinearLayout clone that sizes its children equally. You can grab the project on Github.

The base class for a layout is ViewGroup, which extends View and adds hooks for things like addView. To create a custom ViewGroup, the only method you need to override is onLayout. onLayout is triggered after the ViewGroup has finished being laid out inside its own container ViewGroup and is now responsible for laying out its children. It should call the layout method on each of its children to position and size them: the left and top parameters determine the child view’s x and y, while the right and bottom determine its width (right - left) and height (bottom - top).

@Override
protected void onLayout(boolean changed, int l, int t, int r, int b) {
    int itemWidth = (r - l) / getChildCount();
    for (int i = 0; i < getChildCount(); i++) {
        View v = getChildAt(i);
        // Child coordinates are relative to this ViewGroup, so the
        // vertical bounds are 0 and (b - t) rather than t and b
        // (fixes the bug Nathaniel Wolf mentioned in the comments below).
        v.layout(i * itemWidth, 0, (i + 1) * itemWidth, b - t);
    }
}

One thing to note is that at this point you are working in raw pixels, not density-independent units like dips. But besides that you should be fine.

The problem here is that while this will lay out any simple views (i.e., views that aren’t layouts themselves), any child layout objects won’t be visible at all. This is because a child layout has no idea at that point how it should lay out its own children: it hasn’t been measured at all, so it reports a measured width and height of 0. To do a layout correctly, you need to make sure that you also override the onMeasure method in your layout and call the measure method on each of your children appropriately.

During measure, you first need to calculate your own measuredWidth and measuredHeight and then, based on those, tell your children how they need to size themselves. For example, a horizontal LinearLayout might say: “My measuredWidth is 100 pixels and I have two children, so each must measure exactly 50 pixels.” It does this by passing a MeasureSpec, which defines how the child should interpret the measurement it receives: EXACTLY, AT_MOST or UNSPECIFIED. The child view then uses those cues to compute its own measuredWidth and measuredHeight (usually by calling setMeasuredDimension at some point in its onMeasure).
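Under the hood, a MeasureSpec is just an int with the mode packed into the top two bits and the size in the remaining thirty. Here is a standalone sketch of that packing; the constants mirror the real View.MeasureSpec, but this class is only an illustration, not framework code:

```java
public class MeasureSpecSketch {
    // The mode lives in the top 2 bits of the int, the size in the rest,
    // mirroring android.view.View.MeasureSpec.
    public static final int MODE_SHIFT = 30;
    public static final int MODE_MASK = 0x3 << MODE_SHIFT;
    public static final int UNSPECIFIED = 0 << MODE_SHIFT;
    public static final int EXACTLY = 1 << MODE_SHIFT;
    public static final int AT_MOST = 2 << MODE_SHIFT;

    // Pack a size (lower 30 bits) and a mode (upper 2 bits) into one int.
    public static int makeMeasureSpec(int size, int mode) {
        return (size & ~MODE_MASK) | (mode & MODE_MASK);
    }

    public static int getMode(int measureSpec) {
        return measureSpec & MODE_MASK;
    }

    public static int getSize(int measureSpec) {
        return measureSpec & ~MODE_MASK;
    }
}
```

So a parent telling a child “you are exactly 50 pixels wide” is just `makeMeasureSpec(50, EXACTLY)`, and the child unpacks the message with `getMode` and `getSize`.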


@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {

    // We need to call setMeasuredDimension() at some point. Let's just
    // call the parent View's method, which does:
    //     setMeasuredDimension(
    //         getDefaultSize(getSuggestedMinimumWidth(), widthMeasureSpec),
    //         getDefaultSize(getSuggestedMinimumHeight(), heightMeasureSpec));
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);

    int wspec = MeasureSpec.makeMeasureSpec(
                getMeasuredWidth() / getChildCount(), MeasureSpec.EXACTLY);
    int hspec = MeasureSpec.makeMeasureSpec(
                getMeasuredHeight(), MeasureSpec.EXACTLY);
    for (int i = 0; i < getChildCount(); i++) {
        View v = getChildAt(i);
        v.measure(wspec, hspec);
    }
}

Note that measuredWidth and measuredHeight are cues for the parent layout when it’s laying out its children. It might still decide to ignore them and lay the children out however it likes, since it gets to define the left, top, right and bottom values during onLayout, but a good citizen of the layout world will probably not ignore them.
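For reference, the “good citizen” behavior on the child’s side looks roughly like the framework’s View.resolveSize helper. Here is a standalone sketch of that resolution logic, with the three modes shown as plain ints purely for illustration (the real constants are bit-packed into the MeasureSpec):

```java
public class SizeResolver {
    public static final int UNSPECIFIED = 0; // parent imposes no constraint
    public static final int EXACTLY = 1;     // parent dictates the size
    public static final int AT_MOST = 2;     // parent sets an upper bound

    // Given the size a child wants and the parent's constraint, return
    // the size the child should report via setMeasuredDimension().
    public static int resolve(int desiredSize, int specMode, int specSize) {
        switch (specMode) {
            case EXACTLY:
                return specSize;                        // no say in the matter
            case AT_MOST:
                return Math.min(desiredSize, specSize); // capped by the parent
            default:
                return desiredSize;                     // free to pick
        }
    }
}
```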

I am just getting into this, so I may have missed something; please drop me a comment or let me know on Google+ and we can have a discussion.

[Update 1] I have written an extension to this post that covers adding custom attributes and LayoutParams to your layout class

[Update 2] Great talk on this topic from Google I/O 2013

Notes from the AnDevCon III Conference

For the last few months I have been doing quite a bit of work on the Android platform. It’s no secret that I am a big fan of the Android OS, both technically and philosophically. Needless to say, I was really excited to attend the AnDevCon III conference earlier this month, both for the opportunity to learn some new things about the framework and to interact with some of the stars of the developer community. The event, held in Burlingame (SF), did not disappoint. It was amazing to meet folks like Chet Haase, Romain Guy, Jake Wharton, Mark Murphy, etc., hear them speak, and meet some awesome local devs doing amazing work (and thanks to Twitter, I can stalk them forever ;) ).

I took a bunch of notes which are available in their very raw form on my shared Evernote Notebook here. While most of the sessions were pretty good, some of the more memorable ones are listed below:

  • Romain Guy and Chet Haase‘s talk on best practices for Android UI was probably the most educational, giving me a bunch of tips on improving some of my apps’ behavior.
  • Chiu-Ki Chan‘s talk on Android Custom Components was probably the one that I was most desperately looking for. The talk was fantastic and we hit it off pretty well. She is a coding machine and already has another app out there on the Sony Smartwatch she won there.
  • Kirill Grouchnikov‘s talk on Responsive Mobile Design for Android was great and laid out some great tips on how to create screen-size aware interfaces for Android applications.
  • Mark Murphy‘s talk on App Integration was very eye-opening. I was already pretty aware of the Android Intent system that allows data to flow between applications seamlessly, but he laid out a bunch of other ways apps can be integrated with each other, like sending a complete UI to another app using RemoteViews, app plugins, etc. Also, I didn’t realize Mark was a local guy, so maybe we can coax him into coming to one of the Android group’s talks in the near future.
  • Jake Wharton gave a great talk on his libraries. The guy is a celebrity in Android circles and I already use his ActionBarSherlock library on some projects but I didn’t realize he had ported Android 3.0’s animation system to work on pre-HoneyComb devices as well via his NineOldAndroids library (Very useful for my current project). His other projects like ViewPagerIndicator and HanselAndGretel were pretty cool as well.
  • Blake Meike‘s talk on Concurrency in Android was probably the most packed session besides the official Google talks. Listening to his talk I realized how little attention I had been paying to possible concurrency issues in some of my apps. There was great back and forth with the audience in the session on a bunch of details on Android application lifecycle.
  • Aleksandar Gargenta‘s talk on Android Services and how IPC works across the Android system at the lowest level was fascinating. While I don’t see myself ever writing or needing to know the details of the lowest-level Android services, it gave me a much better understanding of what Android/Linux is doing when different actions are performed.
  • Joshua Jamison gave a great talk on Advanced Design Implementation with some very usable tips on faithfully translating designs to Android applications.

Besides the talks, I really loved the HTC and the Barnes & Noble keynotes. HTC is doing some amazing work with their phones, carefully navigating the waters of adding functionality to their line while avoiding fragmentation, by creating a set of APIs only for their phones that differ very slightly in functionality from the core Android OS. The entire HTC keynote was broadcast to the projector from their phone. They also introduced the new APIs in the latest iteration of Sense, including the LockScreen, Beats Audio and video call APIs. B&N gave some rather interesting statistics on the Nook audience, like over 85% of their audience being women. Android has had a notoriously hard time appealing to women, so this statistic was interesting. The Nook marketplace is also apparently very profitable, which I heard from some other devs there as well (so it wasn’t just PR), though the conspiracy theorists attribute that to their curated market being fairly small. Their talk on thinking of apps as content (like books) was pretty good. Though the best part may have been winning the Sony Smartwatch draw at the end of the conference. The watch actually has a very interesting architecture, with most of the user interface generated on the paired mobile phone and the watch itself being just a dumb screen. I hope I get some time to play with the SDK soon.

I missed most of the after-event parties since I had a truckload of work to do for a project due the very next week (I was sneaking off to write iOS code, which is kinda ironic), but did make it a point to attend Square’s dessert bash. In my book they also won the award for best schwag t-shirt ever.

P.S: My friend and fellow Android Alliance organizer Chuck Greb has already posted his notes from the event on his blog.

Friending non-humans: A lazy foodie’s hack for the Android address book

Probably not one of my best characteristics, but I do tend to order in a lot of food from a variety of places near my apartment. There are enough places around where I live that I end up trying a bunch of them, and then when a familiar menu comes up again I am often left wondering “what did I eat there the last time, and how good was it?”.

I was half tempted to create a food journal app specifically for delivery food. Think of it as a Foodspotting / Foursquare app, but more like a personal diary than a social app. Neat idea I guess, except I have almost no time these days. But then I got thinking: the People app on Android exposes a bunch of new social APIs that I felt I could maybe use in some manner. Could the Android People app be used to “friend” my favorite delivery places?

One of the things Android has going for it here is the Intents system and the inter-app communication that is very core to the OS. This means quite a few applications are open to external data or expose their own data to external apps. My end “hack” basically involved creating a contact to represent each of my more regular delivery places and wiring it to different apps via URL patterns those apps register (which I could find online). Since you can add many web links to each contact, I added a couple:

  1. A link to the mobile foursquare URL for the location. Since the Foursquare app registers itself as a handler to those links, clicking on the link on the contacts app launches the foursquare app for that location.
  2. A Google Docs file url where I can write about the dishes I ate at that place. Once again, since the Google Docs (now Drive) app registers itself as a handler for those urls, using that on the phone works well enough.

Check out the video below to see it in action:



My original goal was to tie it to my Foodspotting profile, but that app is completely closed and does not respond to any local intents or URLs.

For the most part this does let me do what I wanted to do: look up the latest reviews/tips about a place, put down my thoughts about different dishes I tried there, and then call them if I feel like it. In a more complete app that leveraged the People app’s social APIs even more, the photos representing the places could also change, maybe representing a special dish or something.

But it does make me think the idea of contacts really does need to expand beyond the people in my life. I mean by definition, isn’t anything you can call or contact for more information a contact? Why can’t I  save a particular restaurant from Foursquare directly as a contact to my address-book since I usually call the place and not really a person there? Extend this thinking and you realize there are a bunch of “things” you often call: the taxi service, the hospital, the utility company, etc.

A lot of these thoughts are also probably a result of my reading a particular thesis by John Kestner, an MIT student, on creating “A Social Network for Lonely Objects“. It’s a fascinating read and I definitely recommend it.

All of this also involves a rethinking of fundamental parts of the data that define a contact. VCards are a very human concept but we need to morph that construct into a more unstructured form, so that a contact of a particular type can create data fields relevant to it. The internet is already evolving to embrace unstructured data with NoSQL databases and such.

Kinda ironic that Android recently renamed the Contacts app to the People app in ICS ;)

Ice Cream Sandwich designs on Android

Android ICS Apps

The biggest critique of Android for the longest time (besides software update uncertainties) has been the poor design of most Android apps. While some app developers did do a good job with their apps, the lack of any kind of design guidelines meant every developer was finding his own way to create a good experience on the platform. This also meant that apps tended to look very different from each other.

The introduction of Ice Cream Sandwich design guidelines at the beginning of this year seemed to be a step in the right direction, but in the back of my head I was worried it could be a little too late. Fast forward merely 3 months and things are very different. More and more applications are conforming to the new guidelines and they look GREAT. The great thing is that with frameworks and libraries like the Android Compatibility Library,  ActionbarSherlock, ViewPagerIndicator and HoloEverywhere, the design guidelines can be used for pre-ICS apps as well.

Roman Nurik seemed to have the same thoughts in mind, so he started a Google+ thread on it. I wanted to take a few screenshots and share them here, but as I started saving the images, there just seemed too many to list. So instead I saved a bunch of apps to a Tumblr account. Check it out and follow it if you’d like to track updates. I’ll keep adding to it as I find more, but if you see an app that you’d like to see there, drop me a comment here or message me on Twitter or Google Plus.



Wow, my first Emmy,

Rather, Comcast’s Emmy for “Outstanding Achievement in Engineering Development” for the iPad app I worked on for most of 2010. This one is my copy, which now adorns the top shelf in my office.

Bonus points for being one of the first 3 engineers on the project!

You can find a more complete writeup on the award on the Comcast Voices blog. I haven’t been involved in that app since Jan 2011 and was recently informed that most of my code has finally been refactored out with newer/better implementations, but here’s hoping there is at least one dead code path that the team forgot about that compiles to the production app that still has my name on it ;)

Introducing SuperLoader: a better AS3 library for fetching images

This post is way overdue, but in any case I’d like to introduce SuperLoader, an image loading library for AS3. The library grew out of my work building the magazine view for EspressoReader. The magazine view builds a visual grid from the images found in a collection of blogs you are browsing. Here is a screenshot of the experience:

While building the Magazine View, I was faced with a problem. Each “page” of the view read 10 blog entries and then needed to find the most relevant image to show in the interface for each entry. Posts tend to have multiple images, and a lot of them are not usable: the image may be a 1×1 tracking pixel used for analytics, or may be too small or too large to be effective. Loading each image using a Loader was not feasible, since a Loader only gives dimension information once loading has completed, and I didn’t really want to load a bunch of images completely only to discover they weren’t useful.

The SuperLoader library takes care of this by actually parsing the binary data in the URLStream it uses to load the images, so it can identify very early in the loading process the type and the size of the image being loaded (since that information is available in the first few bytes of the incoming data). Moreover, it also includes an API for immediately canceling the load process so that you can jump to the next image in your list. The API looks something like this:


var loader:SuperLoader = new SuperLoader();
loader.addEventListener(SuperLoaderEvent.LOAD_COMPLETE, onLoadComplete);
// type/size listeners are registered the same way; see the Github
// project for the exact SuperLoaderEvent constants and method names

private function onImageTypeIdentified(event:SuperLoaderEvent):void{
  // the image type (JPG/PNG/etc.) is known after the first few bytes
}

private function onImageSizeIdentified(event:SuperLoaderEvent):void{
  if(loader.imageWidth < 20 || loader.imageHeight < 20){
    // too small to be useful: cancel the load and move on
    loader.close();
  }
}

private function onLoadComplete(event:SuperLoaderEvent):void{
  var image:Loader = new Loader();
  // hand the loaded bytes off to a regular Loader for display
}
The library also includes a SuperLoaderQueue object to manage the load process of multiple images.
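As an aside, the early-exit trick works because common image formats put their dimensions near the front of the file. This plain-Java sketch (just an illustration of the technique, not the library’s AS3 code) shows the idea for PNG, where the width and height sit at fixed offsets right after the 8-byte signature:

```java
import java.nio.ByteBuffer;

public class PngSizeSniffer {
    // A PNG file starts with an 8-byte signature, followed by the IHDR
    // chunk: 4-byte length, 4-byte "IHDR" type, then 4-byte big-endian
    // width and height. So the first 24 bytes are enough to know the size.
    public static int[] sniff(byte[] firstBytes) {
        if (firstBytes.length < 24) {
            throw new IllegalArgumentException("need at least 24 bytes");
        }
        ByteBuffer buf = ByteBuffer.wrap(firstBytes); // big-endian by default
        int width = buf.getInt(16);
        int height = buf.getInt(20);
        return new int[] { width, height };
    }
}
```

A 1×1 tracking pixel can therefore be rejected after 24 bytes instead of a full download. JPEG is messier (the dimensions live in a SOF marker whose position varies), but the same streaming idea applies.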

The library is released under the MIT open source license and is available on Github.


Air for mobile’s weird touch implementation

Let me begin by saying that after playing with Adobe AIR for mobile for the last couple of weeks, I have been really pleasantly surprised by its general performance. I have done quite a bit of mobile development of late on both iOS and Android, but AIR might actually be a contender for my next project.

That said, I am kinda surprised by the touch event implementation. I am hoping someone will correct me if I am missing something but here is what I am seeing so far.

In AIR, you can now set the multitouch input mode to either intercept raw touch events (MultitouchInputMode.TOUCH_POINT) or have touch events mimic mouse events and receive separate gesture events from the Flash player when gestures are performed (MultitouchInputMode.GESTURE). Note: when a gesture begins, you stop getting mouse events until the gesture has completed.

My first problem is that you can pick one or the other, but not both. The former gives you raw touch events, so you can see when more than one touch is on the stage, but then you have to write your own code to define what a gesture is. The latter gives you gestures, but you’ll never know when more than one finger is on the stage. What if you want to track touch points independently until a gesture begins? You are on your own there.

Additionally, there is no information on touch positions in the gesture event. There is a localX and localY, which I presume is the midpoint between the two touch points, but they seem to be the x and y of where the gesture began and barely change as the gesture progresses (I tried reading these values while panning halfway into a pinch, and the change in values wasn’t representative of how much my fingers moved).

Also, gestures like zoom and pan work independently but not together. So if you are zooming (pinching) into an image using two fingers and then start moving both fingers in a particular direction without changing the distance between the touch points (a pan), you don’t get the pan gesture events. This is unlike the behavior of most apps that allow zoom and pan.

At this point I started looking into reading raw touch point data, and here is another implementation gap: AIR requires the developer to keep track of every touch point (identified by its touch point ID). There is no data model you can query on the AIR player that gives you an array of current touch points. This is irritating and smells of poor API design. Compare this with iOS’s touch API, where I get the set of all touch objects every time touches begin or change:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;

Anyway, that’s as far as I have gotten so far. Maybe I am missing something, but it seems that if you are building something significant with AIR for mobile, you might need a custom non-Adobe gestures library.

Anyone know of a good one?

Location Check-Ins are the new Photos

Of late I have started using quite a few location-based services like Foursquare, Foodspotting, etc., and every day more interesting apps crop up. However, the current model for location-based apps requires me to check into each app individually. The problem is that when I am someplace interesting enough to check in to, I am usually with people, and I can only take so much time peering into my phone before coming across as rude to the rest of the group.

Compare this with how I interact with photos today. I take a pic and can then choose among any number of apps on my device that do interesting things with it. Even better, I can manipulate the photos well after the event I took them at.

I feel the whole model of checking into a location has matured to the point where it can graduate from the app level to the platform level. I would much rather bring up a native location app and check in at the OS level. The check-in becomes an actual object, like pictures or music, and it is owned by me. It contains the location information as well as the time. Once checked in, I can “share” my check-in with any apps that can do something with it: forward it to Foursquare to inform all my friends I am there, to Foodspotting to discover what’s good to eat there, and to any other app that may need that info.
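Purely as a thought experiment, the check-in object I have in mind is tiny. A sketch in Java (all the names here are mine, not any platform’s actual API):

```java
public class CheckIn {
    public final double latitude;
    public final double longitude;
    public final String placeName;
    public final long timestamp; // epoch millis; the "when" travels with the "where"

    public CheckIn(double latitude, double longitude, String placeName, long timestamp) {
        this.latitude = latitude;
        this.longitude = longitude;
        this.placeName = placeName;
        this.timestamp = timestamp;
    }

    // Any app that can do something with a check-in would implement this,
    // the same way apps register to receive shared photos today.
    public interface Consumer {
        void receive(CheckIn checkIn);
    }
}
```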

This model has a number of advantages to it:

  • I am building up a location history that is no longer trapped inside one app or service.
  • I can choose to share my checkin with apps later (for example if my only motivation was to get some virtual points for being there, I can check into that service the next day)
  • I can checkin to multiple services in poor connectivity locations by only needing to pull in map images etc once.


Of course there are certain challenges here:

  • Some services like Foursquare may want to know immediately when you have checked in to present local offers etc. This could be done by the platform sending a system level event that these apps can listen to when a user checks in to a location.
  • Different services have different location data, for example Foursquare has a lot of geo data but Foodspotting may have better data on restaurants so there will be the need for some standard model of annotating the location with other meta information while the user is locating himself on the map. For example, the map the user is presumably zooming in to can show pins from different services with different icons.


Hopefully platforms like Android and iOS add this capability as a core part of the OS pretty much like we use maps today. Till then, the idea of using more than one or two location services at any point is pretty difficult.

The prettier side of Android

A lot has been said about the lack of UI polish in Android (sometimes rather obnoxiously), but over the last few months I have started debating how much of that is still relevant. When I first saw Android (around version 1.6) I was not a fan, but after a couple of years with it as my primary mobile phone, I realize that while a lot of the applications lack UI polish, there are a lot of parts of Android that can be customized with very beautiful options. Most of the time these are the work of communities of passionate developers and designers and often lack mainstream visibility. That’s what prompted me to write this post.


HomeScreens, Launchers and Icons

Nothing gets the point across as immediately as MyColorScreen, a site dedicated to customized homescreens (regardless of Android, iPhone, etc.). The entries under the Android category, however, completely outweigh iOS, since iOS offers almost no homescreen customization on non-rooted devices. On Android, you can use a variety of launchers, custom icons and homescreen widgets to create a pretty amazing experience. DeviantArt, for example, is a great resource for custom themes and icons for different launchers (for example, LauncherPro themes).

MyColorScreen recently also posted a blog entry on the 10 best customizations for 2011, definitely worth a look for some inspiration.


Lockscreens are another area of extensive visual exploration in the Android community. For the last few months I have been using WidgetLocker as a lockscreen app, which lets me use not only background images but also widgets on my lockscreen. Additionally, WidgetLocker lets you create and apply custom themes. This XDA link has a HUGE list (374 pages) of some good ones.

Recently however I also tried MILocker which seems to have even more polished themes (though lacks Widgets-on-lockscreen functionality). MILocker is a port of the LockScreen app from MIUI Rom for rooted Android devices (I talk a bit about that below)

Core Apps

One of the teams doing awesome visual work is the one behind the Go apps. All of their apps are completely themeable and often replace (or override) the default Android apps. For example, Go SMS (which I used till I recently rooted my phone and swapped it for the new MIUI SMS interface) seamlessly overrides the default SMS app on Android and has some really fantastic themes available for it. Go apps are available for very core Android functions, including keyboards, dialers, etc.

Live Wallpapers

I have also recently come around to Android live wallpapers, which I originally considered useless and mostly a battery suck (turns out the battery consumption is not that bad at all). These are animated backgrounds that you can use on your home screen. While some of them are just beautiful visually, others are actually very functional. For example, I used to use Go Weather, which had different animated backgrounds based on the weather at my location. This kind of ambient information is pretty awesome. More recently I have started using the Aurora Live Wallpaper (see my current homescreen below), since I find it very soothing to look at the Northern Lights every time I am on the home screen ;) .

Custom ROMS

Finally, rooting your phone offers even more customization options, since quite a few ROMs offer themes that can be applied to the whole OS, though I imagine that’s not for the faint of heart. I recently rooted my phone and am now using the MIUI ROM, which is just visually fantastic. It has customized all aspects of the OS, like the system-wide font, notifications and alerts, and the default applications for music, SMS, etc. The video below is a pretty comprehensive walkthrough of MIUI. It’s just fantastic.

As a user interface developer who is passionate about design, I am pretty happy with my current experience on Android, but it took me a while to find a lot of the options to get there. There are a lot of communities like XDA Developers and ColorMeAndroid (on DeviantArt) that most people aren’t aware of. Hopefully these grow and Android gets more designers on the platform. The open nature of the framework and the apps that allow communities to customize them truly make the visual possibilities exciting.

Thoughts on inter-app communication and Siri-izing Android apps

Coming from developing iOS applications, the concept that struck me most interesting in Android app development was the way it handled inter-app communications. For the uninitiated, here is a simple way to understand how Android apps work:

  • Each Android app is composed of a few different parts: Activities, Services, ContentProviders and BroadcastReceivers
  • An Activity represents a single screen with a user interface. They usually constitute one complete “action”, like signing up or updating a status or something like that.
  • Users transition between Activity screens while performing a task. This is made possible by the Activity sending asynchronous messages called Intents to the Android system
  • Intents trigger the next appropriate activity. So, for example, clicking on the share button sends a Share intent to the system, which then pulls up the appropriate share activity.
  • Activities from different installed applications can be mixed and matched to allow the user to complete a task. In the case of “Share” for example, apps like Tweetdeck or Facebook add the appropriate Share Activities so that the user may pick the one that he actually cares about.

The last point makes for very interesting scenarios. It means that app developers can relegate certain responsibilities to other apps the user may have installed. This is also what allows users to switch out the default apps for others. Prefer the new Firefox browser to the default Android one? Check the box that always resolves the browser intent (or specifically the ACTION_VIEW intent) to Firefox, and you’ll never see the native browser again. While the core Android intents are decently documented, there are even community efforts that try to provide a central location for information on intents offered by third-party applications.
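Conceptually, that resolution step is just a lookup from an action to whichever activities registered for it, with a remembered default short-circuiting the chooser. A toy sketch of the idea (Android’s real resolver also matches data URIs, MIME types and categories; this is only the skeleton):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IntentRegistry {
    private final Map<String, List<String>> handlers = new HashMap<>();
    private final Map<String, String> defaults = new HashMap<>();

    // An app declares which actions its activities can handle.
    public void register(String action, String activity) {
        handlers.computeIfAbsent(action, k -> new ArrayList<>()).add(activity);
    }

    // "Always use this app for this action" (the checkbox in the chooser).
    public void setDefault(String action, String activity) {
        defaults.put(action, activity);
    }

    // Resolution: a remembered default wins; otherwise every registered
    // handler is offered to the user in a chooser.
    public List<String> resolve(String action) {
        if (defaults.containsKey(action)) {
            return List.of(defaults.get(action));
        }
        return handlers.getOrDefault(action, List.of());
    }
}
```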

When I first saw the Apple Reminders app introduced at the iOS 5 keynote, it made no sense to me. Why build a mediocre reminders app when the iOS App Store is full of really well-done todo apps? It wasn’t until I saw Siri at the 4S launch keynote that it clicked: reminders are the perfect use case for voice recognition, and Apple needed an app to respond to the Siri “add reminder” trigger. Since then, I have seen a lot of developers get hopeful about when Apple will allow third-party apps to be Siri-enabled, but I have a hard time figuring out how exactly that would work. On Android I’d imagine a central app that dispatches custom intents, like a system-wide “Reminder” intent or other custom events that multiple apps can wait on, but I don’t think any such concept exists in iOS today (hmm, the more I think about it, it could probably work using something like UIApplicationDelegate’s openURL method and a custom URL scheme).

The video below by the folks at Remember The Milk shows how to add events to their app via Siri, but if the instructions are anything to go by, it’s a pretty smart hack, though a hack nonetheless. Also, I am not sure in this case whether your tasks are being pushed to both RTM and Apple Reminders, and if they are, I doubt the “done” action is synchronized as well.

Compare that with the second video here, done by me using Vlingo, an off-the-shelf Android personal assistant app (very similar to Siri and surprisingly decent). The video triggers the Share intent, at which point I can pick any app from my list of installed apps that responds to that intent. Note that I could also have checked the checkbox to always go to Remember The Milk (actually my preferred todo app right now is Astrid), but it’s pretty cool to see that not only did the task get added successfully, but so did the day (“Tomorrow”), which is a separate form element in the “Add Task” activity.

This kind of inter-app play makes for some very interesting possibilities. It’s unfortunate that Apple seems to be going in the exact opposite direction with apps on their platform. Apps on iOS are already pretty isolated from one another, but now those rules are also being applied to desktop software: a place where they make even less sense.

On the flip side, there are also projects that want to extend the concept of intents to the entire web. I am pretty excited about this and hope it makes it. It looks like both Chrome and Firefox, at least, are looking to support that kind of mechanism for inter-app communication.

I first got really interested in inter-app communication when I was playing around with Macromedia Central back in the day. Central was Macromedia’s first try at Flash-based desktop apps and never really made it out of developer preview, but it had some great ideas on inter-app communication.

Here is a snippet from their whitepaper on Central:

Hey Adobe, can we get this back in a future version of AIR?