Last week I was invited to give a talk at the PHLAI conference on the intersection of Blockchains and Machine Learning, two areas I have been working with a fair bit in the last couple of years. My hope with the talk was to get more AI practitioners interested in the Blockchain space, which I feel perfectly complements the AI space by providing a layer of trust on black-box AI systems.
PowerPoint presentations do not make good blog posts, so I’ll elaborate on some of the ideas in the future, but here is my deck from the presentation for now.
Last week was Labweek at Comcast, one of the best traditions at the company, where developers and designers can take some time to pursue ideas, learn new technologies or just work with folks they don’t usually get to work with. Though every week can be Labweek in my world working at Comcast Labs, I still enjoy working on something completely different from the projects-on-record for a week. You can read about a few of my previous Labweek prototypes here.
For this Labweek, I took the opportunity to build something with Jetpack Compose, Google’s new UI toolkit for building Android apps. In the last couple of years I have worked quite a bit with ReactJS, SwiftUI and a LOT with Flutter (read my thoughts on Flutter here), and it was interesting to see how all of them were starting to adopt the same patterns. From the sessions at IO and conversations at the Philadelphia Google Developers’ Group that I help run, I knew Jetpack Compose was also headed in the same direction, but it took building an app to realize how close that was.
Compose vs. SwiftUI
Compose feels the closest to SwiftUI, borrowing not only the idea of lightweight layout containers (Rows, Columns, etc.) but also the use of Modifiers to manipulate the View …sorry…the Composable. Even the structure of a typical Compose file, with one function that defines your composable and another, annotated with a preview annotation, that lets you preview your layout, is identical to the SwiftUI edit/preview experience. The similarity even extends to the documentation: check out the SwiftUI tutorial and the Compose tutorial page layouts, with text on the left that scrolls with the code on the right. Heck, even my bugs are similar in both frameworks 😉
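To make the comparison concrete, here is roughly what that typical file structure looks like in Compose (a minimal sketch with made-up names, not code from the app I built):

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp

// A composable function defines the UI, with Modifiers chained onto it
// much like SwiftUI view modifiers.
@Composable
fun Greeting(name: String) {
    Column(modifier = Modifier.padding(16.dp)) {
        Text(text = "Hello, $name!")
    }
}

// A second function annotated with @Preview drives the
// edit/preview experience in Android Studio.
@Preview
@Composable
fun GreetingPreview() {
    Greeting(name = "World")
}
```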
Compose vs. Flutter
While Flutter is similar to Compose, I do prefer Compose’s modifier approach to Flutter’s approach of composing behavior using a hierarchy of widgets. On the other hand, Flutter’s hot reload on a device/simulator beats the preview experience in Compose, especially since previews cannot fetch live data from the cloud and my design was very heavy on remote images.
I also find creating animations in Flutter a bit cumbersome, having to create AnimationControllers, TickerProviderMixins, Curves and callbacks. Jetpack Compose’s animation system has its share of complexity too, but I got a lot of mileage out of just using AnimatedVisibility with enter and exit animations. SwiftUI, with its `withAnimation` blocks, is the clear winner here though.
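For context, the kind of AnimatedVisibility usage I got mileage out of looks something like this (a sketch with made-up content, not my app’s code; depending on your Compose version the experimental-API opt-in may or may not be required):

```kotlin
import androidx.compose.animation.AnimatedVisibility
import androidx.compose.animation.ExperimentalAnimationApi
import androidx.compose.animation.fadeIn
import androidx.compose.animation.fadeOut
import androidx.compose.animation.slideInVertically
import androidx.compose.animation.slideOutVertically
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

@OptIn(ExperimentalAnimationApi::class)
@Composable
fun DetailsPanel(visible: Boolean) {
    // Animates the content in and out with the given transitions --
    // no AnimationControllers, TickerProviderMixins or callbacks.
    AnimatedVisibility(
        visible = visible,
        enter = fadeIn() + slideInVertically(),
        exit = fadeOut() + slideOutVertically()
    ) {
        Text("Some detail content")
    }
}
```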
Random Lessons Learned
There were a couple of gotchas as I was building the app. Some functionality that I would consider core, like fetching remote images or making your app aware of things like WindowInsets, is only available as part of an external (Google-authored) library called Accompanist. I had a bit of a hiccup because my version of that library wasn’t compatible with the version of Jetpack Compose in my Android IDE. I do hope these capabilities get added to Jetpack Compose core instead of living in an external dependency that I’d have to track (I do prefer the batteries-included approach). Also, if you plan to use the official samples as a starting point, note that some (or at least the one we were using) have a compiler instruction to fail on all warnings (that took like 2 hours to figure out).
A week of intense coding in Compose gave me a lot of appreciation for this new direction for Android development. I was pleasantly surprised by how productive I felt working on a completely custom Android UI. There are still a lot of features of Compose I haven’t tried out yet, but I am definitely looking forward to learning more. Compose is not officially out yet (the latest version is release candidate 1, which came out a few days ago), but I am sure it will enable some truly amazing UI experiences on Android in the next few months!
NFTs have been in the news a lot lately, jolted into the mainstream spotlight by Beeple’s $69M sale of his NFT art piece titled The First 5000 Days. As with most things crypto, there are as many passionate believers as there are skeptics of this new model for digital collectibles. Regardless, on a purely technical level, I have been fascinated by digital collectibles for a while (ever since CryptoKitties broke the Ethereum Blockchain) and have been trying to learn more about the technical underpinnings for the last couple of weeks.
A16Z, the venture capital firm, has been a great source of information on the whole crypto space for a while and organized an online summit this afternoon, bringing in some big names in the field to speak about the state of the NFT space. Below are some of my takeaways from the event.
On NFTs in general
Dan Boneh from Stanford University and Chris Dixon from A16Z kicked off the event with a fireside discussion on the state of the NFT space in general. Some interesting points of discussion included:
How one of the big reasons Decentralized Finance (DeFi) exploded was the composable nature of Blockchain finance primitives. NFTs could offer similar capabilities; for example, you could wrap non-fungible ERC-721 tokens in fungible ERC-20 wrappers.
How we are already starting to see NFTs used as collateral, just as other assets tend to be.
Whether Quantum Computing could destroy Blockchains and therefore the value of NFTs (nope: we have quantum-resistant algorithms which we can move to as QC attacks start becoming more probable).
On NFT Marketplaces
It was fascinating to hear Kayvon Tehranian from Foundation and Devin Finzer from OpenSea talk about their NFT marketplaces. I missed a big part of the latter’s talk but I have been really curious about how Foundation works and it was great to hear a bit about that.
Every action on Foundation (listings, bids, etc.) is recorded on the Blockchain, and the asset itself is stored in IPFS. The system only works with non-custodial wallets (sorry, Coinbase).
While technically it is possible for someone to upload an asset that they don’t own, Foundation mitigates this with a pretty exclusive invite process, with only current artists being able to invite new artists (which does feel a bit centralized IMO).
Since everything is managed in a decentralized way, it is theoretically possible to buy an asset from Foundation and sell it on a different marketplace.
Dieter Shirley and the Flow Blockchain
This may have been my favorite session, since I am already interested in Flow. Dieter Shirley is the CTO of Dapper Labs but really got famous when he and his team built CryptoKitties while still working at Axiom Zen. Flow is a new blockchain designed for applications, not financial instruments, and is best known for running the NBA Top Shot NFT marketplace.
Flow’s architecture is driven by 3 goals:
Enabling building tangible products
Simple on-boarding for non-crypto-nerds
Higher capacity to enable web-scale products
He also talked about the decision to build their own chain instead of using Ethereum (“wasn’t easy”), though he does feel that interop among different chains is going to happen anyway.
His one regret with the ERC-721 specification that he drafted: he wishes they hadn’t punted on the metadata specification for ERC-721 tokens (“it was a classic bikeshedding moment and there were too many people with too many opinions”).
As for the challenges he sees with NFTs in general, he feels the legitimacy of the NFT, balancing scarcity and abundance, and interacting with the traditional financial system will remain the big ones for the near future.
DAOs and NFTs
The last talk of the evening was by digital artist pplpleasr who talked a little bit about her process for NFTs but then mostly talked about the birth of the PleasrDAO, a Decentralized Autonomous Organization that was formed organically to acquire her Uniswap NFT and now exists as a community that buys other NFTs and leverages their assets to power socially conscious projects on the Blockchain. Her talk ended with her revealing her new NFT titled “Apes Together Strong”, with all proceeds going towards charities supporting autism advocacy.
I love the idea of DAOs and the talk, as well as the sentiment on her slide below, was the perfect talk to end on
I recently finished reading Ken Kocienda’s “Creative Selection”, a book about his time on the original iPhone engineering team. Most of the book is about his work building the soft keyboard for the iPhone and coming up with systems to allow users to productively type on a glass surface without any tactile feedback (with the specter of the awful handwriting recognition software that killed the Newton in the background).
So much of this book spoke so personally to me. Most of my career has been in very early stage projects where we were still figuring out what the products and technologies were trying to be. As part of Comcast Labs, demos and prototypes still remain the bread-and-butter of most of my daily work. And while I have met a few other folks who, at least at some point in their career, have had to work on these, I haven’t found many books that talk about the process of building prototypes and demo-ware.
Prototypes vs Demo-ware
One of the things I have learned over the years is that prototypes and demo-ware can be very different things. The primary goal of a prototype is to learn something (Houde and Hill’s excellent paper from 1997 breaks that down into Implementation, Role and Look-And-Feel). Demo-ware is more about getting people excited about the possibilities.
That said, a great demo can sink your product if it sets unrealistic expectations. Among some of my prototyper friends who now prototype at different companies, we still give each other “Clinkle bucks” for a good demo. For those who may not remember it, Clinkle was a once-hot-in-Silicon-Valley startup that raised $30M on the back of a great demo. The history of Clinkle is a fascinating read and highlights how a great demo made with no regard for feasibility cannot save your company.
A few thoughts on demos
Here are some of my personal notes on making good demos:
Get to the point: You only have a few minutes for a good demo, so get to the interesting part fast. Do not waste your time implementing general software constructs like real login systems, etc. Fake as much as you can.
Have the right team: Quite a number of devs I have met consider demos a waste of time. Make sure your team is passionate about making great demos.
Remember theater: Lean into a bit of theater with good design and animations. Choreography is important.
Between a global pandemic and a shocking display of the ugliest parts of human characteristics, 2020 will go down as one of the worst years to be around. Compared to some of the other heartbreaking stories I keep reading, my family and I were lucky to only be inconvenienced and not devastated by everything that happened in 2020.
The tl;dr version of this post is: ‘I got MARRIED 😱 … and, yeah, I wrote some code’
After way too long, Dana and I finally got married. The pandemic ruined our more elaborate plans, but we had drawn out the engagement for too long already and all our travel plans were on hold for a while, so having a small ceremony in my sister-in-law’s backyard seemed like a good idea. We live-streamed most of it on a (non-interactive) Zoom and an (interactive) Google Meet virtual meeting, so we did get a big audience for the event. I wish my parents had been able to join us physically, but we’ll do some kind of IRL party when we go to India, whenever the world feels safer.
One of the interesting parts about working at Comcast Labs is that you get to work on a number of projects using very different technologies. In previous years it has been a healthy mix of VR/Blockchains/Chatbots/Machine Learning etc. In terms of domain, this year was a lot more focused. Most of my explorations were in the space of Customer Experience Bots and efforts to improve the Xfinity Assistant, coming at it from a lens of 3-5 years out. Over the year I built a Knowledge Graph editor using Grakn, explored the use of Structured Data, especially Microdata, within chatbots and worked on adding more intelligence to the edge (i.e. Mobile Apps) to power the diagnostic flows.
I also enjoyed working on some personal mobile apps using Flutter, Ruby on Rails and Firebase. I am blown away by the capabilities of Firebase and hope to share some learnings on that on this blog soon.
Here is a very unscientific quantitative breakdown of what I spent my time on this year:
The one thing that is conspicuously missing here is Blockchains. While I still help run the Comcast Blockchain and Decentralized Technologies Guild, I didn’t get to spend any actual coding time on it in 2020. Here is hoping for 2021 🤞
The Google Developers’ Group that I help run went virtual this year, like every other Meetup (I wrote a bit about that earlier). I miss hanging out in person with the friends I have made there but thanks to Google Meet and Slack, we are still alive and kicking.
The one change this year was a lot more interactions with the Google Cloud teams as well as GDG-Cloud Philly. With my own interest in Cloud Services growing, the joint sessions with the other two groups were definitely super interesting.
This hasn’t been the greatest of years in terms of reading, but that is a good thing, since my focus was more on producing and, given the time limitations, something had to give.
2021 is starting off on some positive notes, so I hope it’s a better year in general. Have a great 2021 👍
I recently represented the Philadelphia Google Developers’ Group, a group I have been helping manage for close to 9 years, at Technical.ly’s Super Meetup, an event that brought together local technology and entrepreneur communities for an evening of social festivities. And while Zoom events don’t have the same vibe as the in-person gatherings the Super Meetup has traditionally been, the Technical.ly crew did a good job bringing people together for an evening of community talks and nerd-trivia 🙂
As part of the event, they had the group leads talk about how the groups have fared during the pandemic. You can read all the responses here and I am pasting mine here as well
The question gave me a little time to reflect on our setup, and generally we are making the best we can of a really weird situation. I am really looking forward to a time when we’ll be able to meet face to face again, but that doesn’t seem like it’s going to be anytime soon. Still, the virtual nature of all meetups has given us more opportunities to collaborate beyond our local neighborhood.
Probably the biggest change has been the activity on our Slack account which we had only recently moved to, as we moved away from the larger PhillyDev Slack community. That decision seems to have been the right one and I hope more folks from our Meetup.com page join us there.
I recently attended a virtual event hosted by Promptworks which was really interesting as well. I hadn’t realized till that event that Zoom offered breakout rooms, which is great. I might try that for our next event. Anything to lower the speaker-to-attendee ratio, which makes the conversations feel more intimate.
Catering seems to be becoming a part of some events as well. The Google events we helped with had catered lunches through Grubhub, which was great (who knew Grubhub had a corporate events group 🤷‍♂️), though the Promptworks team won that round with some amazing food and wine delivered to the attendees. It might be too expensive for monthly events but could be an option for special occasions.
Video collaboration tools still feel poorly designed for professional communities though. Most are designed around a talking-head-plus-shared-screen experience and aren’t nearly as collaborative or inclusive as in-person events. There is an opportunity for a product here, though it would have to come with a very different business model, since most communities don’t charge their users to attend their events and so can’t afford to pay for individual seats.
Maybe it’s something a company like LinkedIn could be interested in offering to professional communities?
Yesterday I attended the L3 AI online conference on digital assistants organized by RASA. I am still working on the notes from that conference that I’ll share here at some point but I was really pleasantly surprised by the format of the conference. While the current pandemic has forced a lot of conferences to go online, most have just become Zoom calls and honestly are exhausting for more than an hour. I actually attended the conference for the whole day yesterday and it was the best online conference format I have seen so far.
The conference was powered by Accelevents, so good job folks, though I am sure they have competition in that space. I have also heard good things about Run the World (actually, I haven’t. The only thing I have heard about them is their investment from a16z 😁. But the features listed on their site look interesting).
So here are some thoughts on my experience with L3
Both Accelevents and Run the World allow users to create a profile ahead of time. This lets users reach out to others who may share the same interests during the event or when they are algorithmically paired (see below). RTW lets you create video profiles as well, which is cool
Connecting with others is probably the most important part of a conference (most session videos end up online anyway). The Zoom experience is to just have as many videos of people as possible. That doesn’t really work, since only one person can talk at a time and a number of people are either multi-tasking or otherwise hesitant to share their video.
The L3 conference page had a link to socialize which would randomly pair you with another attendee. I didn’t use it, but mostly because there wasn’t much time between sessions during the day. Instead of one-on-ones, I would have liked being matched into small groups, which would have felt a little less intense.
Prerecorded Scheduled Sessions
Most of the talks were just prerecorded sessions with the speaker and other attendees discussing the talks in a chat window next to the video player. The sessions unlocked at different times, so it did feel a bit like a conference track.
The advantages of the prerecordings were:
You could pause and rewind the sessions right there if you missed something
The video-audio quality of the sessions was good (none of the “can you hear me now” moments).
Some presenters had even done some post-production work on their videos which was nice
The event page included a video page and a side panel that included tabs for chat, polls, attendees and questions. As with a lot of tabbed interfaces, the out-of-sight / out-of-mind thing happened and I never looked at the non-default (chat) tabs.
Unlike video, chat allows many people to talk to each other at the same time, which I think is better. So I was able to see some interesting discussions between the attendees on various topics.
An interesting aspect of the conference was a virtual expo tab where every company that was sharing their products could have people available for a Zoom video chat (yeah, they were using Zoom which I didn’t know could be embedded in a webpage). That was neat.
I really got a lot out of this conference and enjoyed the format. With a lot of conversations going on right now on how virtual conferences could be more like real ones, I think we should also think about how they could be better than the real thing. For one thing, your audience can be a lot bigger, more diverse and inclusive.
There is also a lot of innovation going on right now in the chat experience in general (emojis, virtual gifts, etc) that could make text chat more lively as well.
There needs to be a new middle ground between video and text chats (maybe digital avatars?). I like looking at people’s faces but I also understand the multi-tasking thing when in front of a laptop. VR chat rooms get across a lot of feeling of presence by just using eyes for example.
I enjoyed the timed sessions, though I struggled to attend any of them totally in sync with their start times as there was a lot of stuff happening at home (work emails, etc).
I am really curious where the virtual conference ideas go from here. At the Philly GDG which I help run, we have transitioned our events to Zoom events and were planning to do the same thing for future “conferences” (like DevFest etc), but this has given me a lot to think about.
If you have other ideas about the opportunities here, drop in a comment below 🙂
Today I deployed a second site to a Firebase project. I have deployed sites individually on different Firebase projects but hadn’t realized that a single project could support multiple sites. This is especially useful if the various sites share the same assets (think internationalized versions of sites, etc.).
The documentation on multi-site support is actually pretty good. In my case, my “launchpage” project was completely different from the other site on the project but it does give me the opportunity to bring the two together later. It basically came down to modifying the firebase.json file to look like this:
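The general shape (with placeholder target names and public directories, not the ones from my actual project) is:

```json
{
  "hosting": [
    {
      "target": "launchpage",
      "public": "public",
      "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
    },
    {
      "target": "main",
      "public": "public",
      "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
    }
  ]
}
```

Each target then gets mapped to an actual site in your project with `firebase target:apply hosting <target-name> <site-name>` before you deploy.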
This file tells the Firebase tools to ignore certain files and deploy others to hosting.
You can test your app locally by calling `firebase serve` and deploy to production by calling `firebase deploy` while in the project root.
The only hiccup I ran into was setting up DNS correctly. While Firebase tries to make it easy by giving you an IP address to point your domain to, Namecheap doesn’t work if you specify your full domain name in the hosting panel and requires you to use @ to refer to the domain you are configuring. Subdomains similarly cannot be FQDNs and need to be just the name of the subdomain you are configuring (www instead of www.mysite.com, for example).
Note that Firebase will occasionally ask you to re-verify your domain. See the conditions on this link or the screenshot below.
The same rule applies for re-verification: use `@` when adding the custom TXT record needed to verify your domain.
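Concretely, the Namecheap host records table ends up looking something like this (the values are placeholders — use whatever Firebase shows you in its hosting setup, not anything copied from here):

```
Type        Host   Value
A Record    @      <IP address from the Firebase console>
A Record    www    <IP address from the Firebase console>
TXT Record  @      <verification string from the Firebase console>
```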
Considering how easy this was, this might be the way I host all my sites in the future 🙂
Most of the examples I have seen for PaletteGenerator use an in-app image that the system can immediately load, but using remote images is more complicated, since the library has to wait for enough of the image to load before it can read the color data. This gets further complicated if you need to run an animation while loading the image.
After trying a number of iterations, the best approach seems to be using Flutter’s precacheImage method before kicking off the animation.
🙃 Can’t believe 2019 is over. Fun was had, life was lived. So let’s talk about it
Most of my work in 2019 was split between conversational technologies (bots and such), Flutter, some Machine Learning and finally some Blockchain stuff. So here is a quick recap of the year:
I spent a lot of time this year evaluating various technologies in the context of virtual conversational assistants. I still remain a passionate believer in the chatbot space, even with the fervor around it dying out after the whole “Bots are the new apps” idea didn’t really happen.
As with a lot of domains of technology right now (VR, Blockchains, etc), the dying out of the initial mania is allowing some really interesting work to proceed and evolve the space without a harsh spotlight and investors expecting 10x returns in 2 years.
The problems in that space (IMHO) right now really come down to these facts:
Writing bot dialogue is hard and manually authored conversation trees can’t scale
Tools for authoring and previewing bot dialogues are poor
AI-based systems that can hold a true dynamic conversation aren’t really there yet and
There is very little exploration of the user-experience beyond text and animated gifs.
I still really believe that we will need virtual agents as proxies for ourselves and for the services we interact with as the digital world becomes more complex. It’ll be interesting to see if this space evolves or becomes the next IVR system that no one loves.
Speaking of user-experience, I played a lot with Flutter this year and have already written about it in a previous post. There are 3 reasons I like Flutter:
It’s a cross-platform tool that gives me a lot of control over the graphics (unlike, say, React-Native)
It’s pushing a culture of advanced UIs that are simple to build, something I felt suffered when Flash died
The fact that Google commissioned GSkinner.com to create some amazing UIs that they gave away the code for others to use in their apps just underscores the kind of experiences they wish people would create with it. Here’s hoping Flutter gets more adopted in 2020
I finally got to work on some Machine Learning based projects this year which was interesting. While I wouldn’t call myself competent in that domain yet, I feel I could get there in 2020 (hopefully). I am also very interested in the emergence of higher-level tools that make working with ML even easier, like Uber’s Ludwig and tools like RunwayML.
One particular area of ML that I got into this year was Affective Computing. I am fascinated by the idea of empathetic systems (whether they use AI or not) and exploring the area of Affective Computing gave me a lot to think about. Some of that I even shared at a couple of conferences this year, including the PHLAI conference.
I wish I had done more with Blockchains this year, but my efforts in that space this year were mostly limited to managing the Comcast Blockchain Guild, attending the local Ethereum meetup and the Philly BlockchainTech meetup and trying to keep up with the torrent of news coming out of the dev community. My personal goal is to do a little more hands-on coding in that space again in 2020 🤞
I attended a few conferences this year which were very different from one another
Google IO was really inspirational, with a lot of ideas to come back with. It is amazing to see how much Google has embraced AI and the kinds of experiences AI has enabled. I actually kept the Android sessions I attended this year to a minimum as I was getting a lot more interested in other spaces like AI, Flutter and Firebase. I was also very pleasantly surprised by the Chrome experiences on display at IO. It’s amazing to see how far the web platform has come.
My favorite tech conference of the year had to be the Eyeo Festival. The conference explores the space at the intersection of art and technology and had some truly inspiring sessions with amazing speakers. You can check out my Twitter thread on some of the sessions I attended, but I’d strongly encourage you to check out as many of the sessions as you can from Eyeo 2019 on Vimeo.
I spoke at PHLAI on Affective AI. I had a lot of imposter syndrome going on, given that I was speaking at an AI/ML conference with some very high-profile speakers.
I was on a panel on Smart Contracts at Coinvention 2019 moderated by the amazing Thomas Jay Rush (of Quickblocks.io).
Attended the Blockchain and Other Networks conference by TTI Vanguard which was really interesting, especially with a format where every attendee could interrupt the speaker at any time if they had a question. Someone later recognized me there as “oh yeah, you are the one with all the questions” 😜