On the panel for Philly Blockchain Breakfast

Photo of panel at the Blockchain breakfast

Last week, I was on the panel at the Web3 / Blockchain breakfast event organized by Comcast NBCUniversal LIFT Labs and Philly Startup Leaders.

It was great to hear a lot of probing questions from both the audience and the moderator. I have been exploring this space since 2012 and it still feels like the early days, with so much left to figure out. Every year brings us closer to the kinds of ideas I hoped to see sprout from this space, but we are not there yet, and the field feels ripe with opportunity.

I really enjoyed listening to the rest of the panel offer so much insight into the current state of the Web3 ecosystem from their own individual points of view. Thanks, Mark Wheeler, for moderating the panel expertly, as well as fellow panelists Karla Ballard, Kelly Gallagher, Mike Ghen and David Barrick, and of course everyone who attended.

2021 Retrospective


Writing my personal yearly review on this blog is always a good way for me to reflect on my wins and challenges of the year. But this year’s post is harder to write just because the whole year has been such a blur. Being mostly locked up at home as the world lives through the second year of a global pandemic has denied me the usual anchors of travel, conferences, and other social activities that I write my narrative around. And while I am thankful that my own immediate family has not been affected too badly, we did have a few deaths in my extended family in India, which was sobering, to say the least. I am hopeful that this ends in 2022 or ’23, and that everything we have learned through this crisis, from new ways to work to the advances in virology and vaccination technology, can be used for better ends in the future 🤞

I seem to start this writeup every year by talking about how educational the previous year has been working on new technologies at Comcast Labs. This year was no exception, though the work was more focused on a domain I had never worked in before: AI-generated speech. While deepfake technologies have mostly been in the news for the wrong reasons, there is a huge opportunity for useful applications of synthetic video and audio as well. Specifically, I explored the current state of the art in synthetic speech, the differences in the offerings of the various cloud providers and startups in that space, and its applicability to some of the domains I am focused on. We tried a number of open-source text-to-speech libraries like Mozilla TTS and ESPnet and used them to power a “voice-first” device-setup experience. This definitely seems to be the year of voice-first platforms, with the explosion of voice platforms like Clubhouse and Twitter Spaces as well as the continued proliferation of voice hardware like Amazon Echo and Google Home. Our prototype was not only technically educational for me but also highlighted how early the current state of the art in voice UI/UX still is.

The customer experiences I worked on this year were mostly iOS-based, which gave me a good opportunity to really get into SwiftUI. After about five months of working with it, I am feeling pretty confident: I like it, though there are some language features in Swift that still make me go cross-eyed. Another part of the iOS stack I had never touched before was the on-device machine learning stack, Core ML. I built a fairly simple classifier in Core ML to handle customer responses and it worked pretty well, though I haven’t had the chance to compare it to something like TensorFlow Lite. I also spent a little bit of time with Jetpack Compose (I wrote about that experience here). Between SwiftUI, Jetpack Compose, React, and Flutter, I feel my front-end bingo card is pretty much all checked 🙂

Speaking of Flutter, I shipped my first Flutter-based app to the Google and Apple app stores. The app, Jax, is a JavaScript learning app powered by Flutter on the frontend and a Rails/Firebase combo on the backend. Side apps can take a long time and this one certainly did, having started as a native Android app, then a React Native app, and finally a Flutter app. I learned a ton about Rails and Firebase through that journey and am really starting to dig the Flutter + Firebase stack for projects.

I continue to run the Comcast Blockchain Guild as well as the Philly Google Developers Group. This was, of course, the year of NFTs, whether you love them or not, and I enjoyed doing some technical explorations in that space. I also got pulled into some very interesting conversations around how the city of Philadelphia could leverage that technology. The fact that the Phila.gov site now has a page up at https://phila.gov/blockchain is a great sign.

As with a lot of technology meetup groups, the pandemic severely shrank the group that met monthly for the Google Developers’ Group meeting at the Comcast Center. One thing I did start late this year was using Twitter Spaces to host some of our meetings, and that has gone over pretty well. We will definitely continue that in 2022 and I am excited about the possibilities of that format.

Otherwise, life has been good. I enjoyed reading a fair number of books this year, including a bunch of graphic novels. I am in the middle of three books right now and have half a chance of finishing at least one of them before the year ends in about 24 hours 👋

2021 completed list

Some gotchas when using Firebase Dynamic Links

For the last couple of weeks I have been trying to add Firebase Dynamic Links to an app, and it took me way longer than I had originally planned. In this post, I wanted to share some of the lessons learned.

First, note that there are two kinds of links that you can use:

  1. Dynamic links that you generate in the Dynamic Links section of your Firebase project. These are the same for every user and are great for linking to common sections of your app. They are quick to set up and are probably a good idea to try before generating the second kind of deep link.
  2. Dynamic links generated by a user for another user in a client application. These are custom links only relevant to specific users, so they cannot be generated via the project dashboard (see the sketch below).
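
For that second kind, you build the link in client code. Here is a minimal Kotlin sketch using the Firebase Dynamic Links KTX builder; the domain prefix, deep-link URL, and package/bundle IDs are placeholder values for illustration, not the ones from my app:

import android.net.Uri
import com.google.firebase.dynamiclinks.ktx.androidParameters
import com.google.firebase.dynamiclinks.ktx.dynamicLinks
import com.google.firebase.dynamiclinks.ktx.iosParameters
import com.google.firebase.dynamiclinks.ktx.shortLinkAsync
import com.google.firebase.ktx.Firebase

fun createInviteLink(inviteId: String) {
    // Build a short *.page.link URL on the client for this specific invite.
    // The domainUriPrefix, deep link, and app IDs below are placeholders.
    Firebase.dynamicLinks.shortLinkAsync {
        link = Uri.parse("https://example.com/invites/$inviteId")
        domainUriPrefix = "https://example.page.link"
        androidParameters("com.example.myapp") { }
        iosParameters("com.example.myapp.ios") { }
    }.addOnSuccessListener { result ->
        // Hand result.shortLink to the user to share (e.g. via the share sheet).
        println("Generated dynamic link: ${result.shortLink}")
    }.addOnFailureListener { e ->
        println("Failed to generate dynamic link: $e")
    }
}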

In my case, I was trying to get #2 working and it proved to be a real bear.

The problem is that when generating a unique URL, you are essentially doing a couple of handoffs. The first link is managed completely by Firebase (usually with a *.page.link URL). This link checks whether the app is installed on the device it is launched on and sends the user to the app-install page if not. If the app is installed, the link redirects to the second link, which is the one you are actually trying to get to. The second link is often a web address on your own domain, which needs to be correctly configured for deep linking, or else the link will just open that web page in the browser, which is probably not what you want.
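
On the receiving end, once both handoffs succeed and your app opens, you ask the SDK for the pending deep link and route accordingly. A minimal sketch (again, the URL shape and routing are hypothetical):

import android.content.Intent
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.firebase.dynamiclinks.ktx.dynamicLinks
import com.google.firebase.ktx.Firebase

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        handleDynamicLink(intent)
    }

    private fun handleDynamicLink(intent: Intent) {
        // Resolve the *.page.link hop and get back the second link
        // (the address on your own domain) that the user actually wanted.
        Firebase.dynamicLinks.getDynamicLink(intent)
            .addOnSuccessListener { pendingLinkData ->
                val deepLink = pendingLinkData?.link ?: return@addOnSuccessListener
                // e.g. https://example.com/invites/abc123 -> show that invite
                println("Opened via invite: ${deepLink.lastPathSegment}")
            }
            .addOnFailureListener { e ->
                println("No dynamic link found: $e")
            }
    }
}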

Gotcha 1: [Android] Make sure you have the SHA256 signature saved in your Firebase project

For the longest time, I didn’t realize that I had only the SHA1 key saved in my project. Deep links don’t work without SHA256 values for your project. Thanks to this answer from Stack Overflow.

Gotcha 2: [Android] Make sure your assetlinks.json file is correctly deployed

It took me a while to get the assetlinks.json file correctly deployed (mostly my fault). I really should have read the documentation on verified site associations on Android more carefully. You can verify your assetlinks setup via this URL (just replace the domain and, if applicable, port values):

https://digitalassetlinks.googleapis.com/v1/statements:list?source.web.site=https://domain1:port&relation=delegate_permission/common.handle_all_urls

Also remember: if you are using the Google Play Store to sign and release your app, your assetlinks should refer to their key’s SHA256 signature. Conveniently, you can copy the assetlinks file from the Play Console itself, under the Setup > App Integrity section.

Gotcha 3: [Android] Make sure the android:autoVerify attribute in your intent-filter is set to true

Not sure how I missed this, but it took a long time to find:

<intent-filter android:autoVerify="true">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data
        android:scheme="https"
        android:host="my.app.com" />
</intent-filter>

iOS:

Surprisingly, as frustrating as getting the Android version to work was, the iOS integration was much simpler. Just following this video helped a lot!

Hope some of this info helps you if you are using Dynamic Links in your app.

Hosting Meetups on Twitter Spaces


Like a lot of technical meetup groups, the Philadelphia Google Developer Group that I help manage has been holding its monthly meetings virtually for the last year and a half, using Google Meet and (more recently) Bevy. I am really grateful to the community, and especially our regulars, who are open to attending yet another video meeting after a day full of them. That said, our attendance is definitely lower than when we used to do in-person meetups. Here are my top three theories why:

  • Zoom fatigue: No secret here.
  • Marketing: Like most developers, we suck at marketing, and it is likely that a lot of people don’t know about our event.
  • Focus: GDG events span a diverse array of technologies, from Android (where we started) to Firebase, Flutter, Google Cloud, and web technologies. It’s hard to build a cohesive community around a set as diverse as this. One option might be to have a series of sub-groups under GDG: GDG-Cloud already exists, but there could also be GDG-Flutter, GDG-Android, etc. We’ll see.

2021 has seen an explosion of new voice-only social platforms, starting with Clubhouse and including the likes of Twitter Spaces, Spotify Greenroom, etc. I have to say I have enjoyed participating in a few of these sessions – there is something less stressful about not having to worry about being on camera, or about feeling antisocial for keeping your camera off during an event.

So, in the spirit of exploration, we tried our first GDG meeting as a Twitter Spaces event last month, and I think it went well, though I did learn a few things I hadn’t really thought about before. Here are some of those learnings:

Not everyone has a Twitter account

I pretty much live on Twitter, so I didn’t even realize that not all of our members had active Twitter accounts, though in the end those who didn’t had inactive accounts they could resuscitate. I wish Spaces had a guest mode, but I guess that would defeat the whole reason Twitter built this feature in the first place.

It is not an online meeting

Unlike video-conferencing systems like Zoom or Meet, which allow 100 or more simultaneous speakers, Spaces only allows 10. Spaces isn’t so much about collaboration as it is a “narrowcast”: you can have a panel of speakers, but most people are meant to be listening. You can swap speakers in and out as people wish to speak, but it’s a very different model from a fully democratic online meeting.

Tooling is pretty good

The management features are pretty good, and I was able to mute, boot, and swap speakers as needed, but it definitely took some effort on my part.

Sharing information is hard

It is also hard to share links or show something off on your screen. One trick someone mentioned towards the end of the event was to share a Twitter thread with the audience that they can add to when they want to share something. It’s a bit of a hack, but it works. This is also how I have heard Clubhouse panels share information: on their individual Twitter profiles.

TLDR:

Overall, I really enjoyed our first Twitter Spaces event (what is the verb for this?), though it definitely had a bit of a learning curve. And because it was a very public event, we did have a few folks join who had never attended a GDG event before, which was my primary motivation in the first place. We are planning our next Spaces event now, so if you are interested, follow me or the GDG Philadelphia Twitter account for the announcement.

Metaverse

Mark Zuckerberg's avatar presenting the metaverse at an event Thursday.

The first time I really understood Mark Zuckerberg’s ambition was when he announced Facebook login. I had been developing prototypes on the Facebook platform for a bit but the idea that Facebook would scale its infrastructure to support login buttons and the friend graph everywhere on the internet blew my mind.

Whatever you may want to accuse Mark Zuckerberg of, a lack of ambition cannot be one of those things. And he pairs that with an uncanny ability to be right in the long term while being ridiculed by folks steeped in the conventional wisdom of the time (Instagram for a billion? WhatsApp for $19B?)

So I am mulling over Facebook’s pivot to the Metaverse pretty carefully. While I am excited about the possibility of the Metaverse, and, having worked with VR for a couple of years, find it a lot of fun, I am not convinced by Facebook’s current idea of the Metaverse in general or Horizon in particular.

Metaverse, metaverse and Horizon

So there are three ideas that often get intertwined in my head:

  • The metaverse (lowercase m) – A realm that merges the digital and the physical, as originally defined by Neal Stephenson and imagined by a lot of technology pioneers since.
  • The Metaverse – As imagined by Meta: a constellation of VR-specific apps that people will jump between. I guess we can call it the Meta-Metaverse, which is not confusing at all.
  • Horizon – A VR social app. Horizon also feels like a metaverse … that lives inside the larger Metaverse? Kinda like Facebook on the internet, or what Facebook would be if it got to decide which pages were allowed on the real internet.

Meta-Metaverse

My primary gripe with the larger Meta-Metaverse is the app model. The iPhone launched with a lot of content: the entire internet. One of the biggest joys of owning the original iPhone was the ability to use the real web, not a neutered mobile-centric version. Over time, native apps supplanted web experiences by delivering more dedicated versions of those experiences, but that came afterwards.


The problem with going app-first is that building VR apps is hard given the current SDKs and workflows. And while WebVR kinda helps, there are very few tools that let novices design good VR experiences. This makes VR apps expensive to build, and since the only apps that currently make money in VR are games, VR starts feeling more like a game console than a smartphone.

Meta’s primary goal needs to be to figure out a way to bring the web into VR.

Of course, the challenge is that the web isn’t really designed for VR, just as the 2006 internet wasn’t really designed for mobile users. But you gotta pull a page out of Apple’s book and create UI elements around those limitations, just as the iPhone replaced the combo box with the iconic tumbler wheel.

iOS’s iPhone-optimized native controls for the web

Oculus should supply a whole range of controls that replace web elements and offer opportunities to interact with content in a VR-specific way, for example:

  • Create VR-specific form controls that replace on-page controls like combo boxes, text inputs, date fields, etc. Make them not just functional but fun to use.
  • Convert on-page text to speech.
  • Open image carousels automatically.
  • Make embedded videos that you can control (i.e. not YouTube embeds … unless you can) open in the virtual theater experience. Fill the virtual theater with others who are watching the video at the same time.
  • Allow web developers to control the above aspects via HTML meta tags.

etc etc…

Basically, do everything possible to break outside the 2D window that current VR browsers offer. I think there are a lot of experiments that could be done there.

Horizon

The primary challenge for Horizon, I feel, is its synchronous nature (at least as shown in the demos shared so far).

The Facebook social experiences that became huge were asynchronous. Words with Friends, for example, was an async version of Scrabble. FarmVille didn’t require all your friends to be online at the same time. Most of my social experiences today are async.

By contrast, Horizon experiences seem to require everyone to be online and dedicated to the experience (i.e., not multitasking) at the same time. Success here will depend on some really killer experiences and/or a big leap in VR multitasking capabilities, both technically (i.e., being able to run many apps at the same time) and experientially.

We don’t know much about Horizon yet, so I’ll end with my feeling that it is very reminiscent of the old web portals, where you could interact with a few dedicated experiences. In the long run, that isn’t what is going to win.

… guess we’ll see…

I think it’s interesting to realize that I am now much older than when I first got on Facebook, and I might be lacking the less-critical optimism that you need in the early days of any technology. But, like most geeks I guess, I have always enjoyed the idea of a metaverse. I just think it needs to be a very different thing than what is currently being imagined.

Or I could be wrong and will be eating these words in 5 years or so ⏰

Slides from my PHLAI Talk

Last week I was invited to give a talk at the PHLAI conference on the intersection of blockchains and machine learning, two areas I have been working in a fair bit over the last couple of years. My hope with the talk was to get more AI practitioners interested in the blockchain space, which I feel perfectly complements the AI space by providing a layer of trust around black-box AI systems.

PowerPoint presentations do not make good blog posts, so I’ll elaborate on some of the ideas in the future, but here is my deck from the presentation for now:

https://speakerdeck.com/arpit/living-at-the-intersection-of-blockchains-and-machine-learning

Jetpack Compose: Rocketing in the right direction

Jetpack Compose Logo

Last week was Labweek, one of the best traditions at Comcast, where developers and designers can take some time to pursue ideas, learn new technologies, or just work with folks they don’t usually get to work with. Though every week can be Labweek in my world at Comcast Labs, I still enjoy working on something completely different from the projects-of-record for a week. You can read about a few of my previous Labweek prototypes here.

For this Labweek, I took the opportunity to build something with Jetpack Compose, Google’s new UI toolkit for building Android apps. In the last couple of years I have worked quite a bit with ReactJS and SwiftUI, and a LOT with Flutter (read my thoughts on Flutter here), and it was interesting to see how all of them were starting to adopt the same patterns. From the sessions at I/O and conversations at the Philadelphia Google Developers’ Group that I help run, I knew Jetpack Compose was headed in the same direction, but it took building an app to realize just how close it had gotten.

Compose vs. SwiftUI

Compose feels the closest to SwiftUI, borrowing not only the idea of lightweight layout containers (Rows, Columns, etc.) but also the use of Modifiers to manipulate the View … sorry … the Composable. Even the structure of a typical Compose file, with one function that defines your composable and another, annotated with a preview annotation, that lets you preview your layout, is identical to the SwiftUI edit/preview experience. The similarity even extends to the documentation: check out the SwiftUI tutorial and the Compose tutorial page layouts, with text on the left that scrolls alongside the code on the right. Heck, even my bugs are similar in both frameworks 😉
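
To make that concrete, here is roughly what such a file looks like; a trivial sketch with hypothetical names, using the standard Material Text and the @Preview tooling annotation:

import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp

// One function defines the composable; Modifiers tweak it,
// much like SwiftUI's view modifiers.
@Composable
fun Greeting(name: String) {
    Column(modifier = Modifier.padding(16.dp)) {
        Text(text = "Hello, $name!")
    }
}

// A second function, annotated with @Preview, renders it in the IDE,
// the rough analog of a SwiftUI PreviewProvider.
@Preview(showBackground = true)
@Composable
fun GreetingPreview() {
    Greeting(name = "Compose")
}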

Compose vs. Flutter

While Flutter is similar to Compose, I prefer Compose’s modifier approach to Flutter’s approach of composing behavior through a hierarchy of wrapper widgets. That said, Flutter’s hot reload on a device or simulator beats Compose’s preview experience, especially since previews cannot fetch live data from the cloud and my design leaned heavily on remote images.

I also find creating animations in Flutter a bit cumbersome, having to create AnimationControllers, TickerProviderMixins, Curves, and callbacks. Jetpack Compose has plenty of complexity in its own animation system as well, but I got a lot of mileage out of just using AnimatedVisibility with enter and exit animations (see the sketch below), though SwiftUI with its `withAnimation` blocks is the clear winner here.

Flowchart describing the decision tree for choosing the appropriate animation API
Compose’s animation system isn’t lacking in complexity either
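
Here is a minimal sketch of the AnimatedVisibility pattern I leaned on; the content is hypothetical, and on the Compose version available at the time the API still required the experimental-animation opt-in:

import androidx.compose.animation.AnimatedVisibility
import androidx.compose.animation.ExperimentalAnimationApi
import androidx.compose.animation.expandVertically
import androidx.compose.animation.fadeIn
import androidx.compose.animation.fadeOut
import androidx.compose.animation.shrinkVertically
import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@OptIn(ExperimentalAnimationApi::class) // needed on the Compose version I was using
@Composable
fun ExpandableDetails() {
    var expanded by remember { mutableStateOf(false) }
    Column {
        Button(onClick = { expanded = !expanded }) {
            Text(if (expanded) "Hide details" else "Show details")
        }
        // The content below animates in and out whenever `expanded` flips;
        // no AnimationController or TickerProvider required.
        AnimatedVisibility(
            visible = expanded,
            enter = fadeIn() + expandVertically(),
            exit = shrinkVertically() + fadeOut()
        ) {
            Text("Some details that fade and expand into view")
        }
    }
}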

Random Lessons Learned

There were a couple of gotchas as I was building the app. Some functionality that I would consider core, like fetching remote images or making your app aware of things like WindowInsets, is only available as part of an external (Google-authored) library called Accompanist. I had a bit of a hiccup because my version of that library wasn’t compatible with the version of Compose I was using. I do hope these capabilities get added to Jetpack Compose core instead of remaining an external dependency I have to track (I prefer the batteries-included approach). Also, if you plan to use the official samples as a starting point, note that some of them (or at least the one we were using) have a compiler instruction to fail on all warnings (that took like two hours to figure out).
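
For the insets piece, here is a minimal sketch of what that looked like with the accompanist-insets artifact as it existed at the time (the screen and its content are placeholders; this particular API has since been superseded by insets support in Compose itself):

import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import com.google.accompanist.insets.ProvideWindowInsets
import com.google.accompanist.insets.statusBarsPadding

@Composable
fun InsetAwareScreen() {
    // ProvideWindowInsets makes window inset values available to children;
    // statusBarsPadding() then keeps content from sliding under the status bar.
    ProvideWindowInsets {
        Column(
            modifier = Modifier
                .fillMaxSize()
                .statusBarsPadding()
        ) {
            Text("Content that respects the system bars")
        }
    }
}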

Wrapping Up

A week of intense coding in Compose gave me a lot of appreciation for this new direction for Android development. I was pleasantly surprised by how productive I felt working on a completely custom Android UI. There are still a lot of features of Compose I haven’t tried yet, but I am definitely looking forward to learning more. At this moment Compose is not officially out (the latest version is Release Candidate 1, which came out a few days ago), but I am sure Compose will enable some truly amazing UI experiences on Android in the next few months!

Notes from A16Z’s NFT Virtual Summit

NFTs have been in the news a lot lately, jolted into the mainstream spotlight by Beeple’s $69M sale of his NFT art piece, Everydays: The First 5000 Days. As with most things crypto, there are as many passionate believers as there are skeptics of this new model for digital collectables. Regardless, on a purely technical level, I have been fascinated by digital collectables for a while (ever since CryptoKitties broke the Ethereum blockchain) and have been trying to learn more about the technical underpinnings over the last couple of weeks.

A16Z, the venture capital firm, has been a great source of information on the whole crypto space for a while, and it organized an online summit this afternoon, bringing in some big names in the field to speak about the state of the NFT space. Below are some of my takeaways from the event.

On NFTs in general

Dan Boneh from Stanford University and Chris Dixon from A16Z kicked off the event with a fireside discussion on the state of the NFT space in general. Some interesting points of discussion included:

  • How one of the big reasons Decentralized Finance (DeFi) exploded was the composable nature of blockchain finance primitives. NFTs could offer similar capabilities. For example, you could wrap non-fungible ERC-721 tokens in fungible ERC-20 wrappers.
  • How we are already starting to see NFTs used as collateral, just as other assets tend to be.
  • Could quantum computing destroy blockchains and therefore the value of NFTs? (Nope; we have quantum-resistant algorithms that chains can move to as quantum attacks start becoming more probable.)

On NFT Marketplaces

It was fascinating to hear Kayvon Tehranian from Foundation and Devin Finzer from OpenSea talk about their NFT marketplaces. I missed a big part of the latter’s talk, but I have been really curious about how Foundation works and it was great to hear a bit about that.

  • Every action on Foundation (listings, bids, etc.) is recorded on the blockchain, and the asset itself is stored on IPFS. The system only works with non-custodial wallets (sorry, Coinbase).
  • While it is technically possible for someone to upload an asset they don’t own, Foundation manages this with a pretty exclusive invite process, with only current artists being able to invite new artists (which does feel a bit centralized, IMO).
  • Since everything is managed in a decentralized way, it is theoretically possible to buy an asset on Foundation and sell it on a different marketplace.
The Nyan Cat NFT sold on Foundation by its original creator

Dieter Shirley and the Flow Blockchain

This may have been my favorite session since I am already interested in Flow. Dieter Shirley is the CTO of Dapper Labs but really got famous when he and his team built CryptoKitties while still working at Axiom Zen. Flow is a new blockchain designed for applications rather than financial instruments, and is best known for running NBA Top Shot NFTs.

Flow’s architecture is driven by 3 goals:

  • Enabling developers to build tangible products
  • Simple on-boarding for non-crypto-nerds
  • Higher capacity to enable web-scale products

He also talked about the decision to build their own chain instead of using Ethereum (“wasn’t easy”), though he does feel that interop among different chains is going to happen anyway.

As for his one regret with the ERC-721 specification that he drafted: he wishes they hadn’t punted on the metadata specification for ERC-721 tokens (“it was a classic bikeshedding moment and there were too many people with too many opinions”).

As for the challenges he sees with NFTs in general, he feels that establishing the legitimacy of an NFT, balancing scarcity and abundance, and interacting with the traditional financial system will remain the big ones for the near future.

DAOs and NFTs

The last talk of the evening was by digital artist pplpleasr, who spoke a little about her process for creating NFTs but mostly about the birth of PleasrDAO, a Decentralized Autonomous Organization that formed organically to acquire her Uniswap NFT and now exists as a community that buys other NFTs and leverages its assets to power socially conscious projects on the blockchain. Her talk ended with her revealing her new NFT titled “Apes Together Strong”, with all proceeds going towards charities supporting autism advocacy.

Apes Together Strong by pplpleasr

I love the idea of DAOs, and the talk, along with the sentiment on her slide below, was the perfect one to end on.

Demo-Driven Development

I recently finished reading Ken Kocienda’s “Creative Selection”, a book about his time on the original iPhone engineering team. Most of the book is about his work building the soft keyboard for the iPhone and coming up with systems that let users type productively on a glass surface without any tactile feedback (with the specter of the awful handwriting recognition software that killed the Newton looming in the background).

So much of this book spoke to me personally. Most of my career has been spent on very early-stage projects where we were still figuring out what the products and technologies were trying to be. As part of Comcast Labs, demos and prototypes remain the bread and butter of my daily work. And while I have met a few other folks who, at least at some point in their careers, have had to work on these, I haven’t found many books that talk about the process of building prototypes and demo-ware.

Prototypes vs Demo-ware

One of the things I have learned over the years is that prototypes and demo-ware can be very different things. The primary goal of a prototype is to learn something (Houde and Hill’s excellent paper from 1997 breaks that down into Implementation, Role, and Look-and-Feel). Demo-ware is more about getting people excited about the possibilities.

That said, a great demo can sink your product if it sets unrealistic expectations. Among some of my prototyper friends who now prototype at different companies, we still give each other “Clinkle bucks” for a good demo. For those who may not remember it, Clinkle was a once-hot-in-Silicon-Valley startup that raised $30M on the back of a great demo. The history of Clinkle is a fascinating read, but it highlights how a great demo made with no regard for feasibility cannot save your company.

A few thoughts on demos

Here are some of my personal notes on making good demos:

  • Get to the point: You only have a few minutes for a good demo, so get to the interesting part fast. Do not waste your time implementing general software constructs like real login systems; fake as much as you can.
  • Have the right team: Quite a number of devs I have met consider demos a waste of time. Make sure your team is passionate about making great demos.
  • Remember theater: Lean into a bit of theater with good design and animations. Choreography is important.

One final thing I’d like to say is that, in terms of tooling, it’s a bit of a bummer that tools like Flash are dead. While I love JavaScript, it doesn’t offer the same ease of building amazing visuals that Flash did (Bas Ording, Steve Jobs’s main interaction lead responsible for many iOS interactions, did most of his work in Macromedia Director). A couple of my friends at other companies have moved to Unity, but building demos for 2D experiences in a 3D game engine is not ideal. We need better and more approachable visual tools for sharing ideas.

2020 Retrospective

Between a global pandemic and a shocking display of the ugliest parts of human nature, 2020 will go down as one of the worst years to be around. Compared to some of the other heartbreaking stories I keep reading, my family and I were lucky to only be inconvenienced, not devastated, by everything that happened in 2020.

The tl;dr version of this post is: ‘I got MARRIED 😱 … and, yeah, I wrote some code’

Married

After way too long, Dana and I finally got married. The pandemic ruined our more elaborate plans, but we had drawn out the engagement for too long already and all our travel plans were on hold, so having a small ceremony in my sister-in-law’s backyard seemed like a good idea. We live-streamed most of it on a (non-interactive) Zoom and an (interactive) Google Meet virtual meeting, so we did get a big audience for the event. I wish my parents had been able to join us in person, but we’ll do some kind of IRL party when we go to India, whenever the world feels safer.

Code

One of the interesting parts about working at Comcast Labs is that you get to work on a number of projects using very different technologies. In previous years it has been a healthy mix of VR, blockchains, chatbots, machine learning, etc. In terms of domain, this year was a lot more focused. Most of my explorations were in the space of customer-experience bots and efforts to improve the Xfinity Assistant, coming at it from a lens of three to five years out. Over the year I built a knowledge graph editor using Grakn, explored the use of structured data, especially Microdata, within chatbots, and worked on adding more intelligence to the edge (i.e., mobile apps) to power the diagnostic flows.

I also enjoyed working on some personal mobile apps using Flutter, Ruby on Rails and Firebase. I am blown away by the capabilities of Firebase and hope to share some learnings on that on this blog soon.

Here is a very unscientific quantitative breakdown of what I spent my time on this year.

The one thing that is conspicuously missing here is blockchains. While I still help run the Comcast Blockchain and Decentralized Technologies Guild, I didn’t get to spend any actual coding time on it in 2020. Here’s hoping for 2021 🤞

Community

The Google Developers’ Group that I help run went virtual this year, like every other Meetup (I wrote a bit about that earlier). I miss hanging out in person with the friends I have made there but thanks to Google Meet and Slack, we are still alive and kicking.

The one change this year was a lot more interactions with the Google Cloud teams as well as GDG-Cloud Philly. With my own interest in Cloud Services growing, the joint sessions with the other two groups were definitely super interesting.

Books

This hasn’t been the greatest year in terms of reading, but that is okay, since my focus was more on producing, and given the time limitations, something had to give.

2021 is starting off on some positive notes, so I hope it’s a better year in general. Have a great 2021 👍