Hosting Meetups on Twitter Spaces

What Is Twitter Spaces, and Is It Different From Clubhouse?

Like a lot of technical meetup groups, the Philadelphia Google Developer Group that I help manage has been holding its monthly meetings virtually for the last year and a half using Google Meet and (more recently) Bevy. I am really grateful to the community, and especially our regulars, who are open to attending another video meeting after a day full of them. That said, our attendance is definitely lower than when we used to do in-person meetups. Here are my top three theories why:

  • Zoom fatigue: No secret here.
  • Marketing: Like most developers, we suck at marketing, and it is likely that a lot of people don’t know about our event.
  • Focus: GDG events span a diverse array of technologies, from Android (where we started) to Firebase, Flutter, Google Cloud, and web technologies. It’s hard to build a cohesive community around a set this diverse. One option might be a series of sub-groups under GDG: GDG-Cloud (which already exists), but also GDG-Flutter, GDG-Android, etc. We’ll see.

2021 has seen the explosion of new voice-only social platforms, starting with Clubhouse and including things like Twitter Spaces, Spotify Greenroom, etc. I have to say I have enjoyed participating in a few of these sessions – there is something less stressful about not having to worry about being on camera, or about feeling antisocial for keeping your camera off during an event.

So, in the spirit of exploration, we tried our first GDG meeting as a Twitter Spaces event last month, and I think it went well, though I did learn a few things I hadn’t really thought about before. Below are some learnings from my experience:

Not everyone has a Twitter account

I pretty much live on Twitter, so I didn’t even realize that not all of our members had active Twitter accounts. In the end, everyone who lacked an active account had a dormant one they could resuscitate. I wish Spaces had a guest mode, but I guess that would defeat the whole reason Twitter has this feature anyway.

It is not an online meeting

Unlike video conferencing systems like Zoom or Meet, which allow for 100 or more simultaneous speakers, Spaces only allows 10. Spaces isn’t so much about collaboration as it is a “narrowcast”: you can have a panel of speakers, but most people are expected to listen. You can swap speakers in and out as people wish to speak, but it’s a very different model from a fully democratic online meeting.

Tooling is pretty good

The management features are pretty good, and I was able to mute, boot, and swap speakers as needed, but it definitely took some effort on my part.

Sharing information is hard

It is also hard to share links or show something off on your screen. One trick someone mentioned towards the end of the event was to share a Twitter thread with the audience that anyone can add to. It’s a bit of a hack, but it works. This is also how I have heard Clubhouse panels share information: on their individual Twitter profiles.

TLDR:

Overall I really enjoyed our first Twitter Spaces event (what is the verb for this?), though it definitely had a bit of a learning curve. And because it was a very public event, a few folks who had never attended a GDG event before joined us, which was my primary motivation for trying the format. We are planning our next Spaces event now, so if you are interested, follow me or the GDG Philadelphia Twitter account for the announcement.

Metaverse

Mark Zuckerberg's avatar presenting the metaverse at an event Thursday.

The first time I really understood Mark Zuckerberg’s ambition was when he announced Facebook login. I had been developing prototypes on the Facebook platform for a bit but the idea that Facebook would scale its infrastructure to support login buttons and the friend graph everywhere on the internet blew my mind.

Whatever you may want to accuse Mark Zuckerberg of, a lack of ambition cannot be one of them. And he pairs that with an uncanny ability to be right in the long term while being ridiculed by folks steeped in the conventional wisdom of the time (Instagram for a billion? WhatsApp for $18B?)

So I am mulling over Facebook’s pivot to the Metaverse pretty carefully. While I am excited about the possibility of the metaverse, and having previously worked with VR for a couple of years I do find it a lot of fun, I am not convinced by Facebook’s current idea of the Metaverse in general or Horizon in particular.

Metaverse, metaverse and Horizon

So there are three ideas that often get intertwined in my head:

  • The metaverse (lowercase m) – A realm that merges the digital and the physical, as originally defined by Neal Stephenson and imagined by a lot of technology pioneers since.
  • The Metaverse – As imagined by Meta: a constellation of VR-specific apps that people will jump between. I guess we can call it the Meta-Metaverse; not confusing at all
  • Horizon – a VR social app. Horizon also feels like a metaverse … that lives inside a larger Metaverse? Kinda like Facebook on the internet, or it would be, if only Facebook could control which pages were allowed on the real internet.

Meta-Metaverse

My primary gripe with the larger Meta-Metaverse is the app model. The iPhone launched with a lot of content: the entire internet. One of the biggest joys of owning the original iPhone was the ability to use the web, not a neutered mobile-centric version. Over time, native apps supplanted web experiences by delivering more dedicated versions of the experience, but that came afterwards.


The problem with going app-first is that building apps is hard given the current SDKs and workflows. And while WebVR kinda helps, there are very few tools that let novices design good VR experiences. This makes building VR apps expensive, and since the only VR apps that currently make money are games, VR starts feeling more like a game console than a smartphone.

Meta’s primary goal needs to be to figure out a way to bring the web into VR.

Of course, the challenge is that the web isn’t really designed for VR, just as the 2006 internet wasn’t really designed for mobile users. But you gotta pull a page out of Apple’s book and create UI elements around those limitations, just as the iPhone replaced the combo box with the iconic tumbler wheel.

iOS’s iPhone optimized native controls for the web

Oculus should supply a whole range of controls that replace web elements and offer opportunities to interact with content in a VR-specific way, for example:

  • Create VR-specific form controls that replace on-page controls like combo boxes, text inputs, date fields, etc. Not just functional but fun to use.
  • Convert on-page text to speech
  • Open image carousels automatically
  • Make embedded videos that you can control (i.e. not YouTube embeds… unless you can) open in the virtual theater experience. Fill the virtual theater with others who are watching the same video at the same time
  • Allow web developers to control the above aspects via HTML meta tags

etc etc…

Basically, do everything possible to break outside the 2D window that current VR browsers offer. I think there are a lot of experiments that can be done there.

Horizon

The primary challenge for Horizon, I feel, is its synchronous nature (at least as shown in the demos shared so far).

Facebook social experiences that became huge were asynchronous. Words with Friends for example was an async version of Scrabble. Farmville didn’t require all your friends to be online at the same time. Most of my social experiences today are async.

By contrast, Horizon experiences seem to require everyone to be online and dedicated to the experience (i.e. not multitasking) at the same time. Success here will depend on really killer experiences and/or a big leap in VR multitasking capabilities, both technical (i.e. being able to run many apps at the same time) and experiential.

We don’t know much about Horizon yet, so I’ll end with my feeling that it is very reminiscent of web portals, where you could only interact with a few dedicated experiences. In the long run, that isn’t what is going to win.

… guess we’ll see…

I think it’s interesting to realize that I am now much older than when I first got on Facebook, and I might be lacking the less-critical optimism that you need in the early days of any technology. But, like most geeks I guess, I have always enjoyed the idea of a metaverse. I just think it needs to be a very different thing than what is currently being imagined.

Or I could be wrong and will be eating these words in 5 years or so ⏰

Slides from my PHLAI Talk

Last week I was invited to give a talk at the PHLAI conference on the intersection of Blockchains and Machine Learning, two areas I have been working with a fair bit over the last couple of years. My hope with the talk was to get more AI practitioners interested in the Blockchain space, which I feel perfectly complements the AI space by providing a layer of trust around black-box AI systems.

PowerPoint presentations do not make good blog posts, so I’ll elaborate on some of the ideas in the future, but for now, here is my deck from the presentation:

https://speakerdeck.com/arpit/living-at-the-intersection-of-blockchains-and-machine-learning

Jetpack Compose: Rocketing in the right direction

Jetpack Compose Logo

Last week was Labweek, one of the best traditions at Comcast, where developers and designers can take some time to pursue ideas, learn new technologies, or just work with folks they don’t usually get to work with. Though every week can be labweek in my world working at Comcast Labs, I still enjoy working on something completely different from the projects-on-record for a week. You can read about a few of my previous labweek prototypes here.

For this labweek, I took the opportunity to build something with Jetpack Compose, Google’s new UI toolkit for building Android apps. In the last couple of years I have worked quite a bit with ReactJS and SwiftUI, and a LOT with Flutter (read my thoughts on Flutter here), and it was interesting to see how all of them were starting to adopt the same patterns. From the sessions at I/O and conversations at the Philadelphia Google Developers’ Group that I help run, I knew Jetpack Compose was also headed in the same direction, but it took building an app to realize how close it was.

Compose vs. SwiftUI

Compose feels the closest to SwiftUI, borrowing not only the idea of lightweight layout containers (Rows, Columns, etc.) but also the use of Modifiers to manipulate the View… sorry… the Composable. Even the structure of a typical Compose file, with one function that defines your composable and another, annotated with a preview annotation, that lets you preview your layout, is identical to the SwiftUI edit/preview experience. The similarity even extends to the documentation: check out the SwiftUI tutorial and the Compose tutorial page layouts, with text on the left that scrolls alongside the code on the right. Heck, even my bugs are similar in both frameworks 😉
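For illustration, here is a minimal sketch of that file structure (the names and content are made up for this example, not from a real app):

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.padding
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp

// A composable built from lightweight layout containers,
// with a Modifier doing the styling work (much like SwiftUI's modifiers).
@Composable
fun SpeakerCard(name: String, topic: String) {
    Row(modifier = Modifier.padding(16.dp)) {
        Column {
            Text(text = name)
            Text(text = topic)
        }
    }
}

// The preview function Android Studio renders beside the editor,
// mirroring SwiftUI's canvas experience.
@Preview
@Composable
fun SpeakerCardPreview() {
    SpeakerCard(name = "Ada", topic = "Jetpack Compose")
}
```

The `@Preview`-annotated function is just a regular composable that calls the one you are designing, which is what makes the edit/preview loop feel so similar to SwiftUI.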

Compose vs. Flutter

While Flutter is similar to Compose, I prefer Compose’s modifier approach to Flutter’s approach of composing behavior through a hierarchy of widgets. On the other hand, I prefer Flutter’s hot reload on a device/simulator to the preview experience in Compose, especially since previews cannot fetch live data from the cloud and my design leaned heavily on remote images.

I also find creating animations in Flutter a bit cumbersome, with its AnimationControllers, TickerProviderMixins, Curves, and callbacks. Jetpack Compose has plenty of complexity in its own animation system as well, but I got a lot of mileage out of just using AnimatedVisibility with enter and exit animations. SwiftUI, with its `withAnimation` blocks, is the clear winner here, though.

Flowchart describing the decision tree for choosing the appropriate animation API
Compose’s animation system isn’t lacking in complexity either
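As a rough sketch of the AnimatedVisibility pattern I leaned on (the names here are my own; at the time of writing the API is still marked experimental, hence the opt-in annotation):

```kotlin
import androidx.compose.animation.AnimatedVisibility
import androidx.compose.animation.ExperimentalAnimationApi
import androidx.compose.animation.fadeIn
import androidx.compose.animation.fadeOut
import androidx.compose.animation.slideInVertically
import androidx.compose.animation.slideOutVertically
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// Toggle a composable in and out with enter/exit transitions;
// `visible` would typically come from observed state.
@OptIn(ExperimentalAnimationApi::class)
@Composable
fun Banner(visible: Boolean) {
    AnimatedVisibility(
        visible = visible,
        enter = fadeIn() + slideInVertically(),
        exit = fadeOut() + slideOutVertically()
    ) {
        Text("Session starting soon")
    }
}
```

Enter and exit transitions combine with `+`, which covers a surprising number of cases without ever touching an animation controller.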

Random Lessons Learned

There were a couple of gotchas as I was building the app. Some functionality that I would consider core, like fetching remote images or making your app aware of things like WindowInsets, is only available as part of an external (Google-authored) library called Accompanist. I had a bit of a hiccup because my version of that library wasn’t compatible with the version of Compose in my Android IDE. I do hope these capabilities get added to Jetpack Compose core instead of living in an external dependency I have to track (I prefer the batteries-included approach). Also, if you plan to use the official samples as a starting point, note that some (or at least the one I was using) have a compiler instruction to fail on all warnings (that took me about two hours to figure out).
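For reference, remote image loading went through Accompanist’s Coil wrapper, along these lines (treat the exact function names as a snapshot of Accompanist’s 0.x releases; the API has moved around between versions):

```kotlin
import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import com.google.accompanist.coil.rememberCoilPainter

// Load an image from a URL via Accompanist's Coil integration,
// since nothing in Compose core does this.
@Composable
fun RemoteImage(url: String) {
    Image(
        painter = rememberCoilPainter(request = url),
        contentDescription = null
    )
}
```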

Wrapping Up

A week of intense coding in Compose gave me a lot of appreciation for this new direction for Android development. I was pleasantly surprised by how productive I felt working on a completely custom Android UI. There are still a lot of Compose features I haven’t tried yet, but I am definitely looking forward to learning more. At this moment Compose is not officially out (the latest version is release candidate 1, which came out a few days ago), but I am sure it will enable some truly amazing UI experiences on Android in the next few months!

Notes from A16Z’s NFT Virtual Summit

NFTs have been in the news a lot lately, jolted into the mainstream spotlight by the $69M sale of Beeple’s NFT art piece Everydays: The First 5000 Days. As with most things crypto, there are as many passionate believers as there are skeptics of this new model for digital collectables. Regardless, on a purely technical level, I have been fascinated by digital collectables for a while (ever since CryptoKitties broke the Ethereum Blockchain) and have been trying to learn more about their technical underpinnings for the last couple of weeks.

A16Z, the venture capital firm, has been a great source of information on the whole crypto space for a while, and this afternoon it organized an online summit bringing in some big names in the field to speak about the state of the NFT space. Below are some of my takeaways from the event.

On NFTs in general

Dan Boneh from Stanford University and Chris Dixon from A16Z kicked off the event with a fireside discussion on the state of the NFT space in general. Some interesting points of discussion included:

  • How one of the big reasons Decentralized Finance (DeFi) exploded was the composable nature of Blockchain finance primitives. NFTs could offer similar capabilities; for example, you could wrap non-fungible ERC-721 tokens in fungible ERC-20 wrappers.
  • How we are already starting to see NFTs used as collateral, just as other assets tend to be
  • Could quantum computing destroy Blockchains and therefore NFT value? (Nope: we have quantum-resistant algorithms that we can move to as QC attacks become more probable)

On NFT Marketplaces

It was fascinating to hear Kayvon Tehranian from Foundation and Devin Finzer from OpenSea talk about their NFT marketplaces. I missed a big part of the latter’s talk, but I have been really curious about how Foundation works, and it was great to hear a bit about that.

  • Every action on Foundation (listings, bids, etc.) is recorded on the Blockchain, and the asset itself is stored on IPFS. The system only works with non-custodial wallets (sorry, Coinbase)
  • While it is technically possible for someone to upload an asset they don’t own, Foundation manages this with a pretty exclusive invite process, where only current artists can invite new artists (which does feel a bit centralized, IMO)
  • Since everything is managed in a decentralized way, it is theoretically possible to buy an asset on Foundation and sell it on a different marketplace

The Nyan Cat NFT sold on Foundation by its original creator

Dieter Shirley and the Flow Blockchain

This may have been my favorite session, since I am already interested in Flow. Dieter Shirley is the CTO of Dapper Labs but really got famous when he and his team built CryptoKitties while still at Axiom Zen. Flow is a new blockchain designed for applications rather than financial instruments, and it is famous for running NBA Top Shot NFTs.

Flow’s architecture is driven by 3 goals:

  • Enabling building tangible products
  • Simple on-boarding for non-crypto-nerds
  • Higher capacity to enable web-scale products

He also talked about the decision to build their own chain instead of using Ethereum (“wasn’t easy”), though he does feel that interop among different chains is going to happen anyway.

As for his one regret with the ERC-721 specification that he drafted: he wishes they hadn’t punted on the metadata specification for ERC-721 tokens (“it was a classic bikeshedding moment and there were too many people with too many opinions”).

As for the challenges he sees with NFTs in general, he feels that the legitimacy of NFTs, balancing scarcity with abundance, and interacting with the traditional financial system will remain the big ones for the near future.

DAOs and NFTs

The last talk of the evening was by digital artist pplpleasr, who spoke a little about her process for creating NFTs but mostly about the birth of PleasrDAO, a Decentralized Autonomous Organization that formed organically to acquire her Uniswap NFT and now exists as a community that buys other NFTs and leverages its assets to power socially conscious projects on the Blockchain. Her talk ended with the reveal of her new NFT, titled “Apes Together Strong”, with all proceeds going to charities supporting autism advocacy.

Apes Together Strong by pplpleasr

I love the idea of DAOs, and the talk, along with the sentiment on her slide above, was the perfect one to end on.

Demo-Driven Development

I recently finished reading Ken Kocienda’s “Creative Selection”, a book about his time on the original iPhone engineering team. Most of the book covers his work building the soft keyboard for the iPhone and coming up with systems to let users type productively on a glass surface without any tactile feedback (with the specter of the awful handwriting recognition software that killed the Newton lurking in the background).

So much of this book spoke personally to me. Most of my career has been spent on very early-stage projects where we were still figuring out what the products and technologies were trying to be. As part of Comcast Labs, demos and prototypes remain the bread and butter of my daily work. And while I have met a few other folks who, at least at some point in their careers, have had to build these, I haven’t found many books that talk about the process of building prototypes and demo-ware.

Prototypes vs Demo-ware

One of the things I have learned over the years is that prototypes and demo-ware can be very different things. The primary goal of a prototype is to learn something (Houde and Hill’s excellent paper from 1997 breaks that down into Implementation, Role, and Look-and-Feel). Demo-ware is more about getting people excited about the possibilities.

That said, a great demo can sink your product if it sets unrealistic expectations. Among some of my prototyper friends who now prototype at different companies, we still give each other “Clinkle bucks” for a good demo. For those who may not remember it, Clinkle was a once-hot-in-Silicon-Valley startup that raised $30M on the back of a great demo. The history of Clinkle is a fascinating read, and it highlights how a great demo made with no regard for feasibility cannot save your company.

A few thoughts on demos

Here are some of my personal notes on making good demos:

  • Get to the point: You only have a few minutes for a good demo, so get to the interesting point fast. Do not waste your time implementing general software constructs like real login systems, etc. Fake as much as you can.
  • Have the right team: Quite a number of devs I have met consider demos a waste of time. Make sure your team is passionate about making great demos
  • Remember Theater: Lean into a bit of theater with good design and animations. Choreography is important

One final thing I’d like to say: in terms of tooling, it’s a bit of a bummer that tools like Flash are dead. While I love JavaScript, it doesn’t make building amazing visuals as easy as Flash did (Bas Ording, Steve Jobs’s main interaction lead responsible for many iOS interactions, did most of his work in Macromedia Director). A couple of my friends at other companies have moved to Unity, but building demos for 2D experiences in a 3D game engine is not ideal. We need better and more approachable visual tools for sharing ideas.

2020 Retrospective

Between a global pandemic and a shocking display of the ugliest parts of human nature, 2020 will go down as one of the worst years to have lived through. Compared to some of the heartbreaking stories I keep reading, my family and I were lucky to be only inconvenienced, not devastated, by everything that happened in 2020.

The tl;dr version of this post is: ‘I got MARRIED 😱 … and, yeah, I wrote some code’

Married

After way too long, Dana and I finally got married. The pandemic ruined our more elaborate plans, but we had drawn out the engagement for too long already, and all our travel plans were on hold for a while, so a small ceremony in my sister-in-law’s backyard seemed like a good idea. We live-streamed most of it on a (non-interactive) Zoom and an (interactive) Google Meet virtual meeting, so we did get a big audience for the event. I wish my parents had been able to join us in person, but we’ll have some kind of IRL party when we go to India, whenever the world feels safer.

Code

One of the interesting parts about working at Comcast Labs is that you get to work on a number of projects using very different technologies. In previous years it has been a healthy mix of VR, Blockchains, chatbots, machine learning, etc. In terms of domain, this year was a lot more focused: most of my explorations were in the space of customer-experience bots and efforts to improve the Xfinity Assistant, coming at it from a lens of 3-5 years out. Over the year I built a Knowledge Graph editor using Grakn, explored the use of Structured Data, especially Microdata, within chatbots, and worked on adding more intelligence to the edge (i.e. mobile apps) to power diagnostic flows.

I also enjoyed working on some personal mobile apps using Flutter, Ruby on Rails and Firebase. I am blown away by the capabilities of Firebase and hope to share some learnings on that on this blog soon.

Here is a very unscientific quantitative breakdown of what I spent my time on this year:

The one thing conspicuously missing here is Blockchains. While I still help run the Comcast Blockchain and Decentralized Technologies Guild, I didn’t get to spend any actual coding time on it in 2020. Here’s hoping for 2021 🤞

Community

The Google Developers’ Group that I help run went virtual this year, like every other Meetup (I wrote a bit about that earlier). I miss hanging out in person with the friends I have made there but thanks to Google Meet and Slack, we are still alive and kicking.

The one change this year was a lot more interactions with the Google Cloud teams as well as GDG-Cloud Philly. With my own interest in Cloud Services growing, the joint sessions with the other two groups were definitely super interesting.

Books

This hasn’t been the greatest of years in terms of reading, but that is a good thing, since my focus was more on producing, and given the time limitations, something had to give.

2021 is starting off on some positive notes, so I hope it’s a better year in general. Have a great 2021 👍

Managing professional communities during a pandemic

I recently represented the Philadelphia Google Developers’ Group, a group I have helped manage for close to 9 years, at Technical.ly’s Super Meetup, an event that brought together local technology and entrepreneurship communities for an evening of social festivities. And while Zoom events don’t have the same vibe as the in-person gatherings the Super Meetup has traditionally been, the Technical.ly crew did a good job bringing people together for an evening of community talks and nerd trivia 🙂

As part of the event, they had the group leads talk about how their groups have fared during the pandemic. You can read all the responses here; I am pasting mine below as well.


The question gave me a little time to reflect on our setup, and generally we are making the best we can of a really weird situation. I am really looking forward to a time when we’ll be able to meet face to face again, but that doesn’t seem like it’s going to happen anytime soon. On the plus side, the virtual nature of all meetups has given us more opportunities to collaborate beyond our local neighborhood.

Probably the biggest change has been the activity on our Slack workspace, which we had only recently moved to as we migrated away from the larger PhillyDev Slack community. That decision seems to have been the right one, and I hope more folks from our Meetup.com page join us there.

The virtual format has also allowed for some non-traditional events, like Mike Zorn’s book club, which meets every Monday evening to work through books on Kotlin and Android programming, and some co-viewing events we are running in collaboration with the Google Cloud team that have been personally very educational as well.

I recently attended a virtual event hosted by Promptworks, which was really interesting as well. I hadn’t realized until that event that Zoom offers breakout rooms, which is great; I might try that for our next event. Anything to lower the speaker-to-attendee ratio, which makes the conversations feel more intimate.

Catering seems to be becoming a part of some events as well. The Google events we helped with had catered lunches through Grubhub, which was great (who knew Grubhub had a corporate events group 🤷‍♂️), though the Promptworks team won that round with some amazing food and wine delivered to the attendees. It might be too expensive for monthly events but could be an option for special occasions.

Video collaboration tools still feel poorly designed for professional communities, though. Most are designed around a talking-head-plus-shared-screen experience and aren’t nearly as collaborative or inclusive as in-person events. There is an opportunity for a product here, though it would have to come with a very different business model, since most communities don’t charge their members to attend events and so can’t afford to pay for individual seats.

Maybe it’s something a company like LinkedIn could be interested in offering to professional communities?

Thoughts on Web Conferences

Yesterday I attended the L3 AI online conference on digital assistants, organized by Rasa. I am still working on my notes from the conference, which I’ll share here at some point, but I was really pleasantly surprised by the format. While the current pandemic has forced a lot of conferences online, most have just become Zoom calls and, honestly, are exhausting for more than an hour. I actually attended this conference for the whole day, and it was the best online conference format I have seen so far.

The conference was powered by Accelevents, so good job, folks, though I am sure they have competition in that space. I have also heard good things about Run the World (actually, I haven’t; the only thing I have heard about them is their investment from a16z 😁, but the features listed on their site look interesting).

So here are some thoughts on my experience with L3

Pre-Conference

Both Accelevents and Run the World allow users to create a profile ahead of time. This lets users reach out to others who may share the same interests during the event or when they are algorithmically paired (see below). RTW lets you create video profiles as well, which is cool

Socializing

Connecting with others is probably the most important part of a conference (most session videos end up online anyway). The Zoom experience is to just show as many videos of people as possible. That doesn’t really work, since only one person can talk at a time and a number of people are either multitasking or otherwise hesitant to share their video.

The L3 conference page had a link to socialize, which would randomly pair you with another attendee. I didn’t use it, mostly because there wasn’t much time between sessions during the day. Instead of one-on-ones, I would have liked to be placed into small groups, which would have felt a little less intense.

Prerecorded Scheduled Sessions

Most of the talks were prerecorded sessions, with the speaker and other attendees discussing the talk in a chat window next to the video player. The sessions unlocked at different times, so it did feel a bit like a conference track.

The advantages of the prerecordings were that:

  1. You could pause and rewind the sessions right there if you missed something
  2. The video and audio quality of the sessions was good (none of the “can you hear me now” moments)
  3. Some presenters had even done some post-production work on their videos, which was nice

The event page included a video page and a side panel that included tabs for chat, polls, attendees and questions. As with a lot of tabbed interfaces, the out-of-sight / out-of-mind thing happened and I never looked at the non-default (chat) tabs.

Unlike video, chat allows many people to talk to each other at the same time, which I think is better. I was able to see some interesting discussions between attendees on various topics.

Expo

An interesting aspect of the conference was a virtual expo tab, where every company showing off its products could have people available for a Zoom video chat (yeah, they were using Zoom, which I didn’t know could be embedded in a webpage). That was neat.

Final thoughts

I really got a lot out of this conference and enjoyed the format. With a lot of conversations going on right now on how virtual conferences could be more like real ones, I think we should also think about how they could be better than the real thing. For one thing, your audience can be a lot bigger, more diverse and inclusive.

There is also a lot of innovation going on right now in the chat experience in general (emojis, virtual gifts, etc) that could make text chat more lively as well.

There needs to be a new middle ground between video and text chats (maybe digital avatars?). I like looking at people’s faces, but I also understand the multitasking thing when in front of a laptop. VR chat rooms, for example, get across a lot of the feeling of presence just by using eyes.

I enjoyed the timed sessions, though I struggled to attend any of them exactly in sync with their start times, as there was a lot happening at home (work emails, etc.).

I am really curious where the virtual conference ideas go from here. At the Philly GDG, which I help run, we have transitioned our events to Zoom and were planning to do the same for future “conferences” (like DevFest, etc.), but this has given me a lot to think about.

If you have other ideas about the opportunities here, drop in a comment below 🙂

Deploying Multiple Sites on Firebase Hosting

Today I deployed a second site to a Firebase project. I had deployed sites individually on different Firebase projects but hadn’t realized that a single project can support multiple sites. This is especially useful if the various sites share the same assets (think internationalized versions of a site, etc.).

The documentation on multi-site support is actually pretty good. In my case, my “launchpage” site was completely different from the other site on the project, but this does give me the opportunity to bring the two together later. It basically came down to modifying the firebase.json file to look like this:

{
  "hosting": {
    "target": "launchpage",
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}

This file tells the Firebase tools to ignore certain files and deploy others to hosting.

You can test your app locally by running firebase serve, and deploy to production by running firebase deploy from the project root.
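Since the firebase.json above names a hosting target, that target needs to be mapped to an actual site ID before deploying. A rough sketch of the commands (the site ID below is a placeholder for your own project’s site):

```shell
# Map the "launchpage" target from firebase.json to a real site ID
# (replace my-launchpage-site with the site ID from your Firebase console):
firebase target:apply hosting launchpage my-launchpage-site

# Deploy only that site, leaving the project's other sites untouched:
firebase deploy --only hosting:launchpage
```

The target mapping is stored in .firebaserc, so you only need to apply it once per checkout.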

The only hiccup I ran into was setting up DNS correctly. While Firebase tries to make it easy by giving you an IP address to point your domain to, Namecheap doesn’t accept the full domain name in its hosting panel; it requires you to use @ to refer to the domain you are configuring. Subdomains similarly cannot be FQDNs and need to be just the name of the subdomain you are configuring (www instead of www.mysite.com, for example).

Note that Firebase will occasionally ask you to re-verify your domain. See the conditions for this at this link or in the screenshot below.

The same rule applies for re-verification: use `@` as the host to add the custom TXT record needed to verify your domain.

Considering how easy this was, this might be the way I host all my sites in the future 🙂