Gotchas with JavaScript Promises and Fetch

This year has definitely been one of “return to JavaScript” for me (among other things) and it’s really interesting to see how far the language has come. Between Cloud Functions, complex client applications using React, native app development with React Native, and now even using AFrame/ThreeJS for WebVR development, I have been writing a LOT of JavaScript across the stack.

JavaScript’s increased responsibilities have unfortunately brought with them a corresponding increase in complexity, which trips up many a returning developer (Gina Trapani’s excellent post is a good read if you are one of them…er…us). This month I have spent quite a few hours dealing with JavaScript’s Fetch API and with Promises in general. There are a couple of gotchas I ran into that are worth sharing. Maybe they can save you a couple of hours down the road.

  1. Fetch and Promises start executing immediately. You cannot create a Promise object and store it to be executed later. If you need to defer execution, one option is to wrap the Promise in a function and only call it when needed (a minimal sketch follows this list).
  2. Fetch requests have no concept of a timeout. If you need a Fetch request to be abandoned after a certain number of seconds, the best way I have seen is to use Promise.race against a timer Promise that rejects after the timeout, failing the whole Promise chain (see the second sketch below).
  3. Making multiple calls with Fetch? Promise.all is a great option, except that all the requests / Promises start executing in parallel. If you need to execute them in sequence (like I did), you are out of luck without writing some utility code or leveraging a library. I ended up using this npm module (a hand-rolled version is in the third sketch below).
  4. Server error responses (404s, 500s) to Fetch calls are still treated as successes and resolve the Promise; only network failures reject. That means you have to check for HTTP errors in your success handler, which feels just wrong (see the last sketch below).
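For the first gotcha, here is a minimal sketch (the URL is just a placeholder): wrapping the fetch call in a function defers execution until you actually invoke it.

```javascript
// The request is already in flight the moment this line runs —
// Promises start executing at creation time.
const eagerRequest = fetch('https://example.com/api/data');

// Wrapping the call in a function defers it: nothing runs until
// makeRequest() is actually invoked.
const makeRequest = () => fetch('https://example.com/api/data');

// ...later, when the request is actually needed:
makeRequest()
  .then(response => response.json())
  .then(data => console.log(data));
```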
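The timeout trick looks something like this (the helper name and duration are my own): Promise.race settles with whichever Promise finishes first, so a rejecting timer effectively fails the chain. One caveat worth noting: the losing fetch is not truly cancelled; the network request keeps running in the background.

```javascript
// A Promise that rejects after the given number of milliseconds.
const timeout = ms =>
  new Promise((resolve, reject) =>
    setTimeout(() => reject(new Error(`Request timed out after ${ms}ms`)), ms));

// Whichever settles first wins: the response or the timeout rejection.
Promise.race([fetch('https://example.com/api/data'), timeout(5000)])
  .then(response => response.json())
  .catch(err => console.error(err.message));
```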
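And a hand-rolled sketch of sequential execution, in case you would rather not pull in a module. It leans on gotcha #1: the array holds functions that return Promises rather than Promises themselves, so each request only starts once the previous one has resolved.

```javascript
// Functions that return Promises — nothing is executing yet.
const tasks = [
  () => fetch('https://example.com/api/one'),
  () => fetch('https://example.com/api/two'),
  () => fetch('https://example.com/api/three'),
];

// Chain the tasks with reduce so each one starts only after the
// previous one resolves; results are collected in order.
const runSequentially = tasks =>
  tasks.reduce(
    (chain, task) =>
      chain.then(results => task().then(result => [...results, result])),
    Promise.resolve([]));

runSequentially(tasks).then(responses => console.log(responses.length));
```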
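Finally, the error-checking dance for the last gotcha: since fetch only rejects on network failure, you have to inspect response.ok (or response.status) yourself and throw to route HTTP errors into the catch branch.

```javascript
fetch('https://example.com/api/data')
  .then(response => {
    // A 404 or 500 still lands here — convert HTTP errors
    // into rejections so they flow to .catch() like you'd expect.
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
  })
  .then(data => console.log(data))
  .catch(err => console.error(err.message));
```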

These are definitely some … debatable calls by the folks who designed the API. If there are other gotchas you have run into, please share them here as well.


Different Programming Metaphors

It’s interesting that, for an industry pushing humanity into the future, software engineering practices have not changed significantly in the last 50 years. We are still using basic text editors with syntax highlighting, often on machines with hundreds of times the power of the devices they were originally designed for, an irony highlighted by Bret Victor in his talk linked below.

I have been thinking about this for a while and collecting links on related ideas for the last few years. The deck below gathers some of them. If you have others that could be added, please leave a comment.

Thoughts on Google IO 2017


I spent this entire week on the West Coast attending the North America GDG Managers’ Summit and the I/O events. I am still processing some of the conversations from the Managers’ Summit and how to use them to improve GDG Philadelphia, which I help run, so I’ll leave that for a future blog post; this post is restricted to the I/O event only.

The list of announcements, both big and small, is a mile long and has been well covered by other publications. My own gist of the announcements is here (feel free to send me a pull request if you wanna add anything there). Here are some thoughts on just I/O this year:

AI All the Things

Google’s internal push to pepper its products with features only possible using AI is clearly bearing fruit: from pure utility features like enhanced copy and paste on Android, to flagship features like Google Lens, which brings object recognition in photos and videos to Google Photos and Assistant. I am particularly excited by the TensorFlow Lite project, and programming for AI is something I am going to learn this year.

Immersive Computing (VR / AR / MR / xR)

People seem to love coming up with new terminology in this space. Google buckets the VR/AR technologies into “Immersive Computing”. They are doing some really interesting things here, and I am glad to see them continue to push the state of the art. I was particularly impressed by Project Seurat, which allows developers to mimic complex, even movie-quality, 3D models with much simpler geometry.

On the Tango / Augmented Reality side, Google’s Visual Positioning System (VPS) truly impresses as well. In fact, in one conversation a Googler mentioned that the Google Maps team was heavily involved in the VPS development.

There were also some great demos of AR capture and reconstruction using the upcoming Asus ZenFone AR. The big question is: when does a Google Pixel get a depth sensor and Tango support?

Actions on Google

Google’s new Actions platform, which lets you build skills for Google Assistant on Google Home, Android, and iPhone, was very interesting. The tooling basically consists of three components:

  • The Actions on Google console that lets you manage your …um…actions
  • The API.ai tier that your actions will probably use to handle natural language input
  • Chatbase, Google’s analytics platform for chatbots, which lets you observe your bots’ growth and engagement over time

I liked the system, and it seems pretty trivial to make a simple chatbot…I mean, Action. They also announced a competition for the platform, so get ready to see a lot of new ways you can order pizzas 😉

Android Dev

Android SDK + Firebase

Google continues to push Firebase as an essential part of Android development. Google’s cloud services have been catching up to AWS’ for a while, and Firebase seems to be a great alternative to AWS Mobile. AWS’ tools are not friendly to a mobile developer, and the Firebase tools do seem much more approachable. The addition of services like Performance Monitoring makes Firebase an even more essential part of the Android developer’s toolkit.

Google Play Developer Console Updates

I haven’t pushed anything to the Google Play Store since Picscribe in 2013. The publisher tools back then were functional and did a decent job, I thought, but the latest updates to the publisher experience are fantastic. More tools to run A/B tests, greater visibility into the top reasons for crashes, pre-release testing, and more will allow developers to really optimize their apps right from the store.

Kotlin is an official second language for Android development

I am mostly ambivalent about Kotlin (😱). I had no particular issues with using Java for Android development, except maybe an occasional gripe about not being able to pass functions around. I am happy about Kotlin’s less verbose syntax but dread a repeat of what happened when Swift was introduced to the iOS ecosystem, where the focus seemed to shift from cool apps to various academic discussions (if I hear about monads one more time…).

Also, Swift’s rapid evolution meant that code examples and Stack Overflow answers stopped working within a few months. Let’s hope this is less of an issue on the Android side.

And of course, a new developer moving to Android now needs to know not only Java but Kotlin as well, since codebases will be a mix of the two.

On the flip side, the “copy Java, paste as Kotlin” feature in Android Studio is pretty dope.

Cloud Functions: The rise of Lambdas

With so much functionality exposed as services by either Google or Amazon, developers can really power their apps with very little backend code. That, in turn, creates the need for some kind of glue layer that connects all these components together. Firebase’s Cloud Functions and Amazon’s Lambdas serve this need. The workflow for Amazon Lambdas is still slightly awkward, but Firebase’s feels a lot better.
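As a taste of how little code that glue layer needs, here is a minimal, hypothetical Firebase Cloud Function (the endpoint name and greeting are my own invention): there is no server to provision, just a handler deployed with the Firebase CLI.

```javascript
// index.js — deployed with `firebase deploy --only functions`
const functions = require('firebase-functions');

// A tiny HTTPS-triggered "glue" endpoint: Firebase hosts, scales,
// and routes it; we only write the handler itself.
exports.hello = functions.https.onRequest((req, res) => {
  const name = req.query.name || 'world';
  res.status(200).send(`Hello, ${name}!`);
});
```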

Final thoughts

There were a lot of cool technologies on show at I/O, and it was awesome. The other amazing part was just meeting old friends from across the world and even making some new ones.

I will also say this: this was one of the BEST organized events I have ever attended, and kudos to Google for pulling it off. The session reservation system worked well, there was ample shade, food, and drink, and they even got the weather to be nice for 3 days 😉

Till next year!


I won one of Philadelphia Business Journal’s 10 “Tech Disruptors for 2017” awards

They say don’t bury the lede, but I just gave it away in the title ;). The Philadelphia Business Journal comes up with a list of 10 Tech Disruptors every year who are “blazing new trails and inspiring others in the technology community”. I am one of the 10 for this year, in the extremely smart company of local CEOs, CTOs, and founders.


Thanks for the honor, @PHLBizJournal. It’s great to see your name in the paper (well, for the right reasons 😉).

The return of the QR code

Over the last few years I have found myself defending QR codes in different conversations. While huge in the rest of the world, QR codes were never embraced in the West. Aesthetics was a complaint I heard multiple times (“They are so ugly”), but QR codes solved a real problem: bridging the offline world with the online one.

For whatever reason, neither Apple’s nor Google’s devices ship with a default QR code reader. Apple’s default camera app has some image recognition built in that lets you scan iTunes gift cards, but neither Apple nor (more surprisingly) Google has shown any interest in QR codes. (Update: Apple added QR support to the default camera app in iOS 11.)

But QR codes have been sneaking back into our lives over the last few years. Some of these aren’t standard QR codes and maybe deserve their own label (scan codes?), but the idea remains the same: a graphic that encodes text that a scanner (camera) can read from a distance.

  • Snapchat popularized the idea with their Snapcodes that let users add other Snapchat users as friends.
  • Twitter, Kik, Facebook Messenger and Google Allo followed and now scanning a code to initiate a connection is starting to become normal.


Today, at F8, Facebook’s big developer event, they announced that Messenger will now support their own scan-code, which they call Parametric Codes, and which you’ll be able to use for all sorts of things, from friending to payments (offline payments via scan-codes are a big deal in China, which is where Messenger takes a lot of its feature development cues from).

As happy as I am to see the return of these codes, the proprietary nature of each of them is a little bit of a bummer, but hopefully they will make the idea of scanning a code to connect with the real world more mainstream.

Update:

YCombinator Blog has a very interesting article on the rise of WeChat; this section on QR codes is especially worth quoting:

WeChat’s elevation of the QR code as a link from the offline became the lynchpin for China’s online-to-offline boom in 2015. Previously, to engage with a service or brand, a user would have to search or enter a website address. WeChat’s Pony Ma says of QR codes, “it is a label of abundant online information attached to the offline world”. This logic explains why WeChat chose to promote QR codes in the first place. QR codes never took off in the U.S. for three key reasons: (1) the #1 phone and the #1 social app didn’t allow you to scan QR codes. (2) Because of this, people had to download dedicated scanner apps, and then the QR code would take them to a mobile website, which is arguably more cumbersome than simply typing in the URL or searching for the brand on social media. (3) Early use cases focused on low-value, marketing related content and at times was merely spam. So, even though QR codes would’ve been U.S. marketers’ dream, it was a few steps too far to be useful.

With the established adoption of QR codes, WeChat launched “Mini Programs” as an extension of WeChat Official Accounts, designed to enable users to access services in a more frictionless way, just like the web browser did.