I recently updated the React Native app I have been working on for a while from RN 0.47 to 0.55. I'll admit I was a bit cavalier about the update and hadn't really looked at the changelog, but hey, version control does give one a foolish sense of bravado.
Anyway, needless to say there were issues. As of RN 0.55.4, `setJSMainModuleName` has been renamed to `setJSMainModulePath`, and it took me a bit of sleuthing to figure that out (find the GitHub commit here).
However, a bigger issue came up when I tried to package the app after resolving the compile errors: the packager still chokes on symlinked npm modules. This was a total fail for me, since my app uses local npm modules to hold pieces of common code shared between the web and mobile clients.
Thankfully, someone came up with a bit of a hack: generate absolute paths for all the symlinked libraries and launch the packager's cli.js with a config file listing those paths.
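For reference, the workaround boiled down to something like this (a sketch only; the exact config keys vary between packager versions, and the package names below are placeholders for my local modules):

```js
// rn-cli.config.js -- pass it to the packager with something like:
//   node node_modules/react-native/local-cli/cli.js start --config "$(pwd)/rn-cli.config.js"
const path = require('path');
const fs = require('fs');

// Resolve each symlinked local module to its real, absolute path.
const symlinkedModules = ['shared-models', 'shared-utils'].map(name =>
  fs.realpathSync(path.resolve(__dirname, 'node_modules', name))
);

module.exports = {
  // Have the packager watch the real locations of the symlinks too; otherwise
  // it refuses to resolve files that live outside the project root.
  getProjectRoots() {
    return [path.resolve(__dirname), ...symlinkedModules];
  },
};
```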
It works for now, but hopefully this bug will get fixed soon.
If you know me, there is a good chance you know how 👍 I am about blockchain and decentralized apps. I have given a few talks on the subject, but until recently these were mostly focused either on Bitcoin or on the academics of blockchain technology. At a recent Comcast Labweek, I was finally able to get my hands dirty building a blockchain-based decentralized app (DApp) on Ethereum.
Labweek is a week-long hackathon at the T&P org in Comcast that lets people work on pretty much anything. I was pretty fortunate to end up working with a bunch of really smart engineers here. The problem we decided to look into was the challenge of funding open source projects. I am pretty passionate about open source technologies, but I have seen great ideas die on GitHub because supporting a project when you aren't getting paid for it is really hard. Our solution to this problem was a bounty system for GitHub issues that we called CodeCoin.
The way CodeCoin worked was as follows:
A project using CodeCoin would sign up on our site and download some Git hooks.
When anyone creates an issue on GitHub, we create an Ethereum wallet for the issue and post the wallet address back to GitHub so it's the first comment on the issue.
We use a Chrome extension that adds a "Fund this issue" button to the GitHub issue page, which starts the Ethereum payment flow.
Ether is held in the wallet until the issue is marked resolved and merged into master. At that point another Git hook fires, telling our server to release the Ether into the wallets of all the developers who worked on the issue.
Note that while we held the Ether on our side in wallets, the right way to do this would have been to use a Smart Contract. We started down that path, but since most of the code was written in about two days (while juggling other real projects), wallets seemed like the easier route.
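In wallet terms, the flow looked roughly like the sketch below. This is illustrative only (web3 1.x-style API, not our actual code): `saveWallet` and `loadWallet` are hypothetical persistence helpers, and gas costs are ignored for brevity.

```js
const Web3 = require('web3');
const web3 = new Web3('http://localhost:8545'); // TestRPC / local dev chain

// "Issue opened" hook: create a wallet for the issue and return its address,
// which gets posted back to GitHub as the first comment.
function createIssueWallet(issueId) {
  const wallet = web3.eth.accounts.create();
  // Storing the private key server-side is exactly why a smart contract escrow
  // would have been the better design.
  saveWallet(issueId, wallet.address, wallet.privateKey);
  return wallet.address;
}

// "Merged to master" hook: split the bounty across the contributors' wallets.
async function releaseBounty(issueId, contributorAddresses) {
  const { address, privateKey } = loadWallet(issueId);
  const balance = BigInt(await web3.eth.getBalance(address));
  const share = balance / BigInt(contributorAddresses.length); // gas ignored here

  for (const to of contributorAddresses) {
    const signed = await web3.eth.accounts.signTransaction(
      { to, value: share.toString(), gas: 21000 },
      privateKey
    );
    await web3.eth.sendSignedTransaction(signed.rawTransaction);
  }
}
```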
Releasing money into developer accounts was also a hack. Since developers don't sign up for GitHub with any digital wallet address, we needed the wallet addresses to be part of the final commit message. This could instead be done with a lookup on a service like Keybase.io, and with more time we would have tried integrating that into our prototype. In fact, it was the very next week that I heard about their own Git offering; I haven't read enough about that yet though.
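The commit-message convention amounted to little more than scanning for anything shaped like an Ethereum address. Something like this (illustrative only):

```js
// Pull contributor wallet addresses out of the final commit message:
// an Ethereum address is 0x followed by 40 hex characters.
function extractWallets(commitMessage) {
  const matches = commitMessage.match(/0x[a-fA-F0-9]{40}/g) || [];
  return [...new Set(matches)]; // de-duplicate
}
```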
For local development, we used the TestRPC library to run an Ethereum chain simulation on our machines.
Web3js was injected into the browser by the MetaMask extension. There were some challenges getting MetaMask to talk to TestRPC. Basically, you had to make sure that you initialized MetaMask with the same seed words you used for your accounts on TestRPC (which makes sense), but as far as I know there isn't a way to change that information in MetaMask afterwards. Early on, we were restarting TestRPC without configuring the initial accounts, so we'd have to reinstall MetaMask to configure it with the new accounts. Chalk that up to our own unfamiliarity with the whole setup.
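The fix was simply to pin the mnemonic when starting TestRPC, so the same accounts come back after every restart and MetaMask (restored from the same seed words) stays in sync. Roughly like this (the mnemonic is a throwaway placeholder, and option names may differ slightly by version):

```js
const TestRPC = require('ethereumjs-testrpc');

const server = TestRPC.server({
  mnemonic: 'candy maple cake sugar pudding cream honey rich smooth crumble sweet treat',
  total_accounts: 10,
});

server.listen(8545, err => {
  if (err) throw err;
  console.log('TestRPC listening on http://localhost:8545');
});
```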
We did try to use Solidity to run a smart contract on TestRPC, which worked for the demo apps, but we canned that effort at the last minute as we were running out of time.
All in all, it was a fun couple of days of intense coding and I feel I learnt a lot. Most of all I enjoyed working with a group of really smart peers, most of whom I didn’t know before the project at all. Hopefully we get to do more of that in the future 🙂
I had a great time last week attending Oculus Connect 4. Just like last year, the keynotes were really interesting and the sessions pretty informative. Here are some quick thoughts on the whole event:
Oculus Go and Santa Cruz
Oculus announced two new self-contained headsets: the Go, an inexpensive ($199) 3DoF headset coming early next year, and, much later, Project Santa Cruz, a 6DoF headset with inside-out tracking. What's interesting is that both of these devices will run on mobile CPU/GPUs, which means that 3 of the 4 VR headsets released by Oculus will have mobile-class processing power. If you are a VR developer, you had better be optimizing your code to run on low-horsepower devices, not beefy gaming machines.
Both Go and Santa Cruz run a fork of Android.
The move to inexpensive hardware makes sense, since Oculus has declared it their goal to bring 1 billion people into VR (no time frame was given 😉 )
Oculus Dash and new Home Experience
The older Oculus Home experience is also going away in favor of the new Dash dashboard, which you'll be able to bring up within any application. Additionally, you'll be able to pin certain screens from Dash-enabled applications (which, based on John Carmack's talk, seem to be just Android APKs). There could be an interesting rise of apps dedicated to this experience, kinda like Dashboard widgets for Mac when that was a thing.
The removal of the app launcher from Oculus Home means Home now becomes a personal space that you can modify with props and environments to your liking. It looks beautiful, though not very useful. Hopefully it lasts longer than PlayStation's Home did.
The Oculus Avatars have also undergone a change. They no longer have the weird mono-color, wax-doll look but actually appear more human, in full color. This was also done to allow for custom props and costumes that you'll be able to dress your avatar in down the road (go Capitalism 😉 ).
Another change is that the new Avatars have eyes with pupils! The previous ones with pupil-less eyes creeped me out. The eyes have also been coded to follow things happening in the scene to make them feel more real.
Oh, and finally, the Avatar SDK is going cross-platform, which means if you use the Avatars in your app, you'll be able to use them on other VR platforms as well, like Vive and Daydream.
Oculus has been talking quite a bit lately about how video is a huge use case for VR. A majority of VR usage seems to be in video applications, though no detail was given on that claim. For example, apps like BigScreen that let you stream your PC can't really be classified as video or game, since who knows what's being streamed. And since actual VR session numbers weren't shared, it's hard to tell whether the video session count is a lot or not.
Either way, one of the big things Carmack is working on is a better video experience. Apparently last year their main focus was better text rendering, and now the focus is moving to video. The new video framework no longer uses Google's ExoPlayer and improves playback by syncing audio to a locked video framerate rather than the other way around, as is done today.
One of the interesting things announced at Connect was Venues: a social experience for events like concerts, sports etc. It will be interesting to see how that goes.
There were numerous other interesting talks, from Lessons from One Year of Facebook Social to analyses of what is working in the app store. All the videos are on their YouTube channel.
While I was wowed by a lot of the technology presented, it definitely feels like VR has a Crossing the Chasm problem: there is a pretty passionate alpha-user base, but Oculus is trying really hard to actually pull in the larger, non-gaming-centric audience.
Oculus Go seems like a good idea to get the hardware and experience more widely distributed but what is really needed is that killer app that you really have to try in VR. The technology pieces are right there for the entrepreneur with the right idea.
I have been involved in a few VR projects this last year. While the earlier prototypes used Unity as the development environment, some of the new ones use WebVR, an emerging web standard for VR development.
WebVR, as opposed to native-app VR, does have a few advantages:
Automatically falls back to an in-browser 3D experience on non-VR devices
Not having to compile the app to quickly check the changes in a browser is pretty awesome
The biggest thing, though, is that the kind of experience we have always imagined, moving seamlessly from one VR experience to another, is not possible with a series of native apps. I have heard the future of VR described as a "web of connected VR experiences" and that is the vision that is truly exciting.
That said, current tooling is much better for native VR apps, with most tools focusing on Unity, which is really the de facto tool for game developers. However, I really hope the tooling on the WebVR side starts getting better.
Developing for WebVR
The way we currently build for WebVR is by using AFrame, a VR framework primarily maintained by Mozilla and the WebVR community. AFrame is built on top of ThreeJS, the most popular 3D library for WebGL. For desktop VR development, the only desktop browser that you don't have to finagle with too much is Firefox. Most of the development is done on Oculus Rifts connected to some beefy PCs.
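To give a flavor of how thin the layer is, here is a small illustrative AFrame component (not from our project) that reaches straight down to the underlying ThreeJS object:

```js
// A component is plain JavaScript attached to an entity; this.el.object3D is
// the raw ThreeJS object behind it.
AFRAME.registerComponent('spin', {
  schema: { speed: { type: 'number', default: 45 } }, // degrees per second
  tick(time, deltaMs) {
    this.el.object3D.rotation.y += THREE.Math.degToRad(this.data.speed) * (deltaMs / 1000);
  },
});
// In the scene markup you would then write something like: <a-box spin="speed: 90"></a-box>
```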
The developer workflow for mobile VR development, though, is a different story. While our current prototype had no requirement to be mobile, I recently tried it on a Google Daydream and found a few bugs. Fixing those seemed trivial, but actually doing it was a lot more painful than I would have thought. Here are some problems I ran into:
Cannot start a WebVR experience from inside VR
Currently there is no available web browser that can launch from the Daydream VR home menu. While Chrome on Android supports WebVR and will trigger an "Insert into Headset" Daydream action when a user taps the VR button in a WebVR experience, there is no way to get to that experience from within Daydream itself. You cannot pin a WebVR experience to your Daydream Home, and WebVR experiences don't appear in your recent VR apps section.
This is actually really frustrating. The workflow to debug a Daydream bug is:
On phone, go to Chrome, launch app
Tap “VR” mode
Insert phone into headset
Verify Chrome Remote Debugger is still connected
See if the bug still appears
Pop phone out of headset
The constant popping of the phone in and out of the headset gets old really fast. One option may be to add a "reload" button inside your WebVR experience, but I am not sure that will work, since you aren't supposed to be able to enter VR mode without an explicit user action (like a button tap).
One thought I did have was to create an Android app with a manifest declaring it as a Daydream app, and then have its main view just be a WebView. Unfortunately that didn't work, though I did get the app into the Daydream Home view. A different idea was to have this app launch Chrome with my WebVR app's URL. Again, there were challenges: for one, Chrome launched in its conventional view and did not automatically trigger the VR split view for the left and right lenses. To add to this hack, I added a trigger to call AFrame's enterVR() method when the page loaded, which kinda worked, but every launch caused this weird blink as the app went from 2D to VR mode, and it was actually painful to use.
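The enterVR-on-load hack was essentially this (a sketch; AFrame normally expects enterVR() to happen in response to a user gesture, which is probably part of why the transition looked so rough):

```js
// Force VR presentation as soon as the scene has finished loading.
const scene = document.querySelector('a-scene');
if (scene.hasLoaded) {
  scene.enterVR();
} else {
  scene.addEventListener('loaded', () => scene.enterVR());
}
```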
One HUGE tip for this workflow: make sure you have enabled the Daydream debug menu and selected "Skip VR Entry Screens", without which the workflow above adds about two more steps per debug cycle.
Using Chrome Remote Debug
For a lot of my testing, all I needed was console.log output from developer tools. You can see your logs using Chrome Developer Tools' remote debugging feature. I'm not sure if I was doing something wrong, but I kept losing the connection to the active tab every time I reloaded the page to check. Really annoying. At the end of the day, I did discover the A-Frame Log Component, which I haven't used yet but intend to very soon.
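In the meantime, a quick and dirty alternative when the remote connection keeps dropping is to mirror console.log into a text entity pinned in front of the camera. A rough sketch (it assumes your scene uses an explicit <a-camera> element and runs after the scene has loaded; the Log Component mentioned above does this properly):

```js
// Create a HUD-style text entity attached to the camera.
const logEl = document.createElement('a-text');
logEl.setAttribute('position', '0 -0.5 -1.5');
logEl.setAttribute('width', '2');
logEl.setAttribute('value', '');
document.querySelector('a-camera').appendChild(logEl);

// Mirror console.log onto it, keeping the last 10 lines.
const lines = [];
const originalLog = console.log.bind(console);
console.log = (...args) => {
  originalLog(...args);
  lines.push(args.join(' '));
  logEl.setAttribute('value', lines.slice(-10).join('\n'));
};
```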
Lack of a Daydream Controller Emulator
If you are developing for VR, your productivity is directly proportional to how much of the development you can do without putting on the headset. With WebVR, since your app automatically works in a browser, you can do a lot of development without the headset. Unfortunately this breaks down when you are trying to code around user interactions. You can use the mouse as a raycast source, which gets you partly there, but you really want an emulator for the hand controllers to try different things out.
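For what it's worth, the mouse-as-raycaster part is just a couple of lines (a sketch; the cursor component's rayOrigin: mouse option is available in recent AFrame versions, and the .clickable class is simply whatever you use to mark interactive entities):

```js
// Use the mouse as the raycast source while developing in a desktop browser.
const scene = document.querySelector('a-scene');
scene.setAttribute('cursor', 'rayOrigin: mouse');
scene.setAttribute('raycaster', 'objects: .clickable');

// Entities marked .clickable now receive 'click' events from mouse picks.
document.querySelectorAll('.clickable').forEach(el =>
  el.addEventListener('click', () => console.log('clicked', el.id))
);
```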
Daydream officially has an emulator for its controller, but it only seems to target Unity- and Unreal-based projects. There are other projects like DayFrame for AFrame, but since my problem was specific to the Daydream controller, using a proxy emulator didn't make much sense.
What I really wanted to do was pair the official Google Daydream controller to my PC, but I haven't been able to find any way to do that yet.
I have been generally enjoying working with AFrame, and it has a surprisingly (to me) strong community of active developers. However, the developer workflows, especially for on-device testing, still need work. Ideally what I am looking for is a one-click flow that deploys my WebVR app to a server and then launches Daydream pointed at the WebVR page running in fullscreen VR. Or, even better, a WebVR/AFrame equivalent of Create React App or similar boilerplate projects that automatically sets up all the best tools for developing and testing WebVR projects both in the browser and on-device.
It's interesting that for an industry pushing humanity into the future, software engineering practices have not changed significantly in the last 50 years. We are still using basic text editors with syntax highlighting, often on machines with hundreds of times the power of the devices they were originally designed for, an irony highlighted by Bret Victor in his talk linked below.
I have been thinking about this for a while and collecting links on different ideas around this for the last few years. The deck below collects some of these ideas. If you have others that could be added, please leave a comment.
I spent this entire week on the west coast attending the North America GDG Managers' Summit and Google I/O. I am still processing some of the conversations from the Managers' Summit and how to use them to improve GDG Philadelphia, which I help run, so I'll leave that for a future blog post; this post is restricted to the I/O event only.
The list of announcements, both big and small, is a mile long and has been well covered by other publications. My own gist of the announcements is here (feel free to send me a pull request if you wanna add anything there). Here are some thoughts on just I/O this year:
AI All the Things
Google's internal push to pepper its products with features only possible using AI is clearly bearing fruit. These range from pure utility features, like enhanced copy and paste on Android, to flagship features like Google Lens, which brings object recognition to photos and videos in Google Photos and Assistant. I am particularly excited by the TensorFlow Lite project, and programming for AI is something I am going to learn this year.
Immersive Computing (VR / AR / MR / xR)
People seem to love coming up with new terminology in this space. Google buckets the VR/AR technologies under "Immersive Computing". They are doing some really interesting things here and I am glad to see them continue to push the state of the art. I was particularly impressed by Project Seurat, which uses algorithms to let developers use simpler geometry to mimic complex, even movie-quality, 3D models.
On the Tango / Augmented Reality side, Google's Visual Positioning System truly impresses as well. In fact, in one conversation a Googler mentioned that the Google Maps team was heavily involved in the VPS development.
Visualization of how a VPS (Visual Positioning Service) map gets generated. This map of a Lowe's store has 10M+ feature points in it. pic.twitter.com/QMEkBIOtXY
There were also some great demos of AR capture and reconstruction using the upcoming Asus ZenFone AR. The big question is: when does a Google Pixel get a depth sensor and Tango support?
Actions on Google
Google's new Actions platform, which lets you build skills for Google Assistant on the Home, Android, and iPhone, was very interesting. The tooling basically consists of three components:
The Actions on Google console that lets you manage your… um… actions
The API.ai tier that your actions will probably need in order to handle natural-language input
Chatbase, Google’s analytics platform for Chatbots that lets you observe your bots’ growth and engagement over time
I liked the system, and it seems pretty trivial to make a simple chatbot… I mean, Action. They also announced a competition for the platform, so get ready to see a lot of new ways you can order pizzas 😉
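To show how little is involved, here is a toy fulfillment webhook in the API.ai v1-era JSON shape (illustrative only; the action name, parameters, and route are made up, and API.ai handles the natural-language part before anything reaches this code):

```js
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const action = req.body.result.action;     // e.g. 'order.pizza'
  const params = req.body.result.parameters; // e.g. { size: 'large', topping: 'mushroom' }

  const reply =
    action === 'order.pizza'
      ? `Okay, one ${params.size} ${params.topping} pizza coming up.`
      : "Sorry, I didn't get that.";

  // API.ai v1 webhook response fields.
  res.json({ speech: reply, displayText: reply });
});

app.listen(3000);
```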
Android SDK + Firebase
Google continues to push Firebase as an essential part of Android development. Google's cloud services have been catching up to AWS for a while, and Firebase seems to be a great alternative to AWS Mobile. AWS' tools are not friendly to a mobile developer, and the Firebase tools do seem much more approachable. The addition of services like Performance Monitoring makes Firebase an even more essential part of the Android developer's toolkit.
Google Play Developer Console Updates
I haven't pushed anything to the Google Play Store since Picscribe in 2013. The publisher tools back then were functional and did a decent job, I thought, but the latest updates to the publisher experience are fantastic. More tools to run A/B tests, greater visibility into the top reasons for crashes, pre-release testing, etc. will allow developers to really optimize their apps right from the store.
Kotlin is an official second language for Android development
I am mostly ambivalent about Kotlin (😱). I had no particular issues with using Java for Android development, except maybe an occasional gripe about not being able to pass functions around. I am happy about Kotlin's less verbose syntax, but I dread a repeat of what happened when Swift was introduced to the iOS ecosystem, where the focus seemed to shift from cool apps to various academic discussions (if I hear about monads one more time…).
Also, the rapid evolution of that language meant that code examples and Stack Overflow answers stopped working within a few months. Let's hope this is less of an issue on the Android side.
And of course, a new developer moving to Android now needs to know not only Java but Kotlin as well, since codebases will be a mix of the two.
On the flip side, the copy-Java-and-paste-as-Kotlin feature in Android Studio is pretty dope.
Cloud Functions: The rise of Lambdas
With so much functionality exposed as services by either Google or Amazon, developers can really power their apps with very little backend code. That said, this creates the need for some kind of glue layer that connects all these components together. Firebase's Cloud Functions and Amazon's Lambdas serve this need. The workflow for Amazon Lambda is still slightly awkward, but Firebase's feels a lot better.
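As a flavor of what that glue looks like, here is a sketch using the 2017-era firebase-functions API: a Realtime Database trigger that reacts to new data and fans work out to other services. The /orders path and its fields are made up for illustration.

```js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.onNewOrder = functions.database.ref('/orders/{orderId}').onWrite(event => {
  // Only act when an order is first created.
  if (!event.data.exists() || event.data.previous.exists()) {
    return null;
  }
  const order = event.data.val();
  console.log('New order', event.params.orderId, order);
  // ...this is where you'd call a payment API, send a notification, etc...
  return event.data.ref.child('status').set('received');
});
```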
There was a lot of cool technology on show at I/O, and it was awesome. The other amazing part was just meeting old friends from across the world and even making some new ones.
I will also say this: this was one of the BEST organized events I have ever attended, and kudos to Google for pulling it off. The session reservation system worked well, there was ample shade, food, and drink, and they even got the weather to be nice for three days 😉