Thoughts on Web Conferences

Yesterday I attended the L3 AI online conference on digital assistants, organized by Rasa. I am still working on my notes from the conference and will share them here at some point, but I wanted to say how pleasantly surprised I was by the format. While the current pandemic has forced a lot of conferences to go online, most have just become Zoom calls, which honestly are exhausting after more than an hour. I attended this conference for the whole day yesterday, and it was the best online conference format I have seen so far.

The conference was powered by Accelevents, so good job folks, though I am sure they have competition in that space. I have also heard good things about Run the World (actually, I haven't; the only thing I have heard about them is their investment from a16z 😁. But the features listed on their site look interesting).

So here are some thoughts on my experience with L3.

Pre-Conference

Both Accelevents and Run the World allow users to create a profile ahead of time. This lets users reach out to others who may share the same interests during the event or when they are algorithmically paired (see below). RTW lets you create video profiles as well, which is cool

Socializing

Connecting to others is probably the most important part of a conference (most session videos end up online anyway). The Zoom experience is to just show as many videos of people as possible. That doesn't really work, since only one person can talk at a time, and a number of people are either multi-tasking or otherwise hesitant to share their video.

The L3 conference page had a link to socialize, which would randomly pair you with another attendee. I didn't use it, mostly because there wasn't much time between sessions during the day. Instead of one-on-ones, I would have liked to be joined into small groups, which would have felt a little less intense.

Prerecorded Scheduled Sessions

Most of the talks were just prerecorded sessions with the speaker and other attendees discussing the talks in a chat window next to the video player. The sessions unlocked at different times, so it did feel a bit like a conference track.

The advantages of the prerecordings were that:

  1. You could pause and rewind the sessions right there if you missed something.
  2. The video and audio quality of the sessions was good (none of the "can you hear me now" moments).
  3. Some presenters had even done some post-production work on their videos, which was nice.

The event page included the video player and a side panel with tabs for chat, polls, attendees, and questions. As with a lot of tabbed interfaces, the out-of-sight / out-of-mind thing happened and I never looked at the non-default (chat) tabs.

Unlike video, chat allows many people to talk to each other at the same time, which I think is better. I was able to see some interesting discussions among the attendees on various topics.

Expo

An interesting aspect of the conference was a virtual expo tab where every company that was sharing their products could have people available for a Zoom video chat (yeah, they were using Zoom which I didn’t know could be embedded in a webpage). That was neat.

Final thoughts

I really got a lot out of this conference and enjoyed the format. With a lot of conversations going on right now on how virtual conferences could be more like real ones, I think we should also think about how they could be better than the real thing. For one thing, your audience can be a lot bigger, more diverse and inclusive.

There is also a lot of innovation going on right now in the chat experience in general (emojis, virtual gifts, etc) that could make text chat more lively as well.

There needs to be a new middle ground between video and text chats (maybe digital avatars?). I like looking at people's faces, but I also understand the multi-tasking thing when in front of a laptop. VR chat rooms, for example, convey a lot of the feeling of presence using just eyes.

I enjoyed the timed sessions, though I struggled to attend any of them totally in sync with their start times as there was a lot of stuff happening at home (work emails, etc).

I am really curious where the virtual conference ideas go from here. At the Philly GDG, which I help run, we have transitioned our meetups to Zoom and were planning to do the same for future "conferences" (like DevFest etc), but this has given me a lot to think about.

If you have other ideas about the opportunities here, drop in a comment below 🙂

Suddenly, Flutter

I had no plans to learn Flutter in 2019

When I first heard of Flutter last year, I couldn't help but draw parallels to Java Swing, the UI technology I started with in grad school (and thankfully dropped a year or so later). If you don't know much about Java's UI technologies, suffice it to say that for all of Java's strengths, no version of its UI frameworks was ever one of them.

It started okay-ish enough with Java's AWT toolkit, which let Java call native code to create system windows, buttons, etc., but devs soon realized that building cross-platform applications (which was always Java's pitch) was really hard when you could only target the least-common-denominator widgets available across all platforms. "No more," said the Java community, and proceeded to build Swing, a cross-platform UI framework that emulated the system controls by drawing them itself on a canvas.

"Write once, debug everywhere"

Sound familiar? That is what Flutter promises with its core graphics engine, which emulates the native Android and iOS widgets.

The problem is that Swing turned out to be crap. The widgets never felt native and performed poorly. You could always tell if you were using a Swing app. And it was always interesting when some app wasn't coded right and you'd end up with it emulating the Windows look-and-feel on a Mac (who tested on a Mac back in those days?).

So Flutter was on my “no-thanks” list. Besides my lack of faith in faking native system widgets, the language they chose was Dart! Who knew Dart? More importantly, where were the interesting libraries in Dart? Having done a few React Native apps before, I liked my options with JavaScript and the dream of making cross-platform (mobile AND web) apps.

But then a couple of things happened. First, I saw some pretty compelling Flutter-based apps. The interesting thing was, the best apps try to create their own design language anyway, so deviating from the system look-and-feel felt okay. Second, I tried Flutter for a labweek project and was won over by the one-click deploy to multiple platforms and the hot reload (it might also have been just fortuitous timing, as I was losing my mind with React Native's minimal support for custom views and animations, something Flutter promises a lot more control over).

So for the last 4 months or so I have been working with Flutter and I have to say I really enjoy it. Some parts are definitely rough (I still don't love the indentation hell when describing layouts; see the sketch below) but Dart is enjoyable and clearly inspired by the best parts of JavaScript and Swift (among others). Recent additions to Dart have been very interesting because they address issues in real-world UI development. This feels different from the more generic "let's create a general-purpose language and then allow it for UI dev" approach of other languages.
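
Here's a minimal sketch of what I mean by the nesting (an illustrative widget of my own, not from any real app):

import 'package:flutter/material.dart';

// A deliberately tiny screen: one centered, padded column with an icon
// and a line of text. Each layout concern (centering, padding, stacking)
// is its own widget wrapping the next, which is where all the
// indentation comes from.
class Greeting extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Center(
      child: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          mainAxisSize: MainAxisSize.min,
          children: const [
            Icon(Icons.star),
            Text('Hello, Flutter'),
          ],
        ),
      ),
    );
  }
}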

But the core reason I am excited about Flutter is the culture. The reason Swing was a dud (IMHO) was that it was built by people who didn’t care to push UI experiences. The native mobile toolkits are better but still make it hard to build complex user interfaces (SwiftUI and Jetpack Compose are trying to change that). Example “Hello World” apps you see using native toolkits are pretty generic form-based apps.

But look at the kinds of apps Flutter shows off:

A timeline app demo built with Flutter

🤩

While this may not be everyone’s cup of tea, this “think-outside-boxy-layouts” approach has my vote.

4 months in, I am still new-ish to Flutter but I guess I am on board. Stay tuned for more random Flutter stuff on this blog 🙂

Learning Curves

For the last few years I have been thinking quite a bit about how we enable more people to learn programming. As an industry, we need more programmers everywhere, and there seems to be a huge number of people who want to come in. Unfortunately, we can't seem to connect the two sides of this equation effectively.

Specifically, I have been thinking about learning curves. Until recently I believed learning curves followed a close-to-linear relationship with time: you learn a little bit at the beginning, work on simple ideas, and learn more and more as time goes by.

This seems to be codified in most programming books too, which introduce simple ideas at the beginning and then move towards more complex ones.

However, lately I feel a more honest representation of this learning curve we expect a newcomer to master would look something like this.

The initial hump in that graph represents a mountain of complexity that junior programmers are immediately handed before they can do anything with code. A lot of the time this hump represents meta-work: things that are not core to the technology itself, like build systems, frameworks for testability, coding standards, etc.

Take JavaScript for example. A "Hello world" JavaScript experience requires you either to start coding the way the industry strongly dislikes (the old-fashioned way, with script tags and vanilla JS) or to learn the complexity of modules, package managers, build systems, etc.

The same goes for mobile app development. If you are looking to make your first Android app, a brand new project created with the Android Studio wizard drops you into a mess of Gradle, Java, Kotlin, and XML files.

Tools like Xcode and Android Studio are also extremely complicated for any beginner, with a ton of panels and tools to tap on without knowing what they do. Ironically, most of the teams building these tools have user experience professionals on them, and yet the ideas of progressive disclosure and first-run experience, which as an industry we keep touting for our end-user apps, are never considered.

Technology Complexity Cycle

Reflecting on my own learning-programming experience and talking about this with a few other people, I realize that another thing that got me into programming was working on a technology (Flash) that wasn't yet mature.

When I started playing with Flash, it was back in the Flash 4 days, with a very simple programming model where most of the code was written in small scripts attached to the timeline that just controlled the position of the animation playhead. My learning-to-code experience happened almost in sync with the addition of complexity to Flash. Towards the end of my time with Flash, ActionScript 3 and the push to become a "real" programming platform had made it complex enough that it started to lose people.

I feel this happens a lot. Early versions of a programming platform are simple and functional, and then, if it gathers the attention of the "serious" programmers, way too much complexity gets added. This complexity makes the technology a daunting beast for new entrants.

The point is …

I had a couple of thoughts for new programmers that became the primary motivation for this post:

  1. Survive the initial hump: Getting started with learning programming is a lot harder in the beginning, so stick with it. It does get easier once you cross the initial hump of tools and meta-work that goes into starting a project and is very rarely revisited once the project is in active development.
  2. Play with emerging technologies: Emerging technologies often don't have much of an initial hump, since the tooling and other meta-work haven't been invented yet. Technologies like WebVR, blockchains, and Flutter are great candidates to play with now, growing your skills as the technology matures.

And for those of us who have been in this industry for a while and may have the power to influence tooling and/or methodologies of how code is written, let's endeavor to make these more welcoming to folks with different levels of experience with tech.

GDG Philly’s 100th meetup: a retrospective

Drumroll!!! The next GDG Philly meetup will be the 🎉100th official meetup!!!

And we have a good one lined up with some of the biggest tech leaders in Philly on a panel on managing your career as a technologist. If you are a developer or are looking to become one, you should definitely sign up.

For me it’s certainly a time for some celebration and reflection. Corey and I started Philly GDG, or rather its previous incarnation, AndroidPhilly, in 2011 when both of us had just about started working on Android and realized there wasn’t a local community where we could learn from each other. And considering how minimal technical documentation and user experience guidelines were back then, a local community was sorely needed. The group transitioned to an official GDG at some point which meant we got a lot more support from Google in terms of speakers and schwag.

Thinking back, there are a lot of things that worked well. The consistency of the day (last Wednesday of every month) and location (Comcast Center) definitely was a good idea and built up a monthly habit for the regular members. Comcast has been great about sponsoring this event every month since its inception, and my managers, former and current, have been very supportive of letting me run this. Other companies in Philly have been fantastic supporters as well, including Promptworks, Chariot, Candidate, and others who have hosted or supported us with food and beverages over time.

We are also a better-balanced community as far as gender goes, with more women participating than in a lot of other communities. A lot of credit there goes to Corey for leading the outreach in the early days and always making sure we had women among the leads. It's something the current leads, Yash, John and Ruthie, continue to champion.

There have also always been a lot of challenges, some similar to those faced by other groups, others unique to our own. Sourcing speakers every month is hard, especially when your community is much smaller than those in cities like SF and NY. Creating a channel for the community to keep the conversation going has also been challenging, with Slack becoming the de facto communities platform even though it doesn't really work if you aren't paying for it (I am starting to look at other platforms like Discord, but a lot of people may not be willing to install another app). Trying to balance the level of talks has also been a concern: we want intro-level talks to bring new people in but also more advanced sessions for folks who have been coming for a while. If you have ideas on any of these, I am all ears.

I have made a lot of friends thanks to our group, from past (Corey, Chuck, Dallas) and present (Yash, John, Ruthie and Kenny) fellow organizers who helped run this group, to regular members who have been attending our monthly meetups for years.

Hanging out with past and present Philly GDG leads at Google IO 2018

I am looking forward to seeing how the group evolves from here. In the meanwhile, if you are in the neighborhood, join us for our 🎉100th event. It promises to be a great one.

Adventures in working with JavaScript, Dart and Emojis

I spent the whole day today working with strings being sent between a server-side JavaScript app and a client-side Dart app. As an industry we have been doing this forever, so you'd think it'd be easy, but then along came emojis to muck up my day 🤬

Instead of writing my own primer on Strings here (and doing a bad job), let me just link to Joel Spolsky’s excellent post on the subject

This really old post still does a great job of bringing us up to speed on the Unicode world we live in today. And then came emojis.

There are numerous posts on the pain of dealing with emojis, which do screwy things like combining neighboring characters to form a single emoji. This means that the length of a string, if it is just a count of the Unicode code points used, can be different from what you would count on the screen.

This gives you wacky results like "💩".length == 2 and generally makes working with strings a pain, even to the extent of crashing your iPhone. On the flip side, some things, like being able to delete family members from the 4-member family emoji with every backspace, are kinda amusing, since it's actually 7 characters: 4 'normal' person characters and 3 invisible 'joining' characters in between.
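
Dart strings, like JavaScript's, are sequences of UTF-16 code units, so the same weirdness is easy to reproduce. A quick sketch:

void main() {
  var poo = '💩';
  print(poo.length);       // 2: UTF-16 code units (a surrogate pair)
  print(poo.runes.length); // 1: a single Unicode code point

  // The family emoji: 4 person code points glued together by
  // 3 invisible zero-width joiners (U+200D).
  var family = '\u{1F468}\u{200D}\u{1F469}\u{200D}\u{1F467}\u{200D}\u{1F466}';
  print(family.runes.length); // 7 code points
  print(family.length);       // 11 UTF-16 code units
}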

Which brings me to today. I am playing around with moving a client/server app from JavaScript everywhere to a JavaScript server and a Dart client app. In the previous iteration, strings with special characters needed to be escaped before being sent across: no problem, JavaScript's escape/unescape worked pretty well.

Moving to Dart though was a challenge, because there is no escape/unescape method. Turns out escape/unescape is best avoided anyway, and encodeURI/decodeURI is a better option. Dart's Uri class has a static decodeFull method (with a matching encodeFull) that does the job pretty well.

Except that the strings in question also included emojis, and Dart's Uri class doesn't work with anything more than UTF-8 characters: it crashes when it encounters strings with emojis that are just 'escaped'. This, as it turns out, is as per spec, and all those fancy emoji domains that I thought used Unicode in the URI actually use a different idea built around Internationalized Resource Identifiers and Punycode. Thankfully, passing in a URI-encoded string with emojis seems to work fine, and the emojis come out 👍 on the other side of the decode process.
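
For reference, here's a minimal sketch of the round trip that ended up working (the string is made up):

void main() {
  var original = 'build status: 👍';
  // Percent-encodes the UTF-8 bytes of the emoji along with the space.
  var encoded = Uri.encodeFull(original);
  print(encoded); // build%20status:%20%F0%9F%91%8D
  // The emoji comes out intact on the other side.
  print(Uri.decodeFull(encoded) == original); // true
}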

While this seemed to work at that point, passing the decoded string to my YAML loader crashed the app again (is YAML supposed to be restricted to ASCII/UTF-8?). But that is a problem for a different day.

For now, I have decided to just convert emojis to shortcodes for transit and remap them to emojis on the other side. It's not pretty but it works.
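
Roughly, the workaround looks like this; the mapping table and function names here are hypothetical stand-ins, not the real app's code:

// A hypothetical shortcode table; a real one would cover far more emojis.
const shortcodes = {'👍': ':thumbsup:', '💩': ':poop:', '😀': ':grinning:'};

String toShortcodes(String s) {
  shortcodes.forEach((emoji, code) {
    s = s.replaceAll(emoji, code);
  });
  return s;
}

String fromShortcodes(String s) {
  shortcodes.forEach((emoji, code) {
    s = s.replaceAll(code, emoji);
  });
  return s;
}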

Oh, and in the meanwhile, if you want to know how to loop through a String with emojis in Dart, you can do that by iterating over the runes of the String:

String s = "😀 hello";
s.runes.forEach((int rune) {
  // Each rune is one Unicode code point; fromCharCode turns it back
  // into a single-character String (2 UTF-16 code units for the emoji).
  String x = String.fromCharCode(rune);
  print(x);
});

Using Symlinked Node libraries with React Native 0.55

I recently updated the React Native app I have been working on for a while from RN 0.47 to 0.55. I'll admit I was a bit cavalier about the update and hadn't really looked at the changelog, but hey, version control does give one a foolish sense of bravado.

Anyway, needless to say there were issues. As of RN 0.55.4, `setJSMainModuleName` has been renamed to `setJSMainModulePath`, and it took me a bit of sleuthing to figure that out (find the GitHub commit here).

However, a bigger issue came up when I tried to package the app after resolving the compile errors.

Turns out the new Metro packager cannot follow symlinks, like those created by npm link.

This was a total fail for me, since my app uses local npm modules to hold pieces of common code for the web and mobile clients.

Thankfully, someone did come up with a bit of a hack that generates absolute paths for all symlinked libraries and launches the packager's cli.js with a config file containing the list of absolute paths.

It works for now, but hopefully this bug will get fixed soon.

Building CodeCoin: A Blockchain DApp prototype

If you know me, there is a good chance you know how 👍 I am about Blockchain and decentralized apps. I have given a few talks on the subject, but till recently these were mostly focused either on Bitcoin or on the academics of Blockchain technology. At a recent Comcast Labweek, I was finally able to get my hands dirty building a Blockchain-based decentralized app (DApp) on Ethereum.

Labweek is a week-long hackathon at the T&P org at Comcast that lets people work on pretty much anything. I was pretty fortunate to end up working with a bunch of really smart engineers. The problem we decided to look into was the challenge of funding open source projects. I am pretty passionate about open source technologies, but I have seen great ideas die on Github because supporting a project when you aren't getting paid for it is really hard. Our solution to this problem was a bounty system for Github issues that we called CodeCoin.

The way CodeCoin worked was as follows:

  • A project using CodeCoin would sign up on our site and download some Git hooks.
  • When anyone creates an issue on Github, we create an Ethereum wallet for the issue and post the wallet address back to Github so it's the first comment on the issue.
  • We use a Chrome extension that adds a “Fund this issue” button on the Github page that starts the Ethereum payment flow.
  • To actually handle the payment, we require MetaMask, which we can trigger using its JavaScript API.
  • Ether is held in the wallet till the issue is marked resolved and merged into master. At this time another Git hook fires that tells our server to release the Ether into the wallets of all the developers who worked on the issue.

Issue page design. Most of the UI changes came from a custom Chrome extension

Application Flow

Note that while we held the Ether on our side in wallets, the right way to do this would have been to use a Smart Contract. We started down that route but since most of the code was done in like 2 days (while juggling other real projects), wallets seemed like the easier route.

Releasing money into developer accounts was also a hack. Since developers don't sign up to Github with any digital wallet address, we needed the wallet addresses as part of the final commit message. This could perhaps be done with a lookup on a service like Keybase.io, and with more time we would have tried integrating that into our prototype. In fact, it was just the next week that I heard about their own Git offering; I haven't read enough about it yet though.

Development notes:

  • For local development, we used the TestRPC library to run an Ethereum chain simulation on our machines.
  • We used web3.js, the Ethereum JavaScript API, for most of the actual transactions.
  • Web3.js was injected into the browser by the MetaMask extension. There were some challenges getting MetaMask to talk to TestRPC. Basically, you had to make sure you initialized MetaMask with the same seed words you used for your account on TestRPC (which makes sense), but there isn't a way, afaik, to change that information in MetaMask. Early on, we were restarting TestRPC without configuring the initial accounts, so we'd have to reinstall MetaMask to configure it with the new account. Chalk that up to our own unfamiliarity with the whole setup.

MetaMask transaction

  • We did try to use Solidity to run a smart contract on TestRPC, which worked for the demo apps, but we canned that effort at the last moment as we were running out of time.

All in all, it was a fun couple of days of intense coding and I feel I learnt a lot. Most of all, I enjoyed working with a group of really smart peers, most of whom I didn't know at all before the project. Hopefully we get to do more of that in the future 🙂

Notes from Oculus Connect 4

I had a great time last week attending Oculus Connect 4. Just like last year, the keynotes were really interesting and the sessions pretty informative. Here are some quick thoughts on the whole event:

Oculus Go and Santa Cruz

Oculus announced two new self-contained headsets: the Go, an inexpensive ($199) 3DoF headset coming early next year, and, much later, Project Santa Cruz, a 6DoF headset with inside-out tracking. What's interesting is that both these devices will run mobile CPU/GPUs, which means that 3 of the 4 VR headsets released by Oculus will have mobile processing power. If you are a VR developer, you'd better be optimizing your code to run on low-horsepower devices, not beefy gaming machines.

Oculus Go

Both Go and Santa Cruz are running a fork of Android

The move to inexpensive hardware makes sense, since Oculus has declared it their goal to bring 1 billion people into VR (no time frame was given 😉 )

Oculus Dash and new Home Experience

The older Oculus Home experience is also going away in favor of the new Dash dashboard, which you'll be able to bring up within any application. Additionally, you'll be able to pin certain screens from Dash-enabled applications (which, based on John Carmack's talk, seem to be just Android APKs). There could be an interesting rise of apps dedicated to this experience, kinda like Dashboard widgets for the Mac when that was a thing.

Oculus Dash

The removal of the app launcher from Oculus Home means Home now becomes a personal space that you can modify with props and environments to your liking. It looks beautiful, though not very useful. Hopefully it lasts longer than PlayStation's Home.

New Oculus Home (pic from TechCrunch.com)

New Avatars

The Oculus Avatars have also undergone a change. They no longer have the weird mono-color wax-doll look but actually look more human, with full color. This was also done to allow for custom props and costumes that you'll be able to dress your avatar in down the road (go capitalism 😉).

New Avatars (Pic from VentureBeat.com)

Another change is that the new Avatars have eyes with pupils! The previous ones with pupil-less eyes creeped me out. The eyes have also been coded to follow things happening in the scene to make them feel more real.

Oh, and finally, the Avatar SDK is going cross-platform, which means if you use the Avatars in your app, you'll be able to use them on other VR platforms like Vive and Daydream as well.

More Video

Oculus has been talking quite a bit lately about how video is a huge use case for VR. A majority of VR use seems to be in video applications, though details on that weren't given. For example, apps like BigScreen that let you stream your PC can't be classified as video or game, since who knows what's being streamed. Also, since actual VR session numbers weren't shared, it's hard to figure out whether the video session count is a lot or not.

Either way, one of the big things Carmack is working on is a better video experience. Apparently last year their main focus was better text rendering, and now the focus is moving to video. The new video framework no longer uses Google's ExoPlayer and improves the playback experience by syncing audio to a locked video framerate rather than the other way around, as is done today.

Venues

One of the interesting things announced at Connect was Venues: a social experience for events like concerts, sports etc. It will be interesting to see how that goes.

Oculus Venues

There were numerous other interesting talks, from Lessons from One Year of Facebook Social to an analysis of what is working in the app store. All the videos are on their YouTube channel.

Conclusion

While I was wowed by a lot of the technology presented, it definitely feels like VR has a Crossing the Chasm problem: there is a pretty passionate alpha-user base, but Oculus is trying really hard to actually bring in the larger, non-gaming-centric audience.


Oculus Go seems like a good idea to get the hardware and experience more widely distributed but what is really needed is that killer app that you really have to try in VR. The technology pieces are right there for the entrepreneur with the right idea.

Tips and Thoughts on Mobile WebVR Development

I have been involved in a few VR projects this last year. While the earlier prototypes used Unity as the development environment, some of the new ones use WebVR, an emerging web standard for VR development.

WebVR, as opposed to native-app VR, does have a few advantages:

  • JavaScript tooling is pretty good and getting better
  • Automatically falls back to an in-browser 3D experience on non-VR devices
  • Not having to compile the app to quickly check the changes in a browser is pretty awesome

The biggest thing though is the kind of experience we have always thought about: moving from one VR experience to another, which is not possible across a series of native apps. I have heard the future of VR referred to as a "web of connected VR experiences" and that is the vision that is truly exciting.

Cyberspace as imagined by Ghost in the Shell

That said, current tooling is much better for native VR apps, with most tools focusing on Unity, which is really the de facto tool for game developers. I really hope the tooling on the WebVR side starts getting better.

Developing for WebVR

The way we currently build for WebVR is by using AFrame, a VR framework for WebGL primarily maintained by Mozilla and the WebVR community. AFrame is built on top of ThreeJS, the most popular 3D library for WebGL. For desktop development, the only browser you don't have to finagle with too much is Firefox. Most of the development is done on Oculus Rifts connected to some beefy PCs.

Current State of WebVR support

Another tool worth noting is Glitch, which provides instant development setups for JavaScript-based apps. Glitch has been very useful for quickly trying out an idea and sharing it internally. The develop -> preview flow is pretty straightforward.

The developer workflow for mobile VR development, though, is a different story. While our current prototype had no requirement to be mobile, I recently tried it on a Google Daydream and found a few bugs. Fixing those seemed trivial, but actually doing it was a lot more painful than I would have thought. Here are some problems I ran into:

Cannot start a WebVR experience from inside VR

Currently there is no available web browser that can launch from the Daydream VR home menu. While Chrome on Android supports WebVR and will trigger an "Insert into Headset" Daydream action when a user taps the VR button in a WebVR experience, there is no way to get to that experience from within Daydream itself. You cannot pin a WebVR experience to your Daydream Home, and WebVR experiences don't appear in your recent VR apps section.

This is actually really frustrating. The workflow to debug a Daydream error is:

  • Fix(?) bug
  • On phone, go to Chrome, launch app
  • Tap “VR” mode
  • Insert phone into headset
  • Verify Chrome Remote Debugger is still connected
  • See if the bug still appears
  • Pop phone out of headset

The constant popping of the phone in and out of the headset gets old really fast. One option may be to add a "reload" button in your WebVR experience, but I am not sure that will work, since you aren't supposed to be able to enter VR mode without an explicit user action (like a button tap).

One thought I did have was to create an Android app with a Manifest declaring it as a Daydream app, and then have its main view just be a WebView. Unfortunately, that didn't work, though I did get the app into the Daydream Home view. A different idea was to have this app launch Chrome with my WebVR app's URL. Again, there were challenges: for one, Chrome launched in conventional view and did not automatically trigger the VR split view for the left and right lenses. To add to this hack, I added a trigger to call AFrame's enterVR() method when the page loaded, which kinda worked, but every launch caused this weird blink as the app went from 2D to VR mode, and it was actually painful to use.

One HUGE tip for this workflow: make sure you have enabled the Daydream debug menu and selected "Skip VR Entry Screens", without which the workflow mentioned above adds like 2 more steps per debug cycle.

Using Chrome Remote Debug

For a lot of my testing, all I needed was the console.log output from developer tools. You can see your logs using Chrome Developer Tools' remote debug feature. Not sure if I was doing something wrong, but I kept losing the connection to the active tab every time I reloaded the page to check. Really annoying. At the end of the day, I did discover the A-Frame Log Component, which I haven't used yet but intend to very soon.

Lack of a Daydream Controller Emulator

If you are developing for VR, your productivity is directly proportional to how much of the development you can do without putting on the headset. With WebVR, since your app automatically works in a browser, you can do a lot of development without the headset. Unfortunately, this breaks down when you are trying to code around user interactions. You can use the mouse as a raycast source, which gets you partly there, but you really want an emulator for the hand controllers to try different things out.

Daydream officially has an emulator for its controller, but it only seems to target Unity- and Unreal-based projects. There are other projects like DayFrame for AFrame, but since my problem was specific to the Daydream controller, using a proxy emulator didn't make much sense.

What I really wanted was to pair the official Google Daydream controller to my PC, but I haven't been able to find any way to do that yet.

Conclusion

I have been generally enjoying working with AFrame, and it has a surprisingly (to me) strong community of active developers. However, the developer workflows, especially for on-device testing, still need work. Ideally what I am looking for is a one-click flow that deploys my WebVR app to a server and then launches Daydream pointed at the WebVR page running in fullscreen VR. Or even better, a WebVR/AFrame equivalent of Create React App or similar boilerplate projects that automatically sets up all the best tools for developing and testing WebVR projects both in the browser and on-device.

Different Programming Metaphors

It's interesting that, for an industry pushing humanity into the future, software engineering practices have not changed significantly in the last 50 years. We are still using basic text editors with syntax highlighting, often on machines with hundreds of times the power of the devices they were originally designed for, an irony highlighted by Bret Victor in his talk linked below.

I have been thinking about this for a while and collecting links on different ideas around it for the last few years. The deck below collects some of these ideas. If you have others that could be added, please leave a comment.

Other Links

An interesting article on exploring visual programming in Go, with some interesting points on why visual programming failed.