I am a little behind on my 2018 review, seeing that it's almost mid-January already. But better late than never, so here goes.
VR
2018 continued to be a year of tremendous professional education. For a big part of the year I continued to work in the VR space, specifically using A-Frame and WebVR. I am definitely a fan of WebVR, and I really do believe that the app model prevalent in the mobile ecosystems is a bad idea for VR. I completely agree with the vision of VR as a web of experiences, and the web technology stack has matured to a point where deploying a VR experience is trivial. Hopefully more people will take up building VR experiences in JavaScript and WebVR. What the community really needs is a diversity of ideas, and to grow VR beyond its early base of gamers.
While it was a lot of fun, I am looking at other things beyond VR this year and am excited about certain new ideas I am playing with. I will share more on that later.
Blockchains
I did a fair bit of work on Blockchains in 2018, mostly at the DApp level. It's early days for this space, but I do believe they present a once-in-a-generation opportunity for a step-function change in how we use technology. There is a lot of pessimism about the space right now, after the unrealistic craziness of the bubble that saw Bitcoin hit $19,000, but I am excited about where the tech is going.
I spoke on a panel about the Philly Blockchain scene at the Coinvention Conference (Thanks Mike), as well as at the inaugural session of the Drexel Blockchain club (Thanks Adit).
That’s me, that pink blob there!
Besides these, I did work on a LOT of JavaScript and Rails, which has been pretty rewarding, and learned a lot about Google's cloud infrastructure, especially Firebase. I need to distill a lot of that into a future blog post.
Reading
I did a fair bit of reading this year, but I also abandoned a lot of books halfway. I am trying to be okay with that rather than pushing through a bad book just to complete it. I do wish there were a better app than Goodreads for books though.
Misc
Some other things that happened this year:
Philly Google Developer Group (GDG) continues to go strong in its 8th (!) year since its start as AndroidPhilly in 2011. It's a great community that I look forward to meeting at least once a month, and I have made some great friends there.
I didn't travel as much for work this year, which was good. My favorite event, though, was the MIT Media Lab's Fall Member Event. I do like all the demos that the Media Lab's groups present, but the best part is the conversations with other sponsors from different organizations.
I worked with a lot of interns and co-ops this year, mostly from Drexel, and I loved it. They are smart and enthusiastic, and I find conversations with them refreshing since they question a lot of the assumptions I often hold. Maybe I should consider some work in academia 😉
2019 is guaranteed to be a year of many changes and I am excited for most of them. Stay tuned.
In the last couple of years I have found myself on a couple of projects using web-based rich text editors. Since these projects were written in React, I primarily focused on libraries that worked with that framework, but I have been keeping an eye on other projects as well. This post is a braindump of my thoughts on the space.
Many years prior to that, a few friends and I had tried building a web-based Evernote clone that would do smart things based on patterns we could define as regular expressions. For example, if I typed an email address into my document, the JavaScript would recognize it and convert it to a mailto: link. With the advent of contentEditable, we thought this would be easy to do.
We were totally wrong. It turned out contentEditable is a terrible API, as documented in this article by the Medium engineering team. Modifying the underlying document while the user was editing it was just impossible to get right, even in a single browser. And for bonus pain points, every browser generated different underlying HTML when the user edited a contentEditable element.
DraftJS
About 3 years ago I was assigned to work on an online CMS for an internal portal at work and started looking at RTEs again. Since the rest of the project was going to be in React, I started looking closely at React libraries. React had one advantage that didn't exist during our earlier adventure: since React keeps the document structure in its Virtual DOM, it didn't have to fight with the browser-specific implementations of contentEditable, nor fight the browser over things like cursor position and selections.
DraftJS seemed like the best choice at the time. It was (and still is) actively developed by Facebook and is used by them for text editors on Facebook.com.
Generally it worked well enough. We could have used more documentation, but we were able to get a default experience working. Draft comes with very few extensions and really leans into its "toolkit" nature by having you code behavior I'd have expected by default (a sketch of that is below). There is a separate project called DraftJS-Plugins that gives you a lot of that. Weirdly, DraftJS-Plugins requires you to use its own Editor, which seems to be a modified version of the DraftJS editor. Ideally Draft would support plugins out of the box.
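To give a flavor of that: even standard keyboard shortcuts like Cmd+B for bold only work if you wire them up yourself. A minimal sketch (the component is my own illustration, written in the current hooks style rather than what we shipped back then):

import React, { useState } from 'react';
import { Editor, EditorState, RichUtils } from 'draft-js';

function MyEditor() {
  const [editorState, setEditorState] = useState(EditorState.createEmpty());

  // Map key commands like 'bold' (Cmd+B) to editor state changes ourselves.
  const handleKeyCommand = (command, state) => {
    const newState = RichUtils.handleKeyCommand(state, command);
    if (newState) {
      setEditorState(newState);
      return 'handled';
    }
    return 'not-handled';
  };

  return (
    <Editor
      editorState={editorState}
      onChange={setEditorState}
      handleKeyCommand={handleKeyCommand}
    />
  );
}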
We did make a mistake with storing the document though. Our thinking was that the document should be saved in the database as HTML, and that Draft should just re-render the HTML whenever the document needed to be edited. Turns out Draft really prefers saving and loading a serialized version of its own document state. This makes some sense, but it makes any future migration to a different editor much harder.
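For reference, the save/load round trip Draft prefers looks roughly like this (a sketch; convertToRaw and convertFromRaw are Draft's own serialization helpers):

import { EditorState, convertToRaw, convertFromRaw } from 'draft-js';

// Saving: serialize Draft's internal ContentState to plain JSON for the database.
function serializeDocument(editorState) {
  return JSON.stringify(convertToRaw(editorState.getCurrentContent()));
}

// Loading: rebuild the editor state from the stored JSON later.
function deserializeDocument(json) {
  return EditorState.createWithContent(convertFromRaw(JSON.parse(json)));
}

This brings me to the next library I looked at.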
Ghost
Ghost is a pretty interesting CMS. Visually it's very polished, and the editor interface is pretty amazing. Earlier versions of Ghost used an editor centered around Markdown, which is awesome but not a fit for the target audience of our system. Ghost also had very limited database support: basically SQLite for development and MySQL for production. Since we didn't want to use MySQL, that was a deal-breaker.
But from the point of view of text editors, the recently released Ghost 2.0 shipped with a new and much nicer editor. The editor is now part of the main Ghost repo, but I imagine it can (and should) be extracted into its own project.
What's really interesting about the Ghost editor is that it is built on MobileDoc-Kit, a library for building WYSIWYG editors that support rich content via cards, along with an open source spec for these documents called MobileDoc. This addresses my issue with storing a proprietary format for a document in the database. So yay!
WordPress Gutenberg
Meanwhile, WordPress has also been working on its own new editor, which just shipped as part of WordPress 5.0 today. In a big departure from previous PHP-centric WordPress editors, Gutenberg is written in ReactJS. Additionally, unlike Draft (as well as the Ghost 2.0 editor, I think), Gutenberg works on mobile devices. My biggest issue is that I really hate WordPress's "HTML comments as a data structure" approach (example below), and I am not the only one. Using Gutenberg has so far been pretty okay. If you'd like to learn more about it, read Matt Mullenweg's post on it today.
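For the curious, a Gutenberg post body is serialized roughly like this, with block boundaries and block attributes encoded as HTML comments wrapped around the markup (a simplified illustration):

<!-- wp:paragraph -->
<p>Hello from Gutenberg.</p>
<!-- /wp:paragraph -->

<!-- wp:image {"id":42} -->
<figure class="wp-block-image"><img src="photo.jpg" alt="" /></figure>
<!-- /wp:image -->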
CKEditor
Other projects also crossed my path along the way. CKEditor, for example, looks promising, but I haven't dug into it. What makes it really interesting is its newly added support for collaborative editing out of the box, something I'd love to add to my project, though it's not a big ask (yet).
Trix
If Rails is your thing, Rails 6.0 will include a Rich Text Editor out of the box which is really cool. The editor itself, Trix, is out right now and can be added to your project if you need it.
Final Thoughts: Blocks
One interesting trend is that almost all of these editors are moving toward the concept of a rich document as a group of content blocks. This is a departure from earlier architectures that treated the entire document as one rich text blob with all the formatting. One reason for this is the ability to export the content blocks in a variety of formats besides simple HTML (like Markdown).
Blocks also let you use non-text elements within the document. These can be things like media, or even rich widgets like photo galleries. Given a well-defined document data structure, these can also be supplied by third parties.
For example, Elementor has released a bunch of widgets for WordPress's Gutenberg.
Modern word-processor-centric startups are taking these ideas even further. For example, Notion takes the same block-based approach to a personal notes app.
I was recently looking at scaling up an API currently hosted on Heroku. While adding dynos to Heroku was an option, I also thought it was a good excuse to get more familiar with Google Cloud Platform (GCP), which I have been curious about for a while and have had some really good conversations about at previous GDG Philly events.
After setting up the Google Cloud SDK and enabling billing on my active project, I clicked on the Cloud Endpoints left nav item to get this screen…
Turns out, the Cloud Endpoints service has no user interface by default. To get started, you have to deploy your API spec (as a Swagger 2 / OpenAPI document) before you see anything at all. Maybe that works for folks who live and breathe the command line, but I was kinda put off, especially since I was aware of how AWS API Gateway worked (we'll look at that later).
Anyway, clicking on "Learn more" and then a further link to the documentation takes you to a page with a table of supported platforms.
Based on that documentation, it seems that the only JSON APIs that Cloud Endpoints can front are those hosted on Google App Engine or Compute Engine.
Argh! (Though gRPC does look kinda interesting, just not for now.)
At this point I actually debated deploying the Rails app on App Engine. How hard could that be? After all, they have a blog post on how to do it.
After an hour of configuring, I was able to deploy my Rails app to Google Cloud, but it took too much effort to connect it to the Cloud SQL backend, and even more to try to get migrations running (I never did; I finally gave up). Google does have a video on getting Rails migrations working using their Cloud SQL proxy, but I was kinda done by then. This was taking me down a path I wasn't really planning to go down.
Getting it to work on Amazon's API Gateway was so much easier.
The AWS console for API Gateway actually has a fairly intuitive user interface for fronting any third-party API with API Gateway. At the end, you get a nice UI that describes the flow from the client through the gateway to your API base URL (screenshot below).
I was genuinely surprised at how easy AWS has made their service, and hopefully Google Cloud Endpoints reaches the same level of simplicity. Given how much Google is investing in their cloud services, I am sure this post will be out of date in a couple of months, but for now AWS API Gateway does seem to have the edge.
For the last year or so I have been working on an app for teaching programming that I hope to release in a few weeks. The CMS for the app is a React-based web app powered by a Rails server. The Rails server runs in API-only mode, which has been working out reasonably well.
One of the recent additions to Rails is the ActiveStorage system, which purports to make attaching media to Rails models really simple. Unfortunately, most of the documentation around it involves Rails' view tier as well, which API-only mode strips out. In fact, there is an open bug on ActiveStorage to make it work out of the box in API-only projects.
After a couple of days of struggling through it, I finally have it working in API-only mode, using the React Dropzone component instead of a boring file picker.
Also note, there is an open source React component for interacting with ActiveStorage which I did not try, mostly because I was already halfway done with the Dropzone implementation by the time I discovered it.
Issue #1: ActiveStorage CSRF errors in API-only mode
ActiveStorage ships with a JavaScript library that does a lot of the work, but one of the issues with using ActiveStorage from a React app is that it tries to do CSRF token validation on the requests and cannot. To solve it, I added an initializer in the initializers folder that skips CSRF checks for ActiveStorage's requests.
Issue #2: CORS errors in development
This one isn't a bug, but when working on a React app created with Create React App, the React app runs on port 3000 while the Rails server runs on port 3001. To prevent CORS issues when uploading images, I had to add the Rack::Cors gem and a CORS initializer that allows requests from localhost:3000 during development:
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'localhost:3000'
    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end
Also, since the React app was using DirectUpload to S3, you have to add the right CORS settings for your S3 buckets as well (this blog post helped me figure out the S3 CORS issue).
Uploading an Image using React and Dropzone:
The core part of the image uploader component boils down to handing react-dropzone's dropped files to ActiveStorage's DirectUpload class.
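A minimal sketch of that wiring, assuming the react-dropzone and activestorage npm packages (the component shape, prop names, and endpoint constant are my illustration rather than the exact code):

import React from 'react';
import Dropzone from 'react-dropzone';
import { DirectUpload } from 'activestorage';

// ActiveStorage mounts this direct-upload endpoint on the Rails server.
const DIRECT_UPLOAD_URL = 'http://localhost:3001/rails/active_storage/direct_uploads';

// Upload one file straight to the storage service; resolve with the blob
// metadata (including signed_id) that ActiveStorage hands back.
function uploadFile(file) {
  return new Promise((resolve, reject) => {
    const upload = new DirectUpload(file, DIRECT_UPLOAD_URL);
    upload.create((error, blob) => (error ? reject(error) : resolve(blob)));
  });
}

// Dropzone gives us the dropped files; upload each one and pass the
// resulting signed_ids up so they can be attached to a model via the API.
function ImageUploader({ onUploaded }) {
  const onDrop = (acceptedFiles) =>
    Promise.all(acceptedFiles.map(uploadFile)).then((blobs) =>
      onUploaded(blobs.map((blob) => blob.signed_id))
    );

  return <Dropzone onDrop={onDrop}>Drop images here</Dropzone>;
}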
The DirectUpload class comes from the ActiveStorage JavaScript package. When this code runs, the image is uploaded to the storage provider and you get back a blob (just a JavaScript hash) with a bunch of metadata (which is also saved in the `active_storage_blobs` database table).
One of the keys in the hash is called signed_id. To assign the uploaded image as an attachment to a model, the model has to have a has_one_attached or a has_many_attached field. In my case it's something like:
class Book < ApplicationRecord
  has_many_attached :images
end
In my book controller I can then accept a PUT request that updates the record like so:
@book.images.attach(params[:signed_id])
or better yet just let strong_params do it automatically.
I recently updated the React Native app I have been working on for a while from RN 0.47 to 0.55. I'll admit I was a bit cavalier about the update and hadn't really looked at the changelog, but hey, version control does give one a foolish sense of bravado.
Anyway, needless to say, there were issues. As of RN 0.55.4, `setJSMainModuleName` has been renamed to `setJSMainModulePath`, and it took me a bit of sleuthing to figure that out (find the GitHub commit here).
However a bigger issue came up when I tried to package the app after resolving the compile errors.
This was a total fail for me, since my app uses local, symlinked npm modules to hold pieces of common code for the web and mobile clients, and the React Native packager does not follow symlinks.
Thankfully, someone did come up with a bit of a hack: a script that generates absolute paths for all symlinked libraries and launches the packager's cli.js with a config file containing that list of absolute paths.
It works for now, but hopefully this bug will get fixed soon.
Last week's Comcast Lab Week gave me another opportunity to dig deeper into Blockchains. In my previous write-up on CodeCoin, I described using Ethereum to create a bounty system for GitHub issues. Under the hood, however, we had cloud servers managing various wallets belonging to the different issues. Smart Contracts offer a better way to handle this.
What Are Smart Contracts?
Smart Contracts are pieces of code that execute on the Blockchain. Think of them as classes in an object-oriented programming model. Once deployed, you can invoke methods on the contract from any wallet on the Blockchain. A method, when called, receives not only the parameters you explicitly sent as part of the call, but also the caller's wallet address and any value (Ether, or fractions of it denominated in Wei) sent along with the call. The contract can then keep part or all of that value in return for executing the code.
The world’s simplest Smart Contract.
Note that the above contract is a “hello world” contract. Ours was a bit more complicated.
Tools and Setup
Similar to last time, we used the TestRPC program, now rebranded as Ganache (or specifically, its CLI version, Ganache CLI), to develop and test the application. The app itself was pretty simple: it allowed users to rent an asset in our store by sending a particular amount of Ether to our smart contract.
Configuring Ganache
In addition to Ganache, we also used the Truffle framework to build the application as well as MetaMask to run the transaction.
The Smart Contract itself was written in Solidity. Even though there are many editor plugins you can add to your favorite editor, I found the cloud-hosted Remix IDE the best way to get started. It comes configured with linters and debug tools that make developing your contract much simpler.
The Truffle framework can be thought of as an equivalent of Ruby on Rails for Ethereum projects. When you start a new project, it creates a project structure with folders for contracts, migrations, and tests, plus a truffle.js configuration file that lets you deploy your code to various dev/test/prod environments (think of truffle.js as the equivalent of a Rails config file).
Once you place your contract in the contracts/ folder, running truffle compile compiles the Solidity code to bytecode that can be deployed to the Ethereum Blockchain, and running truffle migrate deploys the contract to the chain. Truffle also provides a handy REPL console that you can use to interact with your contract from the command line. It is very convenient for things like finding your contract's deployed address.
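Something like this, inside truffle console (Truffle injects a JavaScript abstraction for each deployed contract):

// Prints the on-chain address of the deployed SimpleStorage contract.
SimpleStorage.deployed().then(function (instance) {
  console.log(instance.address);
});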
Find the address of the SimpleStorage Contract
Running the Application
Our client was a small React app that would send a little bit of Ether to the deployed smart contract. If the transaction succeeded, we'd send the transaction ID to a middleware server that would validate it and authorize the user to view a piece of content.
The client code communicates with the Blockchain via the Web3 library injected into the browser by MetaMask. The code to use the contract looks something like this:
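A sketch of the shape of that code (the ABI constant, contract address, rentAsset method name, price, and /api/authorize endpoint are placeholders, not our real names):

// `web3` is injected by MetaMask. The ABI JSON comes out of Truffle's
// build/contracts/ directory after `truffle compile`.
const contract = new web3.eth.Contract(RENTAL_CONTRACT_ABI, CONTRACT_ADDRESS);

function rent(assetId) {
  return web3.eth.getAccounts()
    .then(function (accounts) {
      // MetaMask pops up its confirmation dialog for this transaction.
      return contract.methods
        .rentAsset(assetId) // hypothetical contract method
        .send({ from: accounts[0], value: web3.utils.toWei('0.01', 'ether') });
    })
    .then(function (receipt) {
      // Hand the transaction hash to the middleware server for validation.
      return fetch('/api/authorize', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ tx: receipt.transactionHash }),
      });
    });
}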
That’s pretty much it.
Random Learnings
We had issues with MetaMask and Ganache seeing each other's wallets. This might be by design, but to get anything done I had to write another small server script that funded MetaMask accounts with Ether from Ganache accounts.
All transactions are denominated in Wei (1 Wei = 10^-18 Ether). During development we'd send small values like 10 Wei across and were perplexed that MetaMask and other wallets didn't change their displays, till we realized that the number was too small for MetaMask to show in its UI.
We treated a successful transaction submission as a successful payment and did not wait for the transaction to be mined before declaring success. We should have been waiting on that using Web3's filter API, but we just ran out of time on the project.
To connect to a contract using Web3, you need to point it at the contract's address and its JSON interface. Truffle's console has a `toJSON()` method, but that is not the JSON you are looking for; the right JSON file is located in the `/build/contracts/` directory. Once you have that JSON file, you can create the Contract object in Web3 by using
var myContract = new web3.eth.Contract(SimpleStorageABI, address)
We found an interesting project called OpenZeppelin, which seems to be a public repo of commonly used contracts. I need to try that next.
Final Thoughts
This was a fun five-day project that demystified a lot of things around Smart Contracts that I wasn't sure about. And as always, it was great to work with a bunch of smart engineers I don't usually get the opportunity to work with otherwise 🙂
During my last trip to India (this January) I was once again struck by how different the Indian tech scene is from the US. Last time I wrote about the very different mobile behavior in India; this trip, I was more struck by India's emerging transition to a digital economy. India is going through an interesting phase where its leaders, specifically Prime Minister Modi, are pushing a change towards a more digital nation. The road is bumpy, but hopefully it will lessen some of the big problems India faces today.
In my three-ish weeks there, I came across a number of things I found interesting. Here are some notes.
Digital Society
India is at an interesting moment in history, with the Prime Minister pushing the nation into the digital age. These initiatives include a digital identity program (Aadhaar), bank accounts for every citizen, a universal digital payments API mandated for every bank (UPI), and more.
The transition may not be smooth, as evidenced by occasional reports of data breaches and an overzealous broadening of the scope of what Aadhaar was supposed to enable (which is going through a course correction now), but I am optimistic that this can really accelerate digital services in India and arrest the corruption epidemic that plagues the non-digital economy.
Sometime in the next couple of months, I am hoping to dig more into the India Stack, which aims to be the platform for the new digital society.
Digital Payments
The other really interesting thing to see was the rise of "pay-with-QR-code" options all over the city, once again enabled by the UPI banking API and accelerated by the demonetization event in 2016.
Digital payment companies like PayTM saw huge growth in the last year, and with WhatsApp's recent announcement of adding payments into the app (which Indians are addicted to), there will be a huge shift towards a more cashless economy.
The government is clearly doing everything it can to push the transition, and news reports like the one shown below, announcing that government-controlled services will cost digital payers less, are becoming commonplace.
There are definitely a lot of pay-with-QR-code options, but I am curious whether systems like this could be abused. For example, I ran into the sign below at a railway station, and while I am sure it's legitimate, it could just as well have been part of a scam where someone pasted up their own signs when people weren't looking.
Some of this is prevented by two-factor auth and one-time passwords (OTPs), which are enforced for all digital payments. Every time you make a payment, you get an SMS to confirm the transaction, and it will not go through till you reply to that message.
Uber vs Ola
The availability of Uber and Ola ride-sharing services has also been good to see. Ola, at least for now, does outnumber Uber, though I ended up taking each of them roughly the same number of times. And the fact that my Uber app from the US worked without a hitch (given it was connected to my globally valid American Express card) was also great. It's a real convenience in India, where it's easy to travel 100 miles and end up in a place where you don't speak the local language. It's also a relief to get away from the haggling over the price of a ride that used to be the norm.
Uber and Ola do have other challenges in India though, from less accurate GPS data for mapping to local drivers who can't read the English used in a lot of routing apps (the Forbes article is an interesting read on the local challenges).
Oh, and apparently India is about to launch its own GPS system to address this and other local mapping challenges.
Crime
India has had a lot of spotlight shone on its rampant crime problem, especially against women. While some of these problems are too systemic to really be fixed in a few years and require a huge cultural change, there are a lot of initiatives at play here as well. It's also a warning for startups aiming to launch in India: personal safety is a given in a lot of societies, which is what allows the sharing economy to work, but applying the same model in a complex country like India can really burn you, as it did Uber, which was banned from Delhi for a while after a rider was raped by a driver there.
Last year, the Delhi police launched the Himmat app to allow women to broadcast an SOS to the police if they feel threatened. The app itself has had limited success and feels rather poorly thought through (would you really have the time to find and launch an app if you were attacked?), but hopefully it's a work in progress.
Uber has added a series of safety features as well, including partnering with the Himmat app, as part of its initiatives to improve rider safety and get unbanned.
I would love to see more startups look into this space. Personal safety for women is a big problem globally, and services like Roar can make a real difference while also carving out a sustainable business for themselves.
Conclusion
It's definitely an interesting time to be a technologist in India. There is a strong drive to grow technology companies and transform society into a more digital one, and there is no dearth of problems that need to be addressed. It'll be interesting to watch India's transformation over the next 5 years as it goes through this phase.
2017 was an intense year of learning for me. A change of charter late last year for the labs group I work in meant we focused more deeply on core technology, which was exciting as a technologist. This year the list included Unity, WebVR, Blender, React, React Native, Ruby on Rails and Blockchains (specifically Ethereum). Phew!
Oh! And I was recognized by the Philadelphia Business Journal as one of the city's 10 Technical Disruptors to watch.
A big part of this year for me centered around building Virtual Reality experiences. The first half of the year was focused on building these experiences in Unity, which is a very different environment to work in compared to Xcode or Android Studio (which I was deep into last year), but more reminiscent of my previous work in Flash. I really do enjoy Unity, and this year made me truly appreciate the game development process. My friend and colleague Jack Zankowski (who did most of the design for our earlier VR work) gave a talk on our early VR experiences at a WiCT event early this year.
Later in the year, however, we started doing a lot more work in WebVR which, though flaky at times and full of platform-specific eccentricities, was still a much faster way to prototype VR experiences. Using A-Frame, ThreeJS and WebGL was a fantastic learning experience, and hopefully I can do more web animation and 3D graphics work, with or without VR, next year.
I gave a talk on building WebVR experiences at PhillyGDG that you can find below.
One thing I didn't see coming was how much time I'd end up spending with Blender this year. I had never worked with 3D modeling tools before, but our VR project needed 3D models, and since I have some experience with illustration and design (I used to work as a freelance illustrator), that task fell on me. In the last 4 months of working with Blender, I have gone from god-awful to okay-ish.
Blender work in progress
Another project I was very involved with was an internal knowledge portal for our team that we built with ReactJS and Express. Having never used React before this year, that was educational as well, and I completely fell in love with it (even given its weird licensing, though hopefully that's starting to change).
The project also made me look deeper into React Native as a platform for mobile experiences. Late last year I had started an app (more on that later) that needed a CMS and a native mobile client, and I gave a talk on that at Modev DC and at React Philly.
I built the CMS in Rails, being most familiar with that, though that wasn't saying much as of last year. This year, I definitely feel I have leveled up my Rails game a bit. Perfect timing, as most Rails devs I know are moving to Elixir/Phoenix or Node/Express 😅
A lot of time this year was also spent giving talks on Blockchains to various internal and external groups. Turns out I needn't have bothered, since crypto-mania swept the US this year and now EVERYONE is talking about Blockchains. But I did get to work on one project with the technology, so that was cool.
That about covers the tech news in 2017, and there are already a few interesting projects in the hopper for 2018. Stay tuned 🙂
If you know me, there is a good chance that you know how 👍 I am about Blockchains and decentralized apps. I have given a few talks on the subject, but till recently these were mostly focused either on Bitcoin or on the academics of Blockchain technology. At a recent Comcast Labweek, I was finally able to get my hands dirty building a Blockchain-based decentralized app (DApp) on Ethereum.
Labweek is a week-long hackathon in Comcast's T&P org that lets people work on pretty much anything, and I was pretty fortunate to end up working with a bunch of really smart engineers. The problem we decided to look into was the challenge of funding open source projects. I am pretty passionate about open source technologies, but I have seen great ideas die on GitHub, because supporting a project when you aren't getting paid for it is really hard. Our solution to this problem was a bounty system for GitHub issues that we called CodeCoin.
The way CodeCoin worked was as follows:
A project using CodeCoin would sign up on our site and download some Git hooks.
When anyone creates an issue on GitHub, we create an Ethereum wallet for the issue and post the wallet address back to GitHub so it's the first comment on the issue.
We use a Chrome extension that adds a "Fund this issue" button on the GitHub page, which starts the Ethereum payment flow.
To actually handle the payment, we require MetaMask, which we can trigger using its JavaScript API (see the sketch after this list).
Ether is held in the wallet till the issue is marked resolved and merged into master. At that point, another Git hook fires that tells our server to release the Ether into the wallets of all the developers who worked on the issue.
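The MetaMask trigger itself is tiny. A sketch (the addresses and amount are illustrative; the real values came from the issue's comment and the user's MetaMask account):

// `web3` is injected into the page by the MetaMask extension. Calling
// sendTransaction pops up MetaMask's confirmation dialog for the user.
web3.eth.sendTransaction({
  from: funderAddress,           // the user's MetaMask account
  to: issueWalletAddress,        // the wallet posted as the issue's first comment
  value: web3.utils.toWei('0.1', 'ether'),
}, function (error, txHash) {
  if (!error) console.log('Funded issue, tx: ' + txHash);
});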
Issue page design. Most of the UI changes came from a custom Chrome extension
Application Flow
Note that while we held the Ether in wallets on our side, the right way to do this would have been a Smart Contract. We started down that route, but since most of the code was done in about 2 days (while juggling other real projects), wallets seemed like the easier route.
Releasing money into developer accounts was also a hack. Since developers don't sign up to GitHub with any digital wallet address, we needed the wallet addresses as part of the final commit message. This could perhaps be done with a lookup on a service like Keybase.IO, and with more time we would have tried integrating it into our prototype. In fact, just the next week I heard about Keybase's own Git offering; I haven't read enough about that yet though.
Development notes:
For local development, we used the TestRPC library to run an Ethereum chain simulation on our machines.
We used web3js, the Ethereum JavaScript API, for doing most of the actual transactions.
Web3js was injected into the browser by the MetaMask extension. There were some challenges getting MetaMask to talk to TestRPC. Basically, you had to make sure that you initialized MetaMask with the same seed words as you used for your account on TestRPC (which makes sense), but there isn't a way, AFAIK, to change that information in MetaMask afterwards. Early on, we were restarting TestRPC without configuring the initial accounts, so we'd have to reinstall MetaMask to configure it with the new account. Chalk that up to our own unfamiliarity with the whole setup.
MetaMask transaction
We did try to use Solidity to run a smart contract on TestRPC, which worked for the demo apps, but we canned that effort at the last moment as we were running out of time.
All in all, it was a fun couple of days of intense coding, and I feel I learned a lot. Most of all, I enjoyed working with a group of really smart peers, most of whom I didn't know before the project at all. Hopefully we get to do more of that in the future 🙂
I had a great time last week attending Oculus Connect 4. Just like last year, the keynotes were really interesting and the sessions pretty informative. Here are some quick thoughts on the whole event:
Oculus Go and Santa Cruz
Oculus announced two new self-contained headsets: the Go, an inexpensive ($199) 3DoF headset coming early next year, and, much later, Project Santa Cruz, a 6DoF headset with inside-out tracking. What's interesting is that both of these devices will run mobile CPU/GPUs, which means that 3 of the 4 VR headsets released by Oculus will have mobile processing power. If you are a VR developer, you had better be optimizing your code to run on low-horsepower devices, not beefy gaming machines.
Oculus Go
Both Go and Santa Cruz are running a fork of Android.
The move to inexpensive hardware makes sense, since Oculus has declared it their goal to bring 1 billion people into VR (no time frame was given 😉).
Oculus Dash and new Home Experience
The older Oculus Home experience is also going away, in favor of the new Dash dashboard that you'll be able to bring up within any application. Additionally, you'll be able to pin certain screens from Dash-enabled applications (which, based on John Carmack's talk, seem to be just Android APKs). There could be an interesting rise of apps dedicated to this experience, kinda like Dashboard widgets for Mac when that was a thing.
Oculus Dash
The removal of the app launcher from Oculus Home means Home now becomes a personal space that you can modify with props and environments to your liking. It looks beautiful, though not very useful. Hopefully it lasts longer than PlayStation's Home.
New Oculus Home (pic from TechCrunch.com)
New Avatars
The Oculus Avatars have also undergone a change. They no longer have the weird mono-color wax-doll look, but actually appear more human, in full color. This was also done to allow for custom props and costumes that you'll be able to dress your avatar in down the road (go Capitalism 😉).
New Avatars (Pic from VentureBeat.com)
Another change is that the new Avatars have eyes with pupils! The previous ones with pupil-less eyes creeped me out. The eyes have also been coded to follow things happening in the scene to make them feel more real.
Oh, and finally, the Avatar SDK is going cross-platform, which means if you use the Avatars in your app, you'll be able to use them on other VR platforms as well, like Vive and Daydream.
More Video
Oculus has been talking quite a bit lately about how video is a huge use case for VR. A majority of VR use seems to be in video applications, though no detail on that was given. For example, apps like BigScreen that let you stream your PC can't cleanly be classified as video or gaming, since who knows what's being streamed. Also, since actual VR session counts weren't shared, it's hard to figure out whether the video session count is a lot or not.
Either way, one of the big things Carmack is working on is a better video experience. Apparently last year the main focus was better text rendering, and now the focus is moving to video. The new video framework no longer uses Google's ExoPlayer, and improves the playback experience by syncing audio to a locked video framerate rather than the other way around, as is done today.
Venues
One of the interesting things announced at Connect was Venues: a social experience for events like concerts, sports, etc. It will be interesting to see how that goes.
Oculus Venues
There were numerous other interesting talks, from Lessons from One Year of Facebook Social to an analysis of what is working in the app store. All the videos are on the Oculus YouTube channel.
Conclusion:
While I was wowed by a lot of the technology presented, it definitely feels like VR has a Crossing the Chasm problem: there is a pretty passionate alpha-user base, but Oculus is trying really hard to actually bring in the larger, non-gaming-centric audience.
Oculus Go seems like a good way to get the hardware and the experience more widely distributed, but what is really needed is that killer app that you just have to try in VR. The technology pieces are right there for the entrepreneur with the right idea.