For the last three days I have been in San Jose at the third official Oculus Connect conference (OC3), and it's been amazing to see some of the prototypes, talk to a few developers, and just learn from other folks who are charting the VR space. It's been a great mix of education and fun, and it has given me a lot to mull over on the flight back tomorrow. Here are some highlights:
The Social Focus: Putting people at the center
Social was definitely the big message at OC3, and the Facebook Social concept that Mark Zuckerberg demoed was really well done.
I really liked the avatars and the way they worked here. What was a little weird was that, on the same stage a few minutes later, they introduced the new Oculus Avatars system that we as developers are encouraged to use. These two projects are totally independent of each other, and a later talk by Facebook's Mike Booth covered a lot of learnings from building Facebook's avatar system that fly in the face of the look of the official Avatars app/SDK. Hopefully the two will be merged at some point in the future, but there is a real chance they may not.
Oh, and I hope you didn't like the boxy avatars from the Oculus Social app, because that effort seems to be dead.
Oculus finally revealed the pricing and availability of the Touch controllers. At $200 they are a little pricey and bring a full Rift setup to almost exactly the same price as the Vive. That aside, the controllers are really nice, and bringing your hands into VR raises the level of immersion tremendously (at work we had used the Leap Motion sensor on top of the DK2 to get some hand tracking in a demo, but thankfully we can leave those kludges behind). The one unfortunate thing about the Touch controllers being an optional purchase is that developers can't rely on them being available, and may avoid building for them rather than split the market. Hopefully most Rift owners choose to get them, because I will say they work really well.
Oculus can now do room-scale VR, but it requires a third sensor that you can buy for $79. I can't imagine a lot of people going for this, at least immediately, so room-scale may remain the domain of the Vive for now.
One of the messages Oculus wanted to send was that passive experiences shouldn't be dismissed, since usage of the Oculus is apparently equally divided between games and video apps. At the keynote, Oculus also announced a Video SDK that will let video publishers create content while letting Facebook host and distribute it efficiently, based on their research into optimized 360 video streaming (foveated rendering, etc.). I need to dig more into this.
Other interesting announcements included:
Facebook continuing to fund more VR development with another $250M fund for VR apps and games
Oculus is adding an Education category to their store, so expect more apps and games in that space
Cheaper Oculus Ready certified PCs including a $500 one. Oh and Oculus Ready certified laptops for you developers on the go.
Most of the OC3 event was about trying out demos of different games coming soon. There were lots of good ones to choose from, but Eagle Flight was totally awesome.
Favorite moment: Talking to John Carmack
Okay, so this was a total nerd moment, but I have been a big fan of John Carmack for a long time; he has kinda been a geek hero of mine. Being able to talk to him for a bit was really amazing. I even captured a part of it on video (vertical, because I saved it from Periscope 🙂).
I had the pleasure of helping Matthieu Lecce and Danelle Ross with an effort to use computer vision to detect the volume of transparent liquid in a glass container. The project built on previous work by Matthieu and the research team on seeing glassware (more information on that research can be found here). Technically Philly did a great writeup on their site on the final day of the project as well.
The program was really intense for the teachers, and major props to Danelle for completing it and going through this intense learning experience. My own contribution was fairly limited: I spent only a couple of hours a week with the team, answering questions about how groups like Comcast Innovation Labs (where I work) investigate such technology and how machine vision is being used in Virtual Reality and Augmented Reality, a domain I am currently very involved in at the lab.
More than anything though, it was a great learning experience for me. I learnt a lot about how machine vision is approached (my only real experience with raw machine vision previously had been some OpenCV experiments in Processing and some half-complete Android projects with face detection). Learning the core concepts that go into machine vision, and the current state of the art in that field, was a great experience.
For the last 8 months, I have been working on VR prototypes for Comcast, one of which we showed off recently at the Cable Show and have been talking about at different conferences (you can see the deck here).
We recently started adding some new faces to my team, so I figured it might be a good idea to put down a quick how-to on GearVR apps. A lot of it can be found at different links on the internet, but this might be useful to go from zero to a quick “Hello World” app using Unity.
1. Setting Up GearVR for Android build:
Since VR apps for the GearVR are Android apps, the standard Android setup for development is required:
Go to the Settings app on your Samsung phone and go to About Phone settings.
Tap the “Build Number” item on the list repeatedly till you see a “You are now a developer” message
Now in the main Settings list, you’ll see a Developer options menu item
Tap on it and, in the list that comes up, turn on the `USB debugging` toggle.
Connect your phone to your laptop and an alert will appear asking if you’d like to allow the computer to debug the device. Tap yes.
2. Create a Unity app
Install the latest version of the Unity IDE including the Android plugin.
Create a quick new project (just put a cube in front of the camera so that you have something to see when the app launches)
From the Unity IDE toolbar, click on File -> Build Settings
Unity probably defaulted to a Mac/Windows export. From the platform list, click on Android and then click Switch Platform. This might take a couple of minutes as Unity converts the project.
When done, tap on Player Settings. In the settings that appear, tap the Android tab, and under Other Settings, select the Virtual Reality Supported option and enable Oculus.
3. Enable your Android device to run Oculus apps
Generate an osig (Oculus Signature) file for your phone from Oculus’ online osig generator, which needs your phone’s Device ID. Save the osig file in your project’s Plugins/Android/assets folder (under Assets). This will allow Unity to package the file into the generated apk.
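The Device ID that the osig generator asks for is the serial shown by `adb devices` when your phone is connected. As a quick sketch (the serial below is a made-up example, not a real device), the output can be parsed like this:

```python
# Parse `adb devices` output to pull out device serials (Device IDs).
# The sample output here is hypothetical; real output comes from
# running `adb devices` with a phone connected and USB debugging on.
sample = """List of devices attached
ce0916091234567890\tdevice"""

def device_ids(adb_output):
    """Return the serials of all devices in the authorized 'device' state."""
    ids = []
    for line in adb_output.splitlines()[1:]:  # skip the header line
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            ids.append(parts[0])
    return ids

print(device_ids(sample))  # ['ce0916091234567890']
```

If your phone shows up as `unauthorized` instead of `device`, you still need to accept the debugging prompt from the previous step.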
4. Build and Run
Click the Build and Run item in the Unity toolbar. Unity will pack and deploy the app to your phone.
You’ll probably get an alert saying “The app cannot be launched because there is no Launcher Activity in the app.” The app will still be deployed to the device, though.
Back on the phone, go to Settings > Application Manager and find an app called “Gear VR Service”. Select it.
Tap Storage and then Manage Storage
Tap the VR Service Version label multiple times till you get a notification saying “You are now a developer”. Two other options will appear below: Developer Mode and Add Icon to app list.
Tap on Add Icon to app list. This will add an icon to the phone’s app list (where you find all your other apps).
Tap the icon in the apps list. The launching activity will list all your available VR apps. Find your app and tap on it. You’ll get a screen instructing you to insert the phone into the GearVR; inserting it will launch your app 🙌
Every couple of months I meet a few friends over lunch to geek out over the latest in the world of Bitcoin, blockchains, and cryptocurrencies in general. Just so that I don’t forget them, here is a list of things we discussed today 🙂
Yesterday Jack Zankowski (who leads next-gen UX at Comcast) and I gave a talk on the design and engineering challenges of building VR experiences for TV content at the WICT Tech It Out event at Villanova University. While there, we also got to check out their pretty interesting VR cave.
The talk is based on a VR prototype we demoed recently at the Cable Show and the Code Conference. Personally, it’s been a very educational experience. In a way, working with Unity is like working with Flash all over again, with similar challenges (managing visual assets, code architecture, and working in a team with varying skillsets, from design to development). Hopefully I’ll do some more write-ups here on those challenges, but for now, the deck from the event is embedded below.
Quite a few years back, I got really interested in treemaps. The whole project started as an academic discussion between a friend and me about how hard a treemap would be to build (they seemed to be a pretty popular data visualization method back then, though I don’t see them around much these days). Anyway, what I thought would be a trivial weekend hack turned out to be a lot more involved, and I ended up reading and implementing Mark Bruls, Kees Huizing, and Jarke J. van Wijk’s algorithm for squarified treemaps (in ActionScript 3; you can find all the code, which I open sourced, here).
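The core of that algorithm is a greedy loop: keep adding areas to the current row as long as the row’s worst aspect ratio improves, then lay the row along the shorter side of the remaining rectangle. A minimal Python sketch of the idea (my AS3 version was structured differently; this assumes the areas are sorted descending and sum to the rectangle’s area):

```python
# Sketch of the squarified treemap layout (Bruls, Huizing & van Wijk):
# greedily grow a row while its worst aspect ratio does not degrade,
# then lay the row along the shorter side of the remaining rectangle.
# Assumes `areas` are sorted in descending order and sum to w * h.

def worst(row, side):
    """Worst aspect ratio if `row` is laid out along a side of length `side`."""
    s = sum(row)
    return max((side * side * max(row)) / (s * s),
               (s * s) / (side * side * min(row)))

def squarify(areas, x, y, w, h):
    """Return one (x, y, w, h) rectangle per area, in input order."""
    rects = []
    areas = list(areas)
    while areas:
        side = min(w, h)
        row = [areas.pop(0)]
        # Add the next area only if it doesn't worsen the row's aspect ratios
        while areas and worst(row + [areas[0]], side) <= worst(row, side):
            row.append(areas.pop(0))
        total = sum(row)
        if w >= h:  # lay the row vertically along the left edge
            rw = total / h
            ry = y
            for a in row:
                rects.append((x, ry, rw, a / rw))
                ry += a / rw
            x, w = x + rw, w - rw
        else:       # lay the row horizontally along the top edge
            rh = total / w
            rx = x
            for a in row:
                rects.append((rx, y, a / rh, rh))
                rx += a / rh
            y, h = y + rh, h - rh
    return rects
```

Running it on the paper’s worked example, `squarify([6, 6, 4, 3, 2, 2, 1], 0, 0, 6, 4)`, starts the layout with two 3×2 squares on the left, just as in the figures in the paper.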
To demo the algorithm, I borrowed the visual aesthetics of Marumushi’s NewsMap but used it to show trending topics on Digg.com instead. The project, not so creatively named DiggGraphr, got fairly popular, was mentioned on a few data visualization blogs, and even won an award in a Digg.com API contest.
DiggGraphr has been dead for a while now, but I was pleasantly surprised to see it included in a research paper titled ‘A case study on news services using big data analytics (뉴스빅데이터 서비스 사례 및 모델 개발 연구)’, conducted by professor Kim of the Korea Aerospace University and three researchers from a non-profit research organization (the Media and Future Institute).
If any of you can read Korean, feel free to read the paper here.
At last week’s AndroidPhilly event, I was surprised to find a lock screen notification for a “Physical Web Page” on my phone.
Tapping on that notification led to a page explaining Physical Web pages and then to a link to Nick Dipatri’s BLE geo-fencing app.
This was the first time I had seen Physical Web pages in action, though Google has talked about them for a while. Not a lot of people talk about iOS’s iBeacons anymore (compared to the rage they were when announced), and the Physical Web approach is different, with Google Chrome acting as the receiver app that detects the beacons and notifies the user. This is great for developers, who don’t have to worry about getting their app installed, but it also means users won’t be able to skip notifications from services they don’t care for. Bundling the beacon technology within Chrome also means that Google’s approach is more cross-platform and will work on multiple devices.
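Under the hood, a Physical Web beacon just broadcasts a compressed URL over Bluetooth Low Energy using the Eddystone-URL frame format. Here is a rough sketch of that encoding based on my reading of the public Eddystone spec (the `tx_power` default below is an arbitrary example value, not anything prescribed):

```python
# Sketch of encoding a URL into an Eddystone-URL beacon frame, the
# format Physical Web beacons broadcast: a frame-type byte (0x10),
# a calibrated TX power byte, a scheme-prefix byte, then the URL with
# common suffixes compressed into single bytes.
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
              ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06,
              ".com": 0x07, ".org": 0x08, ".edu": 0x09, ".net": 0x0A,
              ".info": 0x0B, ".biz": 0x0C, ".gov": 0x0D}

def eddystone_url_frame(url, tx_power=-20):
    """Build an Eddystone-URL frame body for `url` (ASCII URLs only)."""
    # Longest-matching scheme prefix becomes a single byte
    scheme = max((s for s in SCHEMES if url.startswith(s)), key=len)
    body = url[len(scheme):]
    encoded = bytearray([0x10, tx_power & 0xFF, SCHEMES[scheme]])
    i = 0
    while i < len(body):
        # Compress known suffixes (dict order tries ".com/" before ".com")
        for suffix, code in EXPANSIONS.items():
            if body.startswith(suffix, i):
                encoded.append(code)
                i += len(suffix)
                break
        else:
            encoded.append(ord(body[i]))
            i += 1
    return bytes(encoded)
```

For example, `eddystone_url_frame("http://example.com/")` compresses the trailing `.com/` into a single byte, which is how fairly long URLs fit into a tiny BLE advertisement packet.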
It’ll be interesting to see how this evolves in the next few years.