I recently had the pleasure of working with Matthieu Lecce and Danelle Ross on an effort to use computer vision to estimate the volume of transparent liquid in a glass container. The project built on previous work by Matthieu and the research team on seeing glassware (more information on that research can be found here). Technically Philly also did a great write-up on their site covering the final day of the project.
The program was really intense for the teachers, and major props to Danelle for completing it and seeing such an intense learning experience through. My own contribution was fairly limited: I spent a couple of hours a week with the team, talking about how groups like Comcast Innovation Labs (where I work) investigate such technology and how machine vision is being used in Virtual and Augmented Reality, a domain I am currently very involved in at the lab.
More than anything though, it was a great learning experience for me. I learned a lot about how machine vision problems are approached (my only real prior experience with raw machine vision had been some OpenCV experiments in Processing and a few half-finished Android projects using face detection). Getting a grounding in the core concepts and the current state of the art in the field was well worth the time.
For the last 8 months, I have been working on VR prototypes for Comcast, one of which we showed off recently at the Cable Show and have been talking about at different conferences (you can see the deck here).
We recently started adding some new faces to my team, so I figured it might be a good idea to put down a quick how-to on building GearVR apps. A lot of this information is scattered across different links on the internet, but this should be useful for going from zero to a quick “Hello World” app using Unity.
1. Setting up your phone for Android builds
Since GearVR apps are Android apps, the standard Android development setup is required:
Open the Settings app on your Samsung phone and go to About Phone.
Tap the “Build Number” item in the list repeatedly until you see a “You are now a developer” message.
Back in the main Settings list, you’ll now see a Developer Options menu item.
Tap on it and, in the list that comes up, turn on the `USB debugging` toggle.
Connect your phone to your computer; an alert will appear asking whether you’d like to allow the computer to debug over USB. Tap yes.
2. Create a Unity app
Install the latest version of the Unity IDE, including the Android build support component.
Create a quick new project (just put a cube in front of the camera so that you have something to see when the app launches).
From the Unity menu bar, click File -> Build Settings.
Unity probably defaulted you to a Mac/Windows build. In the platform list, select Android, then click Switch Platform. This might take a couple of minutes as Unity converts the project.
When that’s done, click Player Settings. In the settings that appear, select the Android tab and, under Other Settings, check the Virtual Reality Supported option and add Oculus to the list of supported SDKs.
3. Enable your Android device to run Oculus apps
Oculus apps only run on provisioned devices, so generate an Oculus Signature (osig) file for your phone’s device ID from the Oculus developer site.
Save the osig file in your project under the Assets > Plugins folder. This lets Unity package the file into the generated apk.
4. Build and Run
Click the Build and Run item in Unity’s File menu. Unity will package the app and deploy it to your phone.
You’ll probably get an alert saying “The app cannot be launched because there is no Launcher Activity in the app.” The app will still be deployed to the device.
Back on the phone, go to Settings > Application Manager and find an app called “Gear VR Service”. Select it.
Tap Storage and then Manage Storage.
Tap the VR Service Version label multiple times until you get a notification saying “You are now a developer”. Two new options will appear below: Developer Mode and Add Icon to app list.
Tap Add Icon to app list. This adds an icon to the phone’s app list (where you find all your other apps).
Tap the icon in the apps list. The launcher activity will list all your available VR apps. Find your app and tap on it. You’ll get a screen instructing you to insert the phone into the GearVR; inserting it will launch your app 🙌
Every couple of months I meet a few friends over lunch to geek out over the latest in the world of Bitcoin, blockchains and cryptocurrencies in general. Just so that I don’t forget them, here is a list of things we discussed today 🙂
Yesterday Jack Zankowski (who leads the next-gen UX at Comcast) and I gave a talk on the design and engineering challenges of building VR experiences for TV content at the WICT Tech It Out event at Villanova University. While there, we were also able to check out their pretty interesting VR cave.
The talk is based on a VR prototype we demoed recently at the Cable Show and the Code Conference. Personally, it’s been a very educational experience. In a way, working with Unity is like working with Flash all over again, with similar challenges (managing visual assets, code architecture, working in a team with varied skill sets, from design to development). Hopefully I’ll do some more write-ups here on those challenges. But for now, the deck from the event is embedded below.
Quite a few years back, I got really interested in treemaps. The whole project started as an academic discussion between a friend and me about how hard a treemap would be to build (they seemed to be a pretty popular data visualization method back then, though I don’t see them around much these days). Anyway, what I thought would be a trivial weekend hack turned out to be a lot more involved, and I ended up reading and implementing Mark Bruls, Kees Huizing, and Jarke J. van Wijk’s squarified treemap algorithm (in ActionScript 3; you can find all the code, which I open sourced, here).
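For anyone curious, the core of the squarified algorithm is surprisingly small: greedily add items to the current row as long as the row’s worst aspect ratio keeps improving, then fix the row along the shorter side of the remaining space and repeat. Here is a minimal Python sketch of that idea (not my original ActionScript 3 code, and illustrative rather than optimized):

```python
def worst(row, side):
    """Worst (largest) aspect ratio if the areas in `row` are laid along `side`."""
    s = sum(row)
    return max(max(side * side * a / (s * s), s * s / (side * side * a)) for a in row)

def squarify(areas, x, y, w, h):
    """Squarified treemap layout (after Bruls, Huizing & van Wijk).
    `areas` must be sorted in descending order and sum to w * h.
    Returns one (x, y, width, height) rectangle per area."""
    rects = []
    areas = list(areas)
    while areas:
        side = min(w, h)
        row = [areas.pop(0)]
        # Greedily grow the row while the worst aspect ratio improves (or ties).
        while areas and worst(row + [areas[0]], side) <= worst(row, side):
            row.append(areas.pop(0))
        thickness = sum(row) / side
        offset = 0.0
        for a in row:
            length = a / thickness
            if w >= h:  # lay the row as a vertical strip on the left
                rects.append((x, y + offset, thickness, length))
            else:       # lay the row as a horizontal strip on top
                rects.append((x + offset, y, length, thickness))
            offset += length
        # Shrink the remaining free space by the row's thickness.
        if w >= h:
            x, w = x + thickness, w - thickness
        else:
            y, h = y + thickness, h - thickness
    return rects
```

For example, laying out areas `[6, 6, 4, 3, 2, 2, 1]` in a 6 × 4 rectangle starts with two 3 × 2 rectangles stacked on the left, and every rectangle stays close to square.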
To demo the algorithm, I took the visual aesthetics of Marumushi’s NewsMap but instead used it to show trending topics on Digg.com. The project, not so creatively named DiggGraphr, got fairly popular: it was mentioned on a few data visualization blogs and even won an award in a Digg.com API contest.
DiggGraphr has been dead for a while now, but I was pleasantly surprised to see it included in a research paper titled ‘A case study on news services using big data analytics (뉴스빅데이터 서비스 사례 및 모델 개발 연구)’, conducted by Professor Kim of Korea Aerospace University and three researchers from a non-profit research organization (Media and Future Institute).
If any of you can read Korean, feel free to read the paper here.
At last week’s AndroidPhilly event, I was surprised to find a lock-screen notification for a “Physical Web Page” on my phone.
Tapping on that notification led to a page explaining the Physical Web and then a link to Nick Dipatri’s BLE geo-fencing app.
This was the first time I had seen Physical Web pages in action, though Google has talked about them for a while. While not a lot of people talk about iOS’s iBeacons much anymore (compared to the rage they were when announced), the Physical Web takes a different approach, with Google Chrome acting as the receiver app that detects the beacons and notifies the user. This is great for developers, who don’t have to worry about getting their app installed, but it also means users won’t be able to skip notifications from services they don’t care for. Bundling the beacon technology within Chrome also means that Google’s approach is more cross-platform and will work on multiple devices.
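As a side note on how this works under the hood: a Physical Web beacon broadcasts a compressed URL over Bluetooth Low Energy using the open Eddystone-URL frame format (the successor to Google’s original UriBeacon), and the receiver expands it back into a full URL. Common scheme prefixes and suffixes like ".com" are packed into single bytes to fit the tiny advertisement payload. A rough, simplified Python sketch of the decoding step (not Chrome’s actual implementation):

```python
# Single-byte codes from the Eddystone-URL spec.
SCHEMES = ["http://www.", "https://www.", "http://", "https://"]
EXPANSIONS = [".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
              ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"]

def decode_eddystone_url(frame: bytes) -> str:
    """Decode the service-data payload of an Eddystone-URL frame."""
    if frame[0] != 0x10:  # 0x10 marks the URL frame type
        raise ValueError("not an Eddystone-URL frame")
    # frame[1] is the calibrated TX power (used for ranging, ignored here);
    # frame[2] is the URL scheme prefix, and the encoded URL follows.
    url = SCHEMES[frame[2]]
    for b in frame[3:]:
        url += EXPANSIONS[b] if b < len(EXPANSIONS) else chr(b)
    return url

print(decode_eddystone_url(b"\x10\xeb\x03example\x07"))  # https://example.com
```

Because the whole URL has to fit in roughly 17 bytes, most real Physical Web beacons broadcast a shortened URL (e.g. a goo.gl link) that redirects to the actual page.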
It’ll be interesting to see how this evolves in the next few years.
I have been reading a lot of interesting things on robotics and AI lately.
Boston Dynamics’ new Atlas 2 robot video is fascinating to watch. While the robot’s ability to balance itself and track items is impressive, almost every person I showed the video to felt sorry for the robot, which keeps getting shoved, kicked, and having its box pushed away. Empathy towards a machine is really a fascinating thing to observe.
I just finished the book “Machines of Loving Grace”. While it doesn’t offer any unique answers to the question of the human-machine balance in the coming years, it is full of fascinating information on the history of AI and robotics. Definitely worth a read.