My Experience with UPenn’s Research Experience For Teachers program

For the last few weeks I have been involved with Penn's Research Experience for Teachers (RET) program. The six-week, NSF-funded program paired 10 teachers from local high schools in Philadelphia with research engineers at Penn's General Robotics, Automation, Sensing & Perception (GRASP) laboratory and with industry mentors, in an effort to expose these teachers to advanced engineering concepts that they could take back to their students.

I had the pleasure of working with Matthieu Lecce and Danelle Ross on an effort to use computer vision to estimate the volume of transparent liquid in a glass container. The project built on previous work by Matthieu and the research team on seeing glassware (more information on that research can be found here). Technically Philly also did a great writeup on their site on the final day of the project.

Learning some machine vision at Matthieu's office at the GRASP lab

The program was really intense for the teachers, and major props to Danelle for seeing it through. My own contribution was fairly limited: I spent only a couple of hours a week with the team, explaining how groups like Comcast Innovation Labs (where I work) investigate such technology and how machine vision is being used in Virtual Reality and Augmented Reality, a domain I am currently very involved in at the lab.

More than anything though, it was a great learning experience for me. I learned a lot about how machine vision problems are approached (my only real prior exposure had been some OpenCV experiments in Processing and a few half-finished Android face-detection projects), and getting a feel for the core concepts and the current state of the art in the field was well worth the time.

Sometimes I really miss grad school.


Getting Started with Gear-VR Development


For the last 8 months, I have been working on VR prototypes for Comcast, one of which we showed off recently at the Cable Show and have been talking about at different conferences (you can see the deck here).

We recently started adding some new faces to my team, so I figured it might be a good idea to put down a quick how-to on GearVR apps. A lot of this can be found scattered across different links on the internet, but it might be useful to have the steps to go from zero to a quick "Hello World" app using Unity all in one place.

1. Setting up GearVR for Android builds:

Since VR apps for the GearVR are Android apps, the standard Android development setup is required:

  1. Go to the Settings app on your Samsung phone and open About Phone.
  2. Tap the "Build Number" item repeatedly till you see a "You are now a developer" message.
  3. Back in the main Settings list, you'll now see a Developer options menu item.
  4. Tap on it and, in the list that comes up, turn on the `USB debugging` toggle.
  5. Connect your phone to your laptop; an alert will appear asking if you'd like to allow the computer to debug the device. Tap yes.


2. Create a Unity app

  1. Install the latest version of the Unity IDE, including the Android build support.
  2. Create a quick new project. Just put a cube in front of the camera so that you have something to see when the app launches (a minimal script for this is sketched after this list).
  3. From the Unity menu bar, click on File -> Build Settings.
  4. Unity probably defaulted you to a Mac/Windows export. From the platform list, click on Android and then click on Switch Platform. This might take a couple of minutes as Unity converts the project.
  5. When done, tap on Player Settings. In the settings that appear, select the Android tab and, under Other Settings, check the Virtual Reality Supported option and enable the Oculus option.
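
For step 2, here is a minimal sketch of that "something to see" (the script and names are illustrative, not from any official sample): attach it to any object in an empty scene and it spawns a cube in front of the camera.

```csharp
using UnityEngine;

// Minimal sketch: spawn a cube two meters in front of the main camera
// so there is something to look at when the app launches in VR.
public class HelloCube : MonoBehaviour
{
    void Start()
    {
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        Transform cam = Camera.main.transform;
        cube.transform.position = cam.position + cam.forward * 2f;
        cube.transform.localScale = Vector3.one * 0.5f;
    }
}
```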

3. Enable your Android device to run Oculus apps

  1. Create a profile at developer.oculus.com.
  2. Connect your phone to your computer and use the `adb devices` command on the command line to detect the connected device (it prints a "List of devices attached" header followed by one id per connected device). Copy the device id.
  3. Use the device id at the Oculus osig file generator. Paste the id in the form field and you'll get an osig file.
  4. Save the osig file in your project under the Assets > Plugins > Android > assets folder. This will allow Unity to package the file into the generated apk.

4. Build and Run 

  1. Click Build and Run (File -> Build and Run). Unity will package and deploy the app to your phone.
  2. You'll probably get an alert saying "The app cannot be launched because there is no Launcher Activity in the app", but the app will still be deployed to the device.
  3. Back on the phone, go to Settings > Application Manager and find an app called "Gear VR Service". Select it.
  4. Tap Storage and then Manage Storage.
  5. Tap the VR Service Version label multiple times till you get a notification saying "You are now a developer". Two other options will appear below: Developer Mode and Add Icon to app list.
  6. Tap Add Icon to app list. This will add an icon to the phone's app list (where you find all your other apps).
  7. Tap the icon in the apps list. The launching activity will list all your available VR apps. Find your app and tap on it. You'll get a screen instructing you to insert the phone into the GearVR. Inserting it will launch your app 🙌


Bitcoin Pow Wow 2 notes

Every couple of months I meet a few friends over lunch to geek out over the latest in the world of Bitcoin, blockchains, and cryptocurrencies in general. Just so that I don't forget them, here is a list of the things we discussed today 🙂


Bringing Cable TV to VR (Slides)

Yesterday Jack Zankowski (who leads next-gen UX at Comcast) and I gave a talk on the design and engineering challenges in building VR experiences for TV content at the WICT Tech It Out event at Villanova University. While there, we were also able to check out their pretty interesting VR cave.

The talk is based on a VR prototype we demoed recently at the Cable Show and the Code Conference. Personally, it's been a very educational experience. In a way, working with Unity is like working with Flash all over again, with similar challenges (managing visual assets, code architecture, and working in a team with skillsets ranging from design to development). Hopefully I'll do some more write-ups here on those challenges, but for now, the deck from the event is embedded below.

Thanks to the folks at WICT for having us.


DiggGraphr referenced in technical paper on news services using big data analytics

Quite a few years back, I got really interested in treemaps. The whole project started as an academic discussion between a friend and me on how hard a treemap would be to build (they seemed to be a pretty popular data visualization method back then, though I don't see them around much these days). Anyway, what I thought would be a trivial weekend hack turned out to be a lot more involved, and I ended up reading and implementing Mark Bruls, Kees Huizing, and Jarke J. van Wijk's algorithm for an optimal squarified treemap (in ActionScript 3; I open sourced all the code here).
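
My implementation was in ActionScript 3, but the core idea fits in a short sketch. Here is a rough C# version of the row-building loop (the names and structure are mine for illustration, not from the open-sourced code): items are added to the current row as long as doing so doesn't worsen the row's worst aspect ratio; the row is then fixed in place and the layout continues in the remaining space.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Rough sketch of the squarified treemap layout. Assumes the input
// areas are pre-normalized so they sum to space.W * space.H.
struct Rect { public double X, Y, W, H; }

static class Squarify
{
    // Worst aspect ratio in a row of areas laid along a side of length w.
    static double Worst(List<double> row, double w)
    {
        double s = row.Sum();
        return Math.Max(w * w * row.Max() / (s * s),
                        s * s / (w * w * row.Min()));
    }

    public static List<Rect> Layout(List<double> areas, Rect space)
    {
        var result = new List<Rect>();
        var row = new List<double>();

        foreach (double item in areas.OrderByDescending(a => a))
        {
            double w = Math.Min(space.W, space.H); // shorter side
            row.Add(item);
            if (row.Count > 1)
            {
                var previous = row.Take(row.Count - 1).ToList();
                if (Worst(previous, w) < Worst(row, w))
                {
                    // Adding the item made the row worse: fix the
                    // previous row, start a new one with this item.
                    space = LayoutRow(previous, space, result);
                    row = new List<double> { item };
                }
            }
        }
        if (row.Count > 0) LayoutRow(row, space, result);
        return result;
    }

    // Places one row along the shorter side of the space and
    // returns the space that remains.
    static Rect LayoutRow(List<double> row, Rect space, List<Rect> result)
    {
        double s = row.Sum();
        if (space.W >= space.H) // row becomes a vertical strip on the left
        {
            double stripW = s / space.H, y = space.Y;
            foreach (double a in row)
            {
                result.Add(new Rect { X = space.X, Y = y, W = stripW, H = a / stripW });
                y += a / stripW;
            }
            return new Rect { X = space.X + stripW, Y = space.Y,
                              W = space.W - stripW, H = space.H };
        }
        else // row becomes a horizontal strip along the top
        {
            double stripH = s / space.W, x = space.X;
            foreach (double a in row)
            {
                result.Add(new Rect { X = x, Y = space.Y, W = a / stripH, H = stripH });
                x += a / stripH;
            }
            return new Rect { X = space.X, Y = space.Y + stripH,
                              W = space.W, H = space.H - stripH };
        }
    }
}
```

Feed it the story weights and the display rectangle and you get back one rectangle per story, which is essentially what DiggGraphr drew.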

To demo the algorithm, I took the visual aesthetics of Marumushi's NewsMap but used it to show trending topics on Digg.com instead. The project, not so creatively named DiggGraphr, got fairly popular: it was mentioned on a few data visualization blogs and even won an award in a Digg.com API contest.

DiggGraphr has been dead for a while now, but I was pleasantly surprised to see it included in a research paper titled 'A case study on news services using big data analytics (뉴스빅데이터 서비스 사례 및 모델 개발 연구)', written by Professor Kim of Korea Aerospace University and three researchers from the Media and Future Institute, a non-profit research organization.


If any of you can read Korean, feel free to read the paper here.


The Physical Web Cometh

At last week's AndroidPhilly event, I was surprised to find a lock screen notification for a "Physical Web Page" on my phone.

Tapping on that notification led to a page explaining Physical Web pages, followed by a link to Nick Dipatri's BLE geo-fencing app.


This was the first time I had seen Physical Web pages in action, though Google has talked about them for a while. While not a lot of people talk about iOS's iBeacons much anymore (compared to the rage they were when they were announced), the Physical Web approach is different: Google Chrome acts as the receiver app that detects the beacons and notifies the user. This is great for developers, who don't have to worry about getting their app installed, but it also means users won't be able to skip notifications from services they don't care for. Bundling the beacon technology into Chrome also makes Google's approach more cross-platform, since it works across multiple devices.
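
Under the hood, these beacons broadcast compressed URLs using the Eddystone-URL BLE frame format, which the receiver (Chrome, here) expands back into a full URL. A rough C# sketch of that decoding step, based on the published Eddystone-URL spec (error handling and the reserved byte ranges are omitted):

```csharp
using System;
using System.Text;

// Sketch of decoding an Eddystone-URL frame: the BLE service data
// carries a frame type, TX power, a URL scheme code, and a URL whose
// common substrings are compressed into single expansion bytes.
static class EddystoneUrl
{
    static readonly string[] Schemes =
        { "http://www.", "https://www.", "http://", "https://" };

    static readonly string[] Expansions =
    {
        ".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
        ".com",  ".org",  ".edu",  ".net",  ".info",  ".biz",  ".gov"
    };

    public static string Decode(byte[] serviceData)
    {
        if (serviceData[0] != 0x10) // 0x10 marks a URL frame
            throw new ArgumentException("Not an Eddystone-URL frame");

        // serviceData[1] is the TX power (used for ranging), skipped here.
        var url = new StringBuilder(Schemes[serviceData[2]]);
        for (int i = 3; i < serviceData.Length; i++)
        {
            byte b = serviceData[i];
            if (b < Expansions.Length) url.Append(Expansions[b]);
            else url.Append((char)b);
        }
        return url.ToString();
    }
}
```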

It’ll be interesting to see how this evolves in the next few years.

 

 

The Physical Web Cometh

Reading up on Robotics

I have been reading a lot of interesting things on robotics and AI lately.

  • Boston Dynamics' new Atlas robot video is fascinating to watch. While the robot's ability to balance itself and track items is impressive, almost every person I showed the video to felt sorry for the robot, which keeps getting shoved, kicked, and having its box pushed away. The empathy we extend toward a machine is a really fascinating thing to observe.
  • If you are really interested in Atlas, give DARPA's YouTube channel a look, especially their playlist on the Robotics Challenge, which is pushing robotics forward just as DARPA's autonomous vehicle challenges did for self-driving cars.
  • I just finished the book "Machines of Loving Grace". While it doesn't have any unique answers to the question of human-machine balance in the coming years, the book is full of fascinating information on the history of AI and robotics. Definitely worth a read.
  • On the other hand, I couldn't get too far into "Our Final Invention: Artificial Intelligence and the End of the Human Era". The few chapters that I did read were so speculative and so lacking in real science that I just couldn't finish it.
  • While Google has been in the news quite a bit for its robotics initiatives, Wired ran a really interesting article on Andy Rubin's new AI and robotics incubator, Playground.global, which recently closed a $300M funding round.
  • With the dramatic increase in interest in robotics and AI, IBM wants to reassure everyone, via a series of TV ads, that Watson will not become the next Skynet and take over the world. Here's hoping…
