Using ActiveStorage with DropZone in a React app

For the last year or so I have been working on an app for teaching programming that I hope to release in a few weeks. The CMS for the app is a React-based web app powered by a Rails server. The Rails server is running in API-only mode, which has been working out reasonably well.

One of the recent additions to Rails has been the ActiveStorage system, which purports to make attaching media to Rails models really simple. Unfortunately, most of the documentation around it also involves Rails' View tier, which the API-only mode strips out. In fact, there is an open bug on ActiveStorage to make it work out of the box in API-only projects.

Oh, and one more thing: ActiveStorage doesn't support uploading to multiple buckets, just in case that's a dealbreaker for you.

After a couple of days of struggling through it, I finally have it working in API-only mode using the React Dropzone Component instead of a boring file picker.

Also note, there is an open source React component for interacting with ActiveStorage which I did not try, mostly because I was already halfway done with the Dropzone implementation by the time I discovered it.

Issue #1: ActiveStorage CSRF errors in API-only mode

ActiveStorage ships with a JavaScript library that does a lot of the work, but one of the issues with using ActiveStorage from a React app is that it tries to do CSRF token validation on the requests and fails. To solve it, I added an initializer in the initializers folder that skips CSRF checks for it:

# config/initializers — skip CSRF checks for ActiveStorage's direct uploads controller
ActiveStorage::DirectUploadsController.instance_eval { skip_forgery_protection }

Issue #2: CORS challenges

So this isn't a bug, but when working on a React app created with Create React App, the React app runs on port 3000 while the Rails server runs on port 3001. To prevent CORS issues when uploading images, I had to add the Rack::Cors gem and a CORS initializer that allows requests from localhost:3000 during development:

Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'localhost:3000'
    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end

Also, since the React app was using DirectUpload to S3, you have to add the right CORS settings for your S3 bucket as well (this blog post helped me figure out the CORS issue).

Uploading an Image using React and Dropzone:

The core part of the image uploader component is below:

// DirectUpload comes from the activestorage npm package (@rails/activestorage in later versions)
import { DirectUpload } from 'activestorage'

// Rails' built-in endpoint that creates the blob record and returns signed upload instructions
const url = '/rails/active_storage/direct_uploads'

// The third argument is a delegate object that can receive upload progress callbacks
const upload = new DirectUpload(file, url, this)

upload.create((error, blob) => {
  if (error) {
    console.log("Image Error:", error)
  } else {
    // Clear the Dropzone UI and hand the blob up to the parent component
    this.dropzone.removeAllFiles(true)
    this.props.onImageUploaded(blob)
  }
})

The DirectUpload class comes from the ActiveStorage JavaScript library. When this code runs, the image is uploaded to the storage provider and you get back a blob (just a plain JavaScript object) with a bunch of metadata that is saved in the `active_storage_blobs` database table.
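For reference, the blob that comes back looks roughly like this. This is only a sketch of the direct-upload JSON response; the values below are made up and the exact fields can vary by Rails version and storage service:

// Rough shape of the blob returned by DirectUpload (illustrative values)
const blob = {
  id: 42,
  key: 'storage-key-on-s3',          // where the file lives on the storage service
  filename: 'cover.png',
  content_type: 'image/png',
  byte_size: 128394,
  checksum: 'base64-md5-checksum',
  signed_id: 'a-long-signed-token'   // the id you hand back to the Rails API
}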

One of the keys in the object is `signed_id`. To assign the uploaded image as an attachment to a model, the model has to have a `has_one_attached` or a `has_many_attached` field. In my case it's something like:

class Book < ApplicationRecord
  has_many_attached :images
end

 

In my book controller I can then accept a PUT request that updates the record like so:

@book.images.attach(params[:signed_id])

or better yet, just let strong parameters handle it automatically.

Using Symlinked Node libraries with React Native 0.55

I recently updated the React Native app I have been working on for a while from RN 0.47 to 0.55. I'll admit I was a bit cavalier about the update and hadn't really looked at the changelog, but hey, version control does give one a foolish sense of bravado.

Anyway, needless to say, there were issues. As of RN 0.55.4, `setJSMainModuleName` has been renamed to `setJSMainModulePath`, and it took me a bit of sleuthing to figure that out (find the GitHub commit here).

However, a bigger issue came up when I tried to package the app after resolving the compile errors.


Turns out the new Metro packager cannot follow symlinks, like the ones created by `npm link`.

This was a total fail for me, since my app uses local npm modules to hold pieces of common code for the web and mobile clients.

Thankfully, someone did come up with a bit of a hack that generates absolute paths for all symlinked libraries and launches the packager's cli.js with a config file containing the list of those paths.
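The generated config boils down to something like the sketch below. This is only an approximation of what the script produces, and the ../shared-lib path is an illustrative stand-in for whatever package you have symlinked:

// rn-cli.config.js — point the packager at the real locations of symlinked packages
const path = require('path');

// Absolute path to the package that `npm link` would normally resolve through a symlink
const sharedLib = path.resolve(__dirname, '../shared-lib');

module.exports = {
  // Metro won't follow the symlinks itself, so list the resolved folders as extra roots
  getProjectRoots() {
    return [__dirname, sharedLib];
  },
};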

It works for now, but hopefully this bug will get fixed soon.

Building Smart Contracts on the Ethereum Blockchain

Last week’s Comcast Lab Week gave me another opportunity to dig deeper into Blockchains. In my previous writeup on CodeCoin, I had used Ethereum to create a bounty system for Github issues. However under the hood, we had cloud servers managing various wallets belonging to the different issues. Smart Contracts offer a better way to handle this.

What Are Smart Contracts?

Smart Contracts are pieces of code that execute on the Blockchain. Think of them as classes in an Object Oriented Programming model. Once deployed, you can invoke methods on the Contract from any wallet on the Blockchain. The method, when called, gets not only the parameters that you explicitly sent as part of the method call but also the caller's wallet address and any value (Ether, or its smaller denomination, Wei) that was sent with the call. The contract can then keep part or all of that value in return for executing the code.

The world’s simplest Smart Contract.
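For illustration, here is a minimal Solidity sketch along the lines of the SimpleStorage example used in the Remix and Truffle docs (not our actual contract):

pragma solidity ^0.4.17;

// Stores a single number and lets anyone read it back — about as simple as a contract gets
contract SimpleStorage {
  uint storedData;

  function set(uint x) public {
    storedData = x;
  }

  function get() public view returns (uint) {
    return storedData;
  }
}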

Note that the above contract is a “hello world” contract. Ours was a bit more complicated.

Tools and Setup

Similar to last time, we used the TestRPC program, now rebranded as Ganache (or specifically its CLI version, ganache-cli), to develop and test the application. The app itself was pretty simple: it allowed users to rent an asset in our store by sending a particular amount of Ether to our smart contract.

Configuring Ganache

In addition to Ganache, we also used the Truffle framework to build the application as well as MetaMask to run the transaction.

The Smart Contract itself was written in Solidity. Even though there are many editor plugins you can add to your favorite editor, I found the cloud-hosted Remix IDE the best way to get started. It's already configured with linters and debug tools that make developing your contract much simpler.

The Truffle framework can be thought of as an equivalent to Ruby on Rails, just for Ethereum projects. When you start a new project, it creates a project structure with folders for contracts, migrations and tests, and a truffle.js configuration file that lets you deploy your code to various dev/test/prod environments (think of truffle.js as a config.yml from Rails).
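A bare-bones truffle.js pointing the project at a local Ganache instance looks something like this sketch (port 8545 is ganache-cli's default; the Ganache GUI defaults to 7545):

// truffle.js — minimal sketch of a development network pointing at local Ganache
module.exports = {
  networks: {
    development: {
      host: '127.0.0.1',
      port: 8545,       // ganache-cli's default port
      network_id: '*',  // match any network id
    },
  },
};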

Once you place your contract in the `contracts/` folder, running `truffle compile` compiles the Solidity code to bytecode that can be deployed to the Ethereum Blockchain. Running `truffle migrate` deploys the contract to the chain. Truffle also provides a handy REPL console that you can use to interact with your contract from the command line. Very convenient for finding your contract's deployed address, for example:

Find the address of the SimpleStorage Contract
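From `truffle console`, that lookup is something like the line below (a sketch assuming the SimpleStorage contract from the screenshot; Truffle's contract abstraction exposes the deployed instance as a promise):

// Inside `truffle console`
SimpleStorage.deployed().then(instance => console.log(instance.address))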

Running the Application

Our client was a small React app that would send a little bit of Ether to the deployed smart contract. If the transaction was successful, we'd send the transaction ID to a middleware server that would validate it and authorize the user to access a piece of content.

The client code communicates with the Blockchain via the Web3 library injected into the browser by MetaMask. The code to use the contract looks something like this:
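Roughly, it boils down to the sketch below. This assumes web3 1.x, and the rent() method and notifyServer() helper are illustrative names, not the actual project code:

// Sketch: send a small payment to the deployed contract via MetaMask's injected web3
const contract = new web3.eth.Contract(SimpleStorageABI, contractAddress)

web3.eth.getAccounts().then(([account]) => {
  contract.methods
    .rent()                                             // illustrative contract method
    .send({ from: account, value: web3.utils.toWei('0.01', 'ether') })
    .on('transactionHash', hash => notifyServer(hash))  // hand the tx id to our middleware
})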

That’s pretty much it.

Random Learnings

  • We had issues with MetaMask and Ganache seeing each other's wallets. This might be by design, but to get anything done I had to write another small server script that funded MetaMask accounts with Ether from Ganache accounts.
  • All transactions are done in Wei (1 Wei = 10^-18 Ether). During development we'd send small values like 10 Wei across and were perplexed that MetaMask and other wallets didn't change their displayed balances, till we realized the number was too small for MetaMask to show in its UI.
  • We assumed a successful transaction on the contract meant a successful payment. We did not wait for the transaction to be mined before declaring the payment successful. We should be waiting on that using Web3's filter API, but we just ran out of time on the project.
  • To connect with a contract using Web3, you need to point it to the contract's address and its JSON interface. Truffle's console has a `toJSON()` method, but that is not the JSON you are looking for. The right JSON file is located in the `/build/contracts/` directory. Once you have that JSON file, you can create the Contract object in Web3 (see the sketch after this list) by using
    var myContract = new web3.eth.Contract(SimpleStorageABI, address)
  • We found an interesting project called OpenZeppelin that seems to be a public repo/tool for often used Contracts. I need to try that next.
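In other words, something like this sketch (the artifact path follows Truffle's default build layout):

// The Truffle build artifact exposes the ABI under the `abi` key
const artifact = require('./build/contracts/SimpleStorage.json')
const myContract = new web3.eth.Contract(artifact.abi, address)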

Final Thoughts

This was a fun 5-day project that demystified a lot of things around Smart Contracts that I wasn't sure about. And as always, it was great to work with a bunch of smart engineers I usually don't get the opportunity to work with otherwise 🙂

Notes on the Indian tech scene: 2018 edition

During my last trip to India (this Jan) I was once again struck by how different the Indian tech scene is from that of the US. In my last trip I talked about the very different mobile behavior in India, but during this trip I was more struck by the beginning of India's transition to a digital economy. India is going through an interesting phase where its leaders, specifically Prime Minister Modi, are pushing a change towards a more digital nation. The road is bumpy, but hopefully it will lessen some of the big problems India faces today.

In my three-ish weeks there, a number of things caught my attention. Here are some notes.

Digital Society

India is at an interesting moment in history with the Prime Minister pushing the nation into the digital age. These initiatives include a digital identity program (Aadhaar), bank accounts for every citizen, a universal digital API for payments mandated for every bank (UPI) etc.

The transition may not be smooth, as evidenced by occasional reports of data breaches and an overzealous broadening of the scope of what Aadhaar was supposed to enable (which is going through a course correction now), but I am optimistic that this can really accelerate digital services in India and arrest the corruption epidemic that plagues the non-digital economy.

Sometime in the next couple of months, I am hoping to dig more into the India Stack which aims to be the platform for the new digital society.

Digital Payments

The other thing that was really interesting to see was the rise of “pay-with-QR-code” options all over the city, once again enabled by the UPI banking API and accelerated by the demonetization event in 2016.

Digital payment companies like PayTM saw huge growth in the last year, and with WhatsApp's recent announcement of adding payments to the app (which Indians are addicted to), there will be a huge shift towards a more cashless economy.

The government is clearly doing everything it can to push the transition, and news reports like the one shown below, announcing that government-controlled services will cost digital payers less, are becoming pretty commonplace.

News report: lower charges for digital payments

There are definitely a lot of pay-with-QR-code options, but I am curious whether systems like this could be abused. For example, I ran into the sign below at a railway station, and while I am sure it's legitimate, it could just as well have been part of a scam where someone pasted these signs up when people weren't looking.

Some of this is prevented by two-factor auth or one-time passwords (OTPs), which are enforced for all digital payments. Every time you make a payment, you get an SMS to confirm the transaction, and it will not go through till you reply to the message.

Uber vs Ola

The availability of Uber and Ola ride-sharing services has also been good to see. Ola, at least for now, does outnumber Uber, though I ended up taking each of them roughly the same number of times. The fact that my Uber app from the US worked without a hitch (given it was connected to my globally valid American Express card) was also great. It's a real convenience in India, where it's easy to travel 100 miles and end up in a place where you don't speak the local language. It's also a relief to get away from the haggling over the price of a ride that used to be the norm earlier.

Uber and Ola do have other challenges in India though, from less accurate GPS data for mapping to local drivers who can't speak the English used in a lot of routing apps (the Forbes article is an interesting read on the local challenges).

Oh, and apparently India is about to launch its own GPS system to address this and other local mapping challenges.

Crime

India has had a lot of spotlight shone on its rampant crime problem, especially against women. While some of these problems are too systemic to really be fixed in a few years and require a huge cultural change, there are a lot of initiatives at play here as well. It's also a warning for startups aiming to launch in India. Personal safety is a given in a lot of societies, which is what allows the sharing economy to work, but applying the same model in a complex country like India can really burn you, as it did Uber, which was banned from Delhi for a while after a rider was raped by a driver there.

Last year, the Delhi police launched the Himmat app to allow women to broadcast an SOS to the police if they feel threatened. The app itself has had limited success and does feel rather poorly thought through (would you really have the time to find and launch an app if you were attacked?), but hopefully it's a work in progress.

Uber has added a series of security features as well, including partnering with the Himmat app, as part of its initiatives to improve rider safety and get unbanned.

I would love to see more startups look into this space. Personal safety for women is a big problem globally, and services like Roar can make a real difference while also carving out a sustainable business for themselves.

Conclusion

It's definitely an interesting time to be a technologist in India. There is a strong drive to grow technology companies and transform the society into a more digital one, and there is no dearth of problems that need to be addressed. It'll be interesting to see India's transformation over the next 5 years as it goes through this phase.

2017 Retrospective

2017: The year in a gif

2017 was an intense year of learning for me. A change of charter late last year for the labs group I work in meant we focused more deeply on core technology, which was exciting as a technologist. This year that list included Unity, WebVR, Blender, React, React Native, Ruby on Rails and Blockchains (specifically Ethereum). Phew!

Oh! And I got recognized by the Philadelphia Business Journal as one of the city's 10 Technical Disruptors to watch.

A big part of this year for me was centered around building Virtual Reality experiences. The first half of the year was focused on building these experiences in Unity, which is a very different environment to work in compared to Xcode or Android Studio (which I was deep into last year) but more reminiscent of my previous work in Flash. I really do enjoy Unity, and this year made me truly appreciate the game development process. My friend and colleague Jack Zankowski (who did most of the design for our earlier VR work) gave a talk on our early VR experiences at a WiCT event early this year.

However, later in the year we started doing a lot more work in WebVR which, though flaky at times and with platform-specific eccentricities, was still a much faster way to prototype VR experiences. Using AFrame, ThreeJS and WebGL was a fantastic learning experience, and hopefully I can do more web animation and 3D graphics work, with or without VR, next year.

I gave a talk on building WebVR experiences at PhillyGDG that you can find below.

One thing I didn't see coming was how much time I'd end up spending with Blender this year. I had never worked with 3D modeling tools before, but our VR project needed 3D models, and since I have some experience with illustration and design (I used to work as a freelance illustrator), that task fell to me. In the last 4 months of working with Blender I have gone from god-awful to okay-ish.

Blender work in progress

Another project I was very involved with was an internal knowledge portal for our team that we built with ReactJS and Express. Having never done React before this year, that was educational as well, and I completely fell in love with it (even given its weird license, though hopefully that's starting to change).

The project also made me look deeper into React Native as a platform for mobile experiences. Late last year I had started an app (more on that later) that needed a CMS and a native mobile client, and I gave a talk on that at Modev DC and at React Philly.

I built the CMS in Rails, being most familiar with that, though that wasn't saying much as of last year. This year, I definitely feel I have leveled up my Rails game a bit. Perfect timing, as most Rails devs I know are moving to Elixir/Phoenix or Node/Express 😅

A lot of time this year was also spent giving talks on Blockchains to various internal and external groups. Turns out I needn't have bothered, since crypto-mania swept the US this year and now EVERYONE is talking about Blockchains. But I did get to work on one project with it, so that was cool.

That about covers the tech news in 2017, and there are already a few interesting projects in the hopper for 2018. Stay tuned 🙂

Building CodeCoin: A Blockchain DApp prototype

If you know me, there is a good chance that you know how 👍 I am about Blockchain and Decentralized apps. I have given a few talks on it but till recently these were mostly either focused on Bitcoin or on the academics of Blockchain technology. At a recent Comcast Labweek, I was finally able to get my hands dirty with building a Blockchain based decentralized app (DApp) on Ethereum.

Labweek is a week-long hackathon at the T&P org in Comcast that lets people work on pretty much anything. I was pretty fortunate to end up working with a bunch of really smart engineers here. The problem we decided to look into was the challenge of funding open source projects. I am pretty passionate about open source technologies, but I have seen great ideas die on Github because supporting a project when you aren't getting paid for it is really hard. Our solution to this problem was a bounty system for Github issues that we called CodeCoin.

The way CodeCoin worked was as follows:

  • A project using CodeCoin would sign up on our site and download some Git hooks.
  • When anyone creates an issue on Github, we create an Ethereum wallet for the issue and post the wallet address back to Github so it's the first comment on the issue.
  • We use a Chrome extension that adds a “Fund this issue” button on the Github page that starts the Ethereum payment flow.
  • To actually handle the payment, we require MetaMask, which we can trigger using its JavaScript API (see the sketch below).
  • Ether is held in the wallet till the issue is marked resolved and merged into master. At this time another Git hook fires that tells our server to release the Ether into the wallets of all the developers who worked on the issue.
Issue page design. Most of the UI changes came from a custom Chrome extension
Application Flow
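The extension's payment trigger boiled down to something like the sketch below, using the web3 0.2x API that MetaMask injected at the time; issueWalletAddress and the amount pledged are illustrative:

// Ask MetaMask to send the pledged Ether to the issue's wallet
web3.eth.sendTransaction({
  from: web3.eth.accounts[0],       // the funder's MetaMask account
  to: issueWalletAddress,           // the wallet address posted as the issue's first comment
  value: web3.toWei(0.1, 'ether'),  // amount pledged to the bounty
}, (err, txHash) => {
  if (err) return console.error('Funding failed:', err)
  console.log('Bounty funded, tx:', txHash)
})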

Note that while we held the Ether on our side in wallets, the right way to do this would have been to use a Smart Contract. We started down that route but since most of the code was done in like 2 days (while juggling other real projects), wallets seemed like the easier route.

Releasing money into developer accounts was also a hack. Since developers don't sign up to Github with any digital wallet address, we needed the wallet addresses to be part of the final commit message. This could maybe be done with a lookup on a service like Keybase.IO, and with more time we would have tried integrating that into our prototype. In fact, it was only the next week that I heard about their own Git offering. I haven't read enough about that yet though.

Development notes:

  • For local development, we used the TestRPC library to run an Ethereum chain simulation on our machines.
  • We used web3js, the Ethereum JavaScript API, for doing most of the actual transactions.
  • Web3js was injected into the browser by the MetaMask extension. There were some challenges getting MetaMask to talk to TestRPC. Basically, you had to make sure that you initialized MetaMask with the same seed words as you used for your account on TestRPC (which makes sense), but there isn't a way, afaik, to change that information in MetaMask. Early on, we were restarting TestRPC without configuring the initial accounts, so we'd have to reinstall MetaMask to configure it with the new account. Chalk that up to our own unfamiliarity with the whole setup.
MetaMask transaction
  • We did try to use Solidity to run a smart contract on TestRPC, which worked for the demo apps, but we canned that effort at the last moment as we were running out of time.

All in all, it was a fun couple of days of intense coding and I feel I learnt a lot. Most of all I enjoyed working with a group of really smart peers, most of whom I didn’t know before the project at all. Hopefully we get to do more of that in the future 🙂


Notes from Oculus Connect 4

I had a great time last week attending Oculus Connect 4. Just like last year, the keynotes were really interesting and the sessions pretty informative. Here are some quick thoughts on the whole event:

Oculus Go and Santa Cruz

Oculus announced two new self-contained headsets: the Go, an inexpensive ($199) 3DoF headset coming early next year, and, much later, Project Santa Cruz, a 6DoF headset with inside-out tracking. What's interesting is that both these devices will run mobile CPU/GPUs, which means that 3 of the 4 VR headsets released by Oculus will have mobile processing power. If you are a VR developer, you had better be optimizing your code to run on low-horsepower devices, not beefy gaming machines.

Oculus Go

Both Go and Santa Cruz run a fork of Android.

The move to inexpensive hardware makes sense, since Oculus has declared it their goal to bring 1 billion people into VR (no time frame was given 😉).

Oculus Dash and new Home Experience

The older Oculus Home experience is also going away in favor of the new Dash dashboard that you'll be able to bring up within any application. Additionally, you'll be able to pin certain screens from Dash-enabled applications (which, based on John Carmack's talk, seem to be just Android APKs). There could be an interesting rise of apps dedicated to this experience, kinda like Dashboard widgets for the Mac when that was a thing.

Oculus Dash

The removal of the app launcher from Oculus Home means Home now becomes a personal space that you can modify with props and environments to your liking. It looks beautiful, though not very useful. Hopefully it lasts longer than PlayStation's Home.

New Oculus Home (pic from TechCrunch.com)

New Avatars

The Oculus Avatars have also undergone a change. They no longer have the weird mono-color, wax-doll look but actually look more human, with full color. This was also done to allow for custom props and costumes that you'll be able to dress your avatar in down the road (go Capitalism 😉).

New Avatars (Pic from VentureBeat.com)

Another change is that the new Avatars have eyes with pupils! The previous ones with pupil-less eyes creeped me out. The eyes have also been coded to follow things happening in the scene to make them feel more real.

Oh and finally, the Avatar SDK is going to go cross-platform, which means if you use the Avatars in your app, you'll be able to use them on other VR platforms as well, like Vive and DayDream.

More Video

Oculus has been talking quite a bit lately about how video is a huge use case for VR. A majority of VR use seems to be in video applications, though no detail on that was given. For example, apps like BigScreen that let you stream your PC can't really be classified as either video or game, since who knows what's being streamed. Also, since the actual number of VR sessions wasn't shared, it's hard to tell whether the video session count is a lot or not.

Either way, one of the big things that Carmack is working on is a better video experience. Apparently last year their main focus was better text rendering, and now the focus is moving to video. The new video framework no longer uses Google's ExoPlayer and improves the playback experience by syncing audio to a locked video framerate rather than the other way around, as it's done today.

Venues

One of the interesting things announced at Connect was Venues: a social experience for events like concerts, sports, etc. It will be interesting to see how that goes.

Oculus Venues

There were numerous other talks that were interesting, from Lessons from One Year of Facebook Social to analyzing what is working in the app store. All the videos are on their YouTube channel.

Conclusion:

While I was wowed by a lot of the technology presented, it definitely feels like VR has a Crossing the Chasm problem: they have a pretty passionate alpha-user base but are trying really hard to actually pull in the larger, non-gaming-centric audience.


Oculus Go seems like a good way to get the hardware and experience more widely distributed, but what is really needed is that killer app that you just have to try in VR. The technology pieces are right there for the entrepreneur with the right idea.