AI systems today present not only new technical challenges but ethical ones as well. The one I have seen mentioned most often involves a self-driving car that is about to crash and must choose between hitting children on the street, hitting a pedestrian on the footpath, or hitting a wall and killing the occupant. As the MIT Technology Review article titled “Why Self-Driving Cars Must Be Programmed to Kill” phrases it:
How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?
It’s hard to imagine anyone wanting to tackle this problem, since it has no good solutions. But not making a decision here would itself be a decision.
At SXSW today, though, I saw an interesting presentation that sparked an idea. I attended a session titled “Humans and Robots in a Free-for-All Discussion” in which two robots discussed different ideas with each other and with a human. A video of the session is embedded below:
The idea of robots talking to each other had a brief earlier moment of internet popularity, when two bots on hacked Google Home devices chatted with each other in a conversation live-streamed on Twitch.
What is interesting is that the bots were programmed with only facts and allowed to come to their own conclusions. The photo from the presentation below shows how the system took in bare facts and then, by weighing supporting or negating statements, could reach a conclusion by itself.
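The mechanism described above could be sketched roughly as follows. This is a minimal, hypothetical illustration of the general idea, not the actual system from the session: every name, stance value, and weight here is an assumption made for the sake of the example.

```python
# Hypothetical sketch: a conclusion is evaluated by summing the influence
# of statements that support it (+1) or negate it (-1), each with a weight
# reflecting how strongly it bears on the conclusion. All values invented.

def evaluate(conclusion, statements):
    """Score a conclusion from (stance, weight) pairs.

    stance is +1 for a supporting statement, -1 for a negating one;
    weight is a positive number indicating the statement's strength.
    Returns the conclusion, a verdict, and the net score.
    """
    score = sum(stance * weight for stance, weight in statements)
    verdict = "accepted" if score > 0 else "rejected"
    return conclusion, verdict, score

# Illustrative run: two supporting facts outweigh one negating fact.
statements = [
    (+1, 0.6),  # a fact that supports the conclusion
    (+1, 0.3),  # a weaker supporting fact
    (-1, 0.4),  # a fact that negates it
]
print(evaluate("the conclusion", statements))
```

The point of the sketch is only that no verdict is hand-coded: the output is entirely a function of the facts and weights the system is given, which is what made the presentation interesting.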
The idea is intriguing. Could this be how cars will learn ethics? No human would ever verbally put a price on a human life, yet through our actions many of us do so all the time.
Could ethics in AI be not something we code, but something we allow to emerge from the facts we train the model on?