Besides the practical issues of developing sensors and maps, a recent WIRED article raises an interesting and perhaps under-examined aspect of self-driving cars: what kind of ethical decisions will they make when something goes wrong on the road? (And, trust us, something inevitably will.)
So what happens when an autonomous vehicle faces a dangerous situation on the road? Who decides? How does it decide? Will it make an ethical decision (sacrifice the occupant to save a crowd of pedestrians) or a selfish one (kill the pedestrians to save the occupant)? (We are already getting ahead of ourselves here, assuming the autonomous vehicle will be an adherent of J.S. Mill's utilitarianism.)
The brief answer, so far, is that the autonomous vehicle's decision-making system will work much like Google's other systems: big data capabilities discern patterns in historical driving and crowd behaviour, learn from them, and choose the best possible outcome. And in the first place, autonomous vehicles should reduce the frequency of such ethical dilemmas, since they don't get distracted, don't drink, and don't text while driving, very much unlike humans.
In two words: machine learning, which will be far more effective than human learning.
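To make that abstract description a little more concrete, here is a deliberately toy sketch in Python of the utilitarian framing described above: score candidate maneuvers against outcome probabilities estimated from historical driving data, then pick the least harmful one. To be clear, this reflects no real vendor's system; the names (`Maneuver`, `expected_harm`, `choose_maneuver`) and the probability figures are purely illustrative assumptions.

```python
# Toy illustration only: a utilitarian "least expected harm" chooser.
# All names and numbers are hypothetical, not any real AV system's API.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    # Injury probabilities assumed to come from models trained offline
    # on historical driving data.
    p_occupant_injury: float
    p_pedestrian_injury: float


def expected_harm(m: Maneuver,
                  occupant_weight: float = 1.0,
                  pedestrian_weight: float = 1.0) -> float:
    """Crude utilitarian cost: weighted sum of injury probabilities."""
    return (occupant_weight * m.p_occupant_injury
            + pedestrian_weight * m.p_pedestrian_injury)


def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
    """Pick the candidate maneuver with the lowest expected harm."""
    return min(candidates, key=expected_harm)


if __name__ == "__main__":
    options = [
        Maneuver("brake hard", p_occupant_injury=0.10, p_pedestrian_injury=0.05),
        Maneuver("swerve left", p_occupant_injury=0.30, p_pedestrian_injury=0.01),
        Maneuver("stay course", p_occupant_injury=0.02, p_pedestrian_injury=0.60),
    ]
    print(f"Chosen maneuver: {choose_maneuver(options).name}")
```

Even this tiny sketch exposes the ethical question in the weights: set `occupant_weight` higher than `pedestrian_weight` and the car becomes "selfish"; weight everyone equally and it becomes the utilitarian the article imagines.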
We have to say that this does seem to give Google, with its advanced search algorithms and tremendous big data capabilities, a big edge in the autonomous car race. It is also worth noting that Google's DeepMind built AlphaGo, the Go-playing AI that defeated top South Korean professional Lee Sedol at a game it essentially taught itself, mining records of human play and playing against itself until it could defeat its creators not long after.
Ultimately, the questions we have posed are hypothetical, but nevertheless important: they highlight the plethora of issues that pioneering developers of the autonomous car will have to grapple with. They will have their hands full imagining possible black swan events and developing contingencies, beyond just working on a product.