(This article first appeared in the 2017 Techonomy print and digital Magazine.)
You are driving along when your car’s brakes suddenly fail. If the car swerves to the left, three old men and two elderly women will die. If it veers to the right, it kills a woman doctor, two babies, and a boy and a girl.
Who should die? This question is part of the MIT Media Lab’s “Moral Machine,” a platform for gathering people’s opinions on moral decisions made by machine intelligence, such as self-driving cars. In the coming age of automation and artificial intelligence (AI), such life-and-death decisions, and many other complicated ones, will increasingly be made by machines rather than people.
A lot depends on who determines the value systems for artificial intelligence software. Those values could be carefully and methodically crowd-sourced from society at …