Self-driving cars are on the way (Mayor Pam Hemminger even mentioned their possible arrival in Chapel Hill during an interview with this board last year), and for the most part, they are a good thing. They promise to make driving far safer, commute times shorter and traffic less punishing. Yet driverless cars present one glaring question to the public: Who decides what happens when an accident involving a self-driving car, rare as such cases may be, is unavoidable?
As much as it’s become a meme, the classic trolley problem provides a perfect example of this dilemma. What happens when a person walks out onto the interstate, and the only way for the car to avoid hitting them is to swerve into the median, probably killing its passenger?
Patrick Lin, writing in The Atlantic, puts it this way: “If you complain here that robot cars would probably never be in the trolley scenario — that the odds of having to make such a decision are minuscule and not worth discussing — then you’re missing the point.”
At some point in the development of one of these vehicles, a programming team will have to account for this ever-so-rare possibility and decide, long before any crash occurs, how the self-driving car’s system will react. Unlike a human driver’s split-second reflex, that choice is made deliberately and in advance, which is exactly why the question of who makes it matters.
Should we entrust the federal government with regulating this decision-making process, and in doing so risk the additional time needed to establish and staff an entirely new agency within the Department of Transportation? The question is already urgent: some American cities, including Austin, Nashville and San Francisco, have self-driving cars on their roads.