On Tuesday, Feb. 18, 2020, Mois Navon, the founding engineer of Mobileye, spoke at the kickoff event of the YU Tech Ethics Society, a club founded by Zechariah Rosenthal ’22YC and Elimelekh Perl ’22YC, to explore, in Rosenthal’s words, how “the yeshiva part of our day can solve the world’s problems.”
Navon’s topic, “The Trolley Problem Just Got Digital,” was an excellent choice because it encapsulates, said Navon, a very knotty ethical problem connected to autonomous vehicles (AVs): how will the vehicle decide who lives and who dies when confronted by a life-and-death decision on the road? (For background, see a quick tutorial on the trolley problem, and read about Navon’s earlier lecture at YU on “Innovation, Autonomy and Faith.”)
Navon was quick to point out two key elements. First, AVs are being built to save lives. According to a study cited by Navon from the World Health Organization, 1.25 million people die each year in car accidents (3,400 people per day). Actuarial studies done by insurance companies estimate that widespread adoption of AVs will reduce injuries and deaths by 90%.
Having said that “we’re making AVs to save lives,” he also said that he was not going to tell anyone how to drive or code and that Mobileye, as a company, won’t make these moral decisions: “We’ll program for the customers; we’re not moral ethicists but computer scientists.” In fact, he said, of the five groups he felt might have the authority to make these decisions (coders, tech suppliers, car suppliers, car owners and the government), only the government can take an impartial, third-party stand when deciding these questions, the other four being riddled with conflicts of interest.
Instead, what he wanted to do in his talk was explore how the application of Jewish wisdom might help those who need to decide since, in one way or another, according to Navon, Jewish thought has been dealing with variations of the trolley problem for 3,000 years.
However, after an entertaining excursion through a variety of Jewish texts and commentators, he concluded that this body of wisdom did not offer clear-cut guidance, in part because the commentators and philosophers are focused on human beings making decisions: “When a person gets into a trolley problem, he’s looking at real people in real time, and now he’s got to make a decision about ‘Am I going to kill these people now?’ and he has to make a decision about life and life. That’s very different from the computer, where there’s a person busy writing code that will be put into a system that’s going to get on the road in another two years and who knows when it will ever get into a trolley problem, and then it’s going to have to make a decision.”
For the digital driver, the programmer is “programming the car to save lives because the autonomous vehicle is being made for one reason: to save lives. It has a modus operandi that it does everything it can do to save lives. When the AV is faced with the multiple choice of who is going to live and die, the MO of the whole system is to save as many lives as possible—you are trying to raise the statistical odds of everybody surviving. A human driver is very different than a digital driver, and in the computer’s case, you would save the many at the expense of the individual.”
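The decision rule Navon attributes to the digital driver—pick whichever maneuver raises the statistical odds of everybody surviving—can be sketched as a simple expected-casualty minimizer. This is a hypothetical illustration of that idea only; the function names and risk numbers are invented and do not reflect Mobileye’s actual software:

```python
# Hypothetical sketch: among the maneuvers available to an AV, choose the
# one with the lowest expected number of fatalities. Illustrative only;
# not Mobileye's actual decision logic.

def expected_casualties(maneuver):
    """Sum, over each person affected, the probability that person is killed."""
    return sum(maneuver["fatality_risks"])

def choose_maneuver(maneuvers):
    """Return the maneuver that minimizes expected casualties."""
    return min(maneuvers, key=expected_casualties)

# A stylized trolley-style choice: staying on course endangers three
# pedestrians; swerving endangers one bystander. All risks are made up.
options = [
    {"name": "stay",   "fatality_risks": [0.9, 0.9, 0.9]},  # expected: 2.7
    {"name": "swerve", "fatality_risks": [0.8]},            # expected: 0.8
]

best = choose_maneuver(options)
print(best["name"])  # swerve
```

Under this rule the system saves the many at the expense of the individual, exactly the utilitarian modus operandi Navon describes—which is what distinguishes the pre-programmed digital driver from a human reacting in the moment.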
He ended his talk by reiterating the point he made at his opening: AVs are going to save lives, and the way they are programmed ensures that they will encounter far fewer trolley problems than humans encounter in their driving careers. “What we’ve discussed tonight is more of an academic exercise than applied technology. At the end of the day, autonomous vehicles are going to save lives.”