By Josh Kazdan
Philosophy Talk, a national public radio show hosted by two Stanford professors, presented the ethical dilemmas introduced by self-driving cars last Wednesday in Cubberley Auditorium.
Aired on more than 100 radio stations, Philosophy Talk’s latest edition featured Joshua Greene, a professor of psychology at Harvard University. Greene and the show’s hosts focused on the moral and safety implications of having computers as drivers. Ken Taylor, the show’s co-host, worried about the ramifications of allowing computers to remove moral agency from human drivers.
Currently, approximately 1.3 million people die every year in car accidents. According to the talk’s speakers, even though engineers believe that driving algorithms could eliminate the majority of these deaths, questions remain about how computers will keep people safe. For example: If a car has the opportunity to save 10 individuals by swerving in front of a truck that is barreling towards a school bus filled with children, should the car do so? These are the kinds of questions the Philosophy Talk episode sought to explore.
Another concern that Philosophy Talk discussed is the number of jobs that will become obsolete as a result of automated driving. According to the American Trucking Associations, 3.5 million people are professional truck drivers in the U.S. alone. Automated cars could at least temporarily cause a spike in unemployment, and it is not immediately apparent where new jobs will be created, speakers on the Philosophy Talk episode said.
The show also examined potential benefits of self-driving cars. Matt Hermann – the senior managing director of Ascension Ventures, a firm that invests with the goal of improving health care technology – outlined to The Daily how self-driving cars could promote independent living and mobility for the elderly. According to Hermann, allowing cars to drop off and pick up passengers without drivers could additionally allow people to share cars and reduce waste.
However, Hermann also noted the potential pitfalls of autonomous vehicles. Along with unemployment risks and morally charged decisions, he warned of the damage that cyber attacks on self-driving cars could cause. As technology continues to progress across industries, Hermann emphasized the importance of making careful decisions about technological and ethical priorities.
Currently, 15 states have passed legislation on autonomous vehicles, and governors in three others have issued executive orders on the subject. Forty-one of the 50 states have debated such legislation.
Meanwhile, car-makers continue to push their technologies forward. Elon Musk has announced that the first fully autonomous Tesla will be developed by next year. Uber said that its fleet will be driverless by 2030, and Ford, Toyota and Audi have all declared that they intend to have self-driving vehicles out by 2020.
Regardless of when driverless cars take to the roads, both Hermann and Greene agreed that philosophers and engineers must ask and answer important moral questions about autonomous driving technology in tandem with the development of the cars themselves.
Contact Josh Kazdan at jkazdan ‘at’ stanford.edu.