By Chris Peisch
For the first time, Stanford is offering a student-initiated course focused on the ethical implications of artificial intelligence (AI). The product of a joint initiative by the student groups CS + Social Good and the Stanford AI Group, the class teaches students techniques in machine learning and includes guest lectures by Stanford researchers in computer science.
“Students will learn about and apply cutting-edge artificial intelligence techniques to real-world social good spaces (such as healthcare, government, education and environment),” the course website states.
Shubhang Desai ’20, Swetha Revanur ’20 and Karan Singhal ’19 designed the course to highlight the positive impact of socially conscious AI as well as the dangers of potentially biased and insecure AI systems.
Desai said he believes that the course fills a gap in the technical and ethical AI education that Stanford currently offers.
“We wanted to create a course that would educate students not only about AI techniques, but also how they should be applied in a socially conscious way,” added Desai. “The social good motivates exploration into AI, not the other way around. The class is not an AI class, it is a social good class that uses AI to tackle social good problems.”
Though the course required an application, the teaching staff encouraged undergraduate and graduate students across all departments with different levels of computer science experience to enroll.
“This is the most accessible technical AI class offered at Stanford,” Revanur said.
Along with the three undergraduate instructors, course staff Rachel Gardner ’20 and Nidhi Manoj ’19 help coordinate guest lecturers conducting AI research in industry and academia. Upcoming guest lecturers include graduate student Pranav Rajpurkar, whose research applies machine learning to medicine, education and autonomous driving, and Sharad Goel, an assistant professor of management science and engineering whose work uses AI to improve public policy in domains ranging from racial bias to online privacy.
Course instructors Desai and Revanur said that decision-making algorithms can often draw flawed conclusions from biased representation in data. For instance, AI algorithms in Google Photos mistakenly classified Black people as “gorillas” because the data fed to the software did not include enough images of humans with darker complexions.
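The failure mode Desai and Revanur describe can be sketched with a toy example. The dataset and numbers below are entirely hypothetical, but they show how a model that optimizes overall accuracy on skewed data can perform perfectly on the overrepresented group and fail completely on the underrepresented one:

```python
from collections import Counter

# Hypothetical toy dataset: 95% of training labels belong to group "A",
# only 5% to group "B" -- a biased sample of the real population.
train_labels = ["A"] * 950 + ["B"] * 50

# A degenerate "model" that maximizes overall training accuracy by always
# predicting the majority label, a common failure mode on skewed data.
majority = Counter(train_labels).most_common(1)[0][0]

def predict(_features):
    return majority

# Evaluate on a (hypothetical) test set with the same skew.
test = ["A"] * 95 + ["B"] * 5
overall_acc = sum(predict(None) == y for y in test) / len(test)
acc_b = sum(predict(None) == y for y in test if y == "B") / 5

print(overall_acc)  # 0.95 -- looks impressive in aggregate...
print(acc_b)        # 0.0  -- ...but the minority group is always misclassified
```

The aggregate number hides the harm: the model is wrong on every member of the underrepresented group, which is the kind of blind spot the course asks students to watch for.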
According to the teaching staff, while computer scientists may understand the details of an AI system’s inputs and outputs, they do not know exactly how these algorithms make decisions. The nebulous computations inside this algorithmic “black box” can undermine the efficacy of many algorithms if scientists do not approach programming in a socially conscious manner.
CS + Social Good board member Vicki Niu ’18 agreed that it is important to be conscientious about the potential negative social impact of AI.
“We’re constantly balancing a number of factors, including the extent to which we [create] programs that allow students to build socially impactful tech projects versus ones that focus on analyzing the social implications — and oftentimes harms — of technologies currently at play in our world,” she said.
An increase in both computational resources and available data has made AI a powerful decision-making tool. Revanur, Desai and Singhal emphasized that as the power of AI systems continues to increase rapidly, students must be aware of the potential dangers of flawed AI.
“We foresee the rapid automation of many of today’s most common occupations, including drivers, cashiers and many doctors,” the trio wrote in an email to The Daily. “Much of this change will be driven by today’s budding AI technologists, many of whom will come from Stanford.”
The course will be offered again next spring.