Former director of the Stanford Artificial Intelligence (AI) Lab Fei-Fei Li and Stanford humanities professor and former Provost John Etchemendy are on a mission to “bring the humanities and social thinking into tech.” They conceptualized the Stanford Institute for Human-Centered Artificial Intelligence (HAI) in 2016 in the hopes of involving communities from all corners of Stanford to tackle global challenges.
“We can’t do this alone,” Li said. “We need people from all seven schools, especially from the humanities.”
Monday marked the launch of the new institute, a day-long affair bringing together panelists, speakers and attendees from across the globe and bridging disciplines. The day's talks centered on human-inspired intelligence, the societal impact of AI and human augmentation using AI. Panelists included LinkedIn co-founder Reid Hoffman, DeepMind co-founder Demis Hassabis, senior vice president of Google AI Jeff Dean and director of Microsoft Research Labs Eric Horvitz.
While most AI research focuses on technical development, HAI's express goal is to ensure that the progress of AI remains diverse, humanistic and fundamentally human-centric.
“In order to train AI to benefit humanity,” Li said, “the creators of AI need to represent humanity.”
Bill Gates on AI in education and healthcare
While Gates spoke of AI positively, he also recognized its potential consequences if misused, comparing it in both constructive and destructive power to the development of nuclear fission.
“The world hasn’t had that many technologies that are both promising and dangerous,” Gates said. “We had nuclear weapons and nuclear energy, and so far so good.”
Gates also called for “creativity” in navigating privacy concerns about access to datasets.
“In some domains like education, I am more worried that the privacy concerns, which are appropriate — but if you don’t put much creativity in how you have longitudinal data access while not violating privacy, you are going to default to the datasets not being there,” Gates said.
Unlike nuclear reactors and weapons, which were developed almost entirely by the government, the forefront of AI research lies in university labs and private companies. As a result, Gates observed, “the government just doesn’t see it in the same way they did with previous technologies.”
Despite the potential dangers of AI, however, Gates’ outlook was optimistic. He spoke at length about the potential uses of AI in interpreting large data sets related to health and education — two areas of major focus for the Bill and Melinda Gates Foundation.
Specifically in Africa, the Gates Foundation has been experimenting with the use of AI to help identify the factors that cause high rates of premature birth and malnutrition.
“With AI we are able to take all that data and find some really low cost intervention,” Gates said.
As an example, Gates spoke of applying machine learning algorithms to the connection between health and the gut biome. The complex, non-linear relationship makes it difficult to deduce causes and effects using traditional data analysis methods.
“If you give kids in some countries an antibiotic once a year that costs two cents called azithromycin, it saves a hundred thousand lives,” Gates said. “I do not believe without machine learning techniques we [would have ever been] able to take the dimensionality of this problem to find the solution.”
While health care interventions and drugs are piloted in the United States, Gates said, Africa is where these interventions yield the greatest return in reduced human suffering.
“If everybody in the United States lived to 100, it would not match what we can do in the developing world in terms of net change to human benefit,” Gates said.
In addition to health, Gates described education as a promising area for machine learning algorithms to identify the factors that contribute to a successful teacher or an engaged student. Regarding the current state of machine learning applied to education research, Gates said that “it’s a desert.”
With a good training set, Gates theorized, it might be possible for AI to identify certain consistent causes of a good education.
“With everything we have learned about education, you could still say that the best teacher ever had lived 100 years ago,” Gates said. “You could not say that about doctors.”
More generally, AI offers a “chance to supercharge the social sciences,” he said.
Newsom speaks for inclusion in AI
As the newly elected governor of California, Gavin Newsom hopes to channel the conversation around AI toward the broader public interest.
“The future happens right here, in California,” he said. “That’s why we gotta invest in this lead.”
Newsom served as the lieutenant governor of California for eight years and previously as the mayor of San Francisco. As mayor, he made his name issuing marriage licenses to same-sex couples, a decision that was later reaffirmed by the Supreme Court. In his keynote address, he spoke of the necessity of social inclusion and cohesion in furthering the development of artificial intelligence.
Despite excitement about the future of AI, Newsom spoke about the anxiety surrounding the potential for AI to displace millions of jobs nationwide. The second-fastest-growing industry in Imperial County — trucking, logistics and warehousing — may be at the greatest risk of being automated away, Newsom said. He contends that part of the solution is education.
“Our number one job is to make everyone smarter,” Newsom said. “We can’t play small ball anymore, playing in the margins. We are living in an industrial system in an information age.”
Quoting Plutarch on the “imbalance between the rich and the poor” being one of “the oldest and most fatal ailments of the Republic,” he spoke about the immense danger of social inequality and stratification.
“We are the richest and poorest state in the United States,” he said, noting that a third of the state’s full-time workers earn less than $14 an hour. Newsom stressed that both “growth and inclusion” were essential for the future of AI.
Newsom closed with words that were at once a warning and a resounding message of hope.
“We truly do rise and fall together,” Newsom said. “The best is yet to come.”
Contact Paxton Scott at paxtonsc ‘at’ stanford.edu and William Yin at wyin ‘at’ stanford.edu.