Second day of HAI conference features discussion of AI, innovation, art

Oct. 30, 2019, 12:32 a.m.

Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) concluded its inaugural Ethics, Policy & Governance conference on Tuesday, a two-day event that featured dozens of discussions on the implications of artificial intelligence (AI) for a range of industries and disciplines. 

Stephanie Dinkins and Ge Wang delivered the conference’s final keynote in the morning, focusing on artistic possibilities enabled by AI. U.S. Chief Technology Officer Michael Kratsios also spoke in the morning, before afternoon breakout sessions whose topics included “AI, Democracy and Elections,” “AI Safety” and “Race, Rights and Facial Recognition.”

AI innovation and government regulation

Kratsios opened the second day of the conference in a moderated conversation with Eileen Donahoe, the executive director of Stanford’s Global Digital Policy Incubator. Responding to concerns from both Donahoe and the audience about the ethics of AI development, Kratsios defended the compatibility of AI innovation with democratic values while trumpeting the Trump administration’s pro-technology stance.

A position created in 2009 under the Obama administration, the U.S. CTO serves as the president’s top advisor on technology- and data-related issues. Kratsios explained that his office deals with issues ranging from quantum computing to rural broadband, all with the broader goal of facilitating a regulatory environment that supports technological innovation.

Kratsios described how his team translated the Trump administration’s high-level themes — American exceptionalism, prioritizing the American worker and attacking unfair trade deals — into a more precise technological agenda for the nation.

“We set the core high-level agenda to be American leadership in emerging technologies,” he said. “We fundamentally believe that the U.S. must lead the world in next-generation technologies.” 

He called for America to lead not just in AI, but also in other emerging technologies like quantum computing and 5G.

While conceding that China has made gains in developing advanced AI technologies, Kratsios contended that the U.S. would remain a technological leader so long as it maintained a free-market system of innovation, built on a synergistic relationship between federally funded university research and its private-sector commercialization.

“This system is able to generate results,” he said. “This is very different from a centrally-planned industrial policy that some of our adversaries are pursuing. Everything that we do … is centered on maximizing the output of this unique U.S. innovation.”

Kratsios encouraged audience members to read a regulatory guidance memo set to be released by the White House in the next month. The memo, he said, will outline general principles for creating a federal regulatory environment that best supports AI innovation.

Responding to questions about the compatibility of AI innovation with civil liberties and privacy rights, Donahoe pointed to the Trump administration’s recent decision to sign on to AI principles developed by the Organization for Economic Co-operation and Development (OECD). Contrasting that with Trump’s usual reluctance to enter multilateral agreements, Kratsios framed the decision as an example of the administration’s commitment to encouraging ethical AI innovation alongside Western allies.

“We have adversaries … who are using artificial-intelligence technologies to surveil their people, to imprison minorities, to flagrantly violate human rights, and those are the type of values that we cannot see exported, or do not want to see exported around the world,” he said. “That is why American leadership in artificial intelligence is so critical.”

AI and art

Ge Wang, an associate professor at the Stanford Center for Computer Research in Music and Acoustics, opened the conference’s final keynote speech with the question, “What do we really want from AI?”

Wang emphasized the “humanistic, artistic and social act of engineering” and highlighted two key principles from his book, “Artful Design: Technology in Search of the Sublime”: “that which can be automated should be” and “that which cannot be meaningfully automated should not be.”

In closing, Wang focused on bringing “human interaction into the loop with AI.” He asked the audience four questions: how do we do no harm, how do we do good, how do we want to live with our technologies, and how do you want to live, period?

Stephanie Dinkins, the new artist in residence at HAI, introduced herself as “interested in documenting things.” 

Dinkins works with AI and deep-learning algorithms to create art pieces. One of her sculptures, titled “Not the Only One,” uses AI to respond out loud to viewers’ questions.

She has a personal relationship with the AI, describing it as “the multigenerational memoir of one black American family told from the ‘mind’ of an artificial intelligence with evolving intellect.” She said she interviewed members of her family to create “a substantive, particular dataset” on which to train the deep-learning algorithm. 

She expressed concern about bias in the datasets used to develop AI. Challenging the belief that big data holds the solution to today’s economic and social ills, Dinkins asked the audience to consider alternate approaches to social justice.

“How can oral history, vernacular learning and small data break the mold of ‘big-data dominance’ to become resources and sustaining processes for underutilized communities?” she asked. 

She said she doesn’t believe “that the math will take care of” what she described as a minimal range of perspectives in the tech industry.

Contact Berber Jin at fjin16 ‘at’ stanford.edu and Emma Talley at emmat332 ‘at’ stanford.edu.

Berber Jin is a senior history major and desk editor for the university beat at the Daily. He enjoys covering university China policy and technology ethics, and is currently writing an honors thesis on the Caribbean anti-colonialist George Padmore. He is originally from New York, NY. Emma Talley is the Vol. 265 Executive Editor. Previously, she was the Vol. 261 Editor in Chief. She is from Sacramento, California, and has previously worked as a two-time news editor and the newsroom development director. Emma has reported with the San Francisco Chronicle with the metro team covering breaking news and K-12 education. Contact her at etalley 'at' stanforddaily.com.
