
Symbolic Systems Society hosts technology and ethics symposium

JODIE BHATTACHARYA / The Stanford Daily

Three Ph.D. students presented their work on potential pitfalls of linguistic technology and medical artificial intelligence (AI), as well as social networks’ effects on partisan tensions, at Friday’s Symbolic Systems Society symposium on technology and ethics.

Nikhil Garg, electrical engineering Ph.D. candidate and member of the Stanford Crowdsourced Democracy Team and Society & Algorithms Lab, began by speaking about his research on word embeddings — computer-generated associations between words — and how they reveal the ways in which gender and ethnic stereotypes have changed in the past 70 years.

Word embeddings are generated by having a computer scan works of literature and newspapers, allowing it to find associations between words. However, some word embeddings have raised ethical concerns because AI can absorb the stereotypes present in its training text. For example, studies have shown that word embeddings form strong associations between European or American names and pleasant terms, and between African-American names and unpleasant terms.
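The association studies described above typically measure this kind of bias with cosine similarity between word vectors. The sketch below illustrates the idea using tiny made-up vectors; the numbers, vector names and three-dimensional size are invented for demonstration and are not drawn from any real embedding model.

```python
# Illustrative sketch: each word is a vector, and cosine similarity
# quantifies how strongly two words are associated in embedding space.
# All vectors below are made-up toy data (real embeddings use hundreds
# of dimensions learned from large text corpora).
import math

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-dimensional "embeddings".
vectors = {
    "pleasant":   [0.9, 0.1, 0.2],
    "unpleasant": [0.1, 0.9, 0.3],
    "name_a":     [0.8, 0.2, 0.1],  # hypothetical name vector
}

# A biased embedding space shows a gap: the name sits much closer to
# "pleasant" than to "unpleasant".
gap = (cosine(vectors["name_a"], vectors["pleasant"])
       - cosine(vectors["name_a"], vectors["unpleasant"]))
print(round(gap, 3))
```

A positive gap indicates the name is more strongly associated with pleasant terms; comparing such gaps across groups of names is, in rough outline, how embedding-bias studies quantify stereotype associations.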

Garg, however, emphasized his belief that computation can be used to positive ends, and stated that he believes “we can build better systems” for addressing such ethical concerns.

“I really think computation can be a positive social force … to encourage conversations or to understand how people think and do things in our world,” Garg said.

Electrical engineering Ph.D. candidate Allen Nie ’17 followed Garg by discussing the intersection between AI and healthcare, bringing up examples of AI that can detect skin cancer or assist doctors in diagnoses. According to Nie, the ideal healthcare AI would incorporate human knowledge, collaborate with humans and learn on its own.

However, he cautioned that the potential for AI diagnoses to affect the amount of care doctors take in making their own judgments remains a concern.

“You can always say [AI] is ‘advisory,’ but how much advisory effect is it going to take?” Nie said. “I think we’re moving a little bit too fast into a field without really thinking about ethical problems at this stage.”

Psychology Ph.D. student Luiza Santos spoke next about the Stanford Social Neuroscience Lab’s research on the idea of attitude polarization and her project on the ways in which social networks can influence the expression of moral outrage in the context of political partisanship.

She mentioned studies demonstrating the impacts of polarization, such as confirmation bias (the tendency to interpret information as affirming one’s existing beliefs), distrust of politicians outside one’s own party and a preference for marrying someone who shares one’s ideology. She also touched on the idea of attitude moralization, in which people judge or look down on others whose attitudes conflict with their own values.

In Santos’ department, research has found that the popularization of online social networks may facilitate hostile partisan interaction.

“People are somehow seeing the same content and they’re still cherry-picking what they want to see,” Santos said.

During the discussion portion of the event, the speakers and attendees had the opportunity to exchange their views on technological evolution.

Garg said that companies often end up introducing a technology before thoroughly studying its potential effects. He believes this is partially due to the difficulty of acquiring sufficient data regarding the impacts of a technology.

“I would say in a lot of fields … it’s much easier to deploy something than it is to study it,” Garg said. “On an online platform … there’s like 1,000 deployments there that they update a week. But if you actually wanted to study something, and you’re at an institution, the [Institutional Review Board] is probably going to say no.”

With regard to the benefits and harms of technology, Santos noted that the Internet has brought much more transparency to research. She said that hypotheses are now registered before studies are run, and that fewer non-replicable results are being published in her field of psychology.

“I do feel that in case it was really important how technology helped us to create better science,” Santos said. “Nowadays there’s a big push towards transparency … We pre-register our hypothesis, we open our data, we show our code. This is very different from how psychology used to be conducted 10 years ago.”

Event attendee Elizabeth Nguyen ’22 expressed appreciation for the benefits of technology while noting the importance of giving proper consideration to potential concerns.

“I think technology is great, but when we’re in such a transformative environment such as Stanford, we should always keep ethical concerns in the back of our minds,” Nguyen said. “I feel like we’re all pushing for something, but at the same time we need to keep in mind what those effects might have.”

Friday’s symposium is the first of four that the Symbolic Systems Society — which hosts events on campus related to the major — plans to hold throughout the year. The next symposium’s theme will be “AI and Arts.”


Contact Jodie Bhattacharya at jodieab ‘at’ stanford.edu.
