
Complexity Theory: ‘I am just an engineer!’

By Dilara Soylu

Part of “Complexity Theory,” a new column on the tangled questions of our technological age

One of my biggest sources of joy in life is satisfying my intellectual curiosity. I feel almost certain that, throughout history, many other people have shared this feeling. For some, it takes the form of passively learning something new; for others, it is discovering a new insight or inventing a tool. Enthralled by the joy of intellectual satisfaction, we may overlook the adverse effects our discoveries and inventions can have on third parties. To say that people caught in this tragic situation have outright bad intentions would be a superficial reading of the problem: even those acting with good intentions can find themselves in this trap.

Part of the problem is that, in seeking this satisfaction, people driven by intellectual curiosity turn to educational institutions, which, despite being great catalysts of curiosity, do not push students hard enough to think through the ramifications of applying what they learn to real-life problems, leaving their “ethical-reasoning muscle” unworked. This is especially true for students and researchers in technical disciplines, who tend not to value ethical education as much as technical education.

In a recent talk on ethical tech, given as part of the Human-Computer Interaction Seminar series at Stanford, Dr. Casey Fiesler, director of the Internet Rules Lab (IRL) at the University of Colorado Boulder, shared her work on research ethics and ethics education, as well as on broadening participation in computing.

When asked about their data collection methods, Fiesler said, many researchers fall back on the defense that “the data was already public.” Researchers take for granted that any data made public can be used for research purposes, which may have unintended consequences, especially for vulnerable communities. According to a study conducted by one of Fiesler’s Ph.D. students, being mentioned in an article caused a Tumblr user’s identity to be revealed to a much broader audience. Fiesler also shared a 2018 study by Ayers et al., which found that “72% of PubMed (a health database) articles using Twitter data quoted at least one tweet, leading to a Twitter user 84% of the time.” It would not be fun to discover that your tweet had been quoted in an article diagnosing mental disorders through social media activity.

“A tweet is not a piece of data in the same way like a measurement of the weather is. It’s people,” Dr. Fiesler correctly noted. In a study that Fiesler and her group conducted to answer the question “How do Twitter users feel about the use of their tweets in research?” they found that more than half of the Twitter users surveyed were unaware that their tweets had been used in research, and almost half thought researchers were not permitted to use their tweets without permission. Asked how they felt about researchers using their data, participants said they cared about things like the purpose of the study and whether they would be identifiable. Dr. Fiesler summarized the findings: “Data collection decisions should not be context-agnostic.” Beyond “publicness,” what data is collected, what the purpose is, who is collecting the data, and the expectations and norms of the users should all matter when researchers make data collection decisions.

At the 2018 iteration of the Artificial Intelligence, Ethics and Society (AIES) conference, a paper on predictive policing sparked controversy. When asked about the ethical implications of the work, one of its authors responded that he couldn’t be sure, adding, “I am just an engineer!” — an unfortunate choice of words, yet a great title for this piece. In a tweet, Rob Reich, professor of political science at Stanford, quoted the expression, remarking that it was the “[r]oot of a very large problem with (some) technologists.” People driven by intellectual curiosity sometimes focus so much on pushing the frontiers of what they can do that they forget to ask whether they should in the first place. Educational institutions excel at pushing students to ask the former question; perhaps it is time to encourage students to ask the latter more often. Integrating ethics education into the curricula of all disciplines might be a good place to start.

If we agree that ethics education is important, then the crucial question is how best to integrate it into already established curricula, specifically the computer science curriculum. Stanford incorporates ethics into the CS curriculum via two classes: CS181, the long-standing CS, ethics and public policy class, and CS182, a newly developed version that covers more recent ethical troubles in tech. Both are standalone courses.

According to Fiesler, the “I am just an engineer” problem stems from the common way computer science is taught, which treats ethics as its own specialization. She thinks ethics should be taught as part of technical practice, rather than in its own class as is done at Stanford. In a forthcoming study, Fiesler examined the syllabi of artificial intelligence courses at the top 20 institutions in the U.S. News rankings. Of the 100 classes surveyed, fewer than 20 mentioned an ethics-related topic, even under an intentionally loose definition that counted the standalone ethics classes as well. Ideally, ethics would be mentioned in far more of these classes, especially given that their focus is on AI.

In her talk, Fiesler listed arguments for integrating ethics into traditionally technical classes. Doing so conveys the strong message that ethics is “a part of the technical practice,” “avoids isolating ethics from its context” and “ensures that everyone who takes a technical class is exposed to ethics content,” she says. I find the last argument particularly subtle but compelling. The barrier to entry for becoming a programmer is low for anyone with access to the vast resources the web has to offer: anyone can take an introductory coding class and apply the skills they learn to their own domain of specialty, often boosting their productivity. And applying AI models to various problems keeps getting easier thanks to the ever-improving AI-as-a-service offerings of cloud computing companies.

A common worry about integrating ethics into even elementary CS classes is that it takes time away from course material essential to the technical objectives of the class. Fiesler, however, has suggestions for achieving this goal without interrupting the usual flow of a class. One of her suggestions is “re-contextualizing existing assignments to include implicit ethical components.”

For example, students in an introductory CS class could be assigned to code an algorithm for college admissions, requiring them to loop through all the applicants and recommend whether to admit each one based on the kinds of factors such algorithms might take into account in reality. The assignment could then present a few outliers, showing where these algorithms fail and produce unexpected or undesirable results. Through this assignment, students would learn how to use loops and conditionals, but also see what can go wrong with such algorithms.
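A hypothetical version of such an assignment might look like the sketch below. The field names, weights and cutoff are all invented for illustration — the point is only that a loop, a conditional and a plausible-looking scoring rule are enough to produce an unfair outcome for an outlier.

```python
def recommend(applicants):
    """Return an admit/reject recommendation for each applicant
    using a naive weighted score over GPA and test score."""
    decisions = []
    for a in applicants:
        # A simplistic rule: 60% weight on GPA (out of 4.0),
        # 40% weight on test score (out of 1600), admit above 0.8.
        score = 0.6 * a["gpa"] / 4.0 + 0.4 * a["test"] / 1600
        decisions.append("admit" if score >= 0.8 else "reject")
    return decisions

applicants = [
    {"name": "A", "gpa": 3.9, "test": 1550},  # strong on both metrics
    {"name": "B", "gpa": 2.8, "test": 1100},  # weak on both metrics
    # The outlier: a top student who, say, could not afford to take
    # the test. The rule treats the missing score as 0 and rejects
    # them despite a perfect GPA.
    {"name": "C", "gpa": 4.0, "test": 0},
]

print(recommend(applicants))  # ['admit', 'reject', 'reject']
```

Applicant C exposes the flaw: the rule cannot distinguish “scored zero” from “has no score,” so a perfect student is rejected. Students completing such an exercise practice loops and conditionals while confronting how an unexamined assumption becomes a harmful decision.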

Integrating ethics education into the CS curriculum is a pressing need, given that there is a new tech controversy every year. That said, I find it odd that this conversation often focuses on just CS and its related disciplines. People with all sorts of backgrounds can and do work on tech projects. Even prominent figures in the tech scandals that push us to have these conversations about CS and ethics can come from non-technical backgrounds. One example is Dr. Aleksandr Kogan, the researcher behind the app at the center of the Cambridge Analytica scandal; Kogan holds a B.A. and a Ph.D. in psychology.

Likewise, students in other disciplines, including but not limited to management, business, finance, political science, law, physics and medicine, can go on to work on controversial projects in their fields. History is full of such catastrophes. Regulatory bodies have been established, reactively, to prevent them from happening again, but there will always be situations left unaccounted for. Instead of waiting for catastrophes before taking reactive measures, we should proactively integrate ethics into the curricula of every discipline, pushing students to think about the implications of their work and ideas at all times.

Knowledge is power, and the institutions equipping people with this power are also responsible for teaching them how not to use it.

Contact Dilara Soylu at soylu ‘at’ stanford.edu