This past week, the San Francisco Board of Supervisors announced that it would ban the use of facial recognition technology by police and other agencies. The declaration seems at odds with the ethos of a city usually at the forefront of technological innovation, but it follows a trend of California cities placing restrictions on the use of the technology.
The ordinance goes further than an outright ban of the technology, though. City departments that wish to install any new technology that collects people’s information must submit a policy plan to officials along with a surveillance impact report. Based on those documents, the city will then decide whether to allow them to use the technology. Departments that already use surveillance technology that collects citizen data must submit a specific ordinance outlining their use of the technology and allow the city to audit that use on a yearly basis. City officials said the move was largely meant to “ensure the safe and responsible use of surveillance technology.”
This ban continues a trend that has appeared across the globe in the past few months: the questioning of the ethics of facial recognition technology. The conversation began when it became public that China was allegedly using facial recognition technology on its citizens, and had been for some time, specifically to single out one minority group: the Uighurs, a largely Muslim minority. According to a report from The New York Times, the Chinese government was keeping tabs on the comings and goings of the Uighurs for the purpose of search and review, the first recorded use of AI technology for racial profiling.
After this instance came to light, it is no wonder that the city of San Francisco felt the need to reassess its surveillance technology. The example in China highlights the potential to use facial recognition for racial profiling in a way the world has never seen before, and reviewing its use in U.S. cities may be the best way to avoid discrimination of the kind perpetrated by the Chinese government.
There are two sides to the San Francisco decision, though. Some argue that abolishing use of the technology was the right thing to do, but others believe the stance is too extreme and that it hinders the ability of police departments to catch suspects. Of these two stances, I believe that San Francisco made the right choice by abolishing the use of the technology — for now.
To understand why, all we have to do is look at both China, where the technology was used against Chinese citizens, and New York, where the government is struggling to deal with its police department’s use of facial recognition technology. The New York Police Department (NYPD) is now juggling a lawsuit brought by Georgetown Law’s Center on Privacy and Technology, which wants the NYPD to turn over information regarding its use of facial recognition technology. This is because almost nothing is known about the technology at the moment: it’s unclear what kind of facial recognition technology is being used, who it is being shared with (if anyone), and what sort of databases are being used to store the information. These are basic questions that people should have answers to, because not having those answers is a serious privacy concern. They are also questions San Francisco, and any city planning to use the technology, will have to answer before implementing it again.
While I believe that San Francisco is in the right to ban facial recognition at the moment, I think it could be reinstated in the future because of the benefits it provides to criminal investigations. This could only happen once there is a more comprehensive understanding of the technology and how it is being used, though. We should have answers to basic questions about what kind of information is being stored and whether it is being shared with anyone other than the government. Strict regulations on who gets to use the technology and how it is used should be put in place. The more difficult aspect to control would be the will of the government and its employees, and how they choose to use the information. But the only way to keep such concerns in check is to elect officials we believe in and put in place proper oversight.
San Francisco made a difficult decision by banning facial recognition, but it is the right one for the moment. Until there is a better system in place to control the technology and its use, not having it — and thus avoiding the challenges occurring in China and New York — might be the best option.
Contact Alex Durham at ahdurham ‘at’ stanford.edu.