San Francisco has taken a stand against the potential abuse of facial recognition technology by police and other agencies. In early May, the use of facial recognition software was officially banned in the city.
San Francisco is now the first American city to prohibit a tool that police officers use to search for everyone from petty criminals to terrorists and perpetrators of mass killings.
Authorities used the same technology to identify the suspect in June’s mass shooting. Civil liberties groups, however, fear that the tool’s potential abuse could lead to oppressive surveillance across the country.
Aaron Peskin, the city supervisor who sponsored the bill, said it sends a powerful message to the nation, coming from a city that has been transformed by technology.
“I think part of San Francisco being the real and perceived headquarters for all things tech also comes with a responsibility for its local legislators,” Mr. Peskin said in an interview with The New York Times.
He added, “We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here.”
Critics, on the other hand, argue that instead of focusing on bans, San Francisco should craft regulations that acknowledge the tool’s usefulness.
“It is ridiculous to deny the value of this technology in securing airports and border installations,” said Jonathan Turley, a constitutional law expert at George Washington University.
“It is hard to deny that there is a public safety value to this technology,” he added.
San Francisco will likely not be the only city in the United States to ban facial recognition. Similar bills are under consideration in Oakland and in Somerville, Mass. A bill in the Massachusetts state legislature would impose a prohibition on facial recognition as well as on other remote biometric surveillance systems. A bill introduced on Capitol Hill would bar users of commercial facial recognition technology from collecting and sharing data that identifies or tracks customers without their consent. That bill, however, does not address the government’s use of such technology.
Matt Cagle, a lawyer with the A.C.L.U. of Northern California, summed up the many concerns about facial recognition. Cagle believes the technology “provides [the] government with unprecedented power to track people going about their daily lives. That’s incompatible with a healthy democracy.”
Cagle added that the San Francisco proposal “is really forward-looking and looks to prevent the unleashing of this dangerous technology against the public.”
The technology is currently used across the U.S. in airports and big stadiums, and by police departments. Pop star Taylor Swift has also acknowledged the tool’s usefulness in identifying stalkers, admitting it had been used at one of her shows.
The battle in San Francisco over facial recognition technology is mainly theoretical. Local authorities do not currently deploy such tools; the airports and ports that do are under federal jurisdiction, where the legislation does not apply.
According to Jennifer Friedenbach, the executive director of the Coalition on Homelessness, biometric finger scans and photos are used in some local homeless shelters. Friedenbach said the use of these technologies deterred undocumented residents from using the shelters.
The issue has divided opposing sides in a city with a high rate of property crime. The bill prohibits city agencies from using facial recognition technology or other systems that glean identifying information, and it is part of a legislative package requiring local agencies to establish policies governing the use of such tools. Still, some exemptions favor prosecutors when transparency requirements would interfere with investigations.
The San Francisco Police Officers Association, the officers’ union, believes the ban will hamper officers’ efforts to investigate crimes.
“Although we understand that it’s not a 100 percent accurate technology yet, it’s still evolving,” said the president of the association, Tony Montoya. “I think it has been successful in at least providing leads to criminal investigators.”
“Basically, governments and companies have been very secretive about where it’s being used, so the public is largely in the dark about the state of play,” Mr. Cagle said, explaining why the extent of the technology’s use across the country is impossible to determine precisely.
Dave Maass, a senior investigative researcher at the Electronic Frontier Foundation, provided an incomplete list of police departments that have used the technology, including those in Las Vegas, San Jose, Orlando, San Diego, Boston, New York City, Detroit, and Durham, N.C.
Maass also listed the Colorado Department of Public Safety, the California Department of Justice, the Pinellas County Sheriff’s Office in Florida, and the Virginia State Police.
At many airports and seaports in the country, U.S. Customs and Border Protection uses facial recognition on travelers, who stand before cameras that match their faces against their passport photos. The agency says it complies with privacy laws, but it has still drawn criticism from the Electronic Privacy Information Center.
“When you have the ability to track people in physical space, in effect everybody becomes subject to the surveillance of the government,” said the group’s executive director, Marc Rotenberg, pointing out a bigger concern.
The technology has advanced rapidly, turning once far-fetched ideas into reality. Smartphones and other devices now feature facial recognition, which is used to unlock them.
However, experts fear that the government is shirking its responsibility to protect people’s privacy. A cautionary example already exists in China, where authorities keep close surveillance over the Uighurs, a mostly Muslim minority, using the technology integrated with a system of about 200 million cameras.
Civil liberties advocates in the U.S. warn that facial recognition used without people’s consent could threaten their freedom to attend political protests or meetings anonymously. Bradford L. Smith, the president of Microsoft, has asked Congress to oversee the use of the technology, saying it is too risky for companies to police themselves.
Arguments over the issue have intensified since studies were published suggesting the technology is biased in recognizing certain faces. Companies like I.B.M., Microsoft, and Amazon have since said they improved their tools and found no such differences in accuracy.
Last year, the American Civil Liberties Union and other nonprofit organizations called on Amazon to stop selling such technologies to governments, warning that African-Americans and women could easily be misidentified and wrongly arrested.
In an essay, Luke Stark, a postdoctoral researcher at Microsoft Research Montreal, described facial surveillance as “the plutonium of artificial intelligence,” reasoning that it must be “recognized as anathema to the health of human society, and heavily restricted as a result.”
Alvaro Bedoya, director of Georgetown University’s Center on Privacy and Technology, said that 30 states allow authorities to search driver’s license databases to match the faces of suspected criminals, which he likened to an endless police lineup. The difference, he said, is that an algorithm, rather than real people, does the matching.
“This is the most pervasive and risky surveillance technology of the 21st century,” Bedoya added, pointing to the lack of regulation of the technology’s use.
Daniel Castro, who directs the Center for Data Innovation at the Information Technology and Innovation Foundation and opposes an outright ban, said he would prefer that authorities gain access to the technology’s data only after obtaining a warrant from a judge, following the Supreme Court’s guidelines.
Supporters of the ban, on the other hand, say the use of the tools should be halted for further study before harm is done.
“The government and the public don’t have a handle on what the technology is and what it will become,” said Ben Ewen-Campen, the city councilor who sponsored the surveillance ban in Somerville, a Boston suburb.
Ed Davis, the former police commissioner in Boston, said it was “premature to be banning things.” Davis led the department during the Boston Marathon attack. He added that no one in the country wants to replicate the scenario in China.
“This technology is still developing,” Davis said, “and as it improves, this could be the answer to a lot of problems we have about securing our communities.”
Joel Engardio, vice president of Stop Crime SF, agrees that the technology isn’t yet perfect, but he said the city shouldn’t wholly abandon its future use once the tools have improved.
“Instead of an outright ban, why not a moratorium?” Engardio asked.
“Let’s keep the door open for when the technology improves. I’m not a fan of banning things when eventually it could actually be helpful.”