Just four years ago, the movement to ban police departments from using face recognition in the US was riding high. By the end of 2020, around 18 cities had enacted laws forbidding the police from adopting the technology. US lawmakers proposed a pause on the federal government’s use of the tech.
In the years since, that effort has slowed to a halt. Five municipal bans on police and government use passed in 2021, but none in 2022 or in 2023 so far, according to a database from the digital rights group Fight for the Future. Some local bans have even been partially repealed, and today, few seriously believe that a federal ban on police use of face recognition could pass in the foreseeable future. In the meantime, without legal limits on its use, the technology has only grown more ingrained in people’s day-to-day lives.
However, in Massachusetts there is hope for those who want to restrict police access to face recognition. The state’s lawmakers are currently thrashing out a bipartisan state bill that seeks to limit police use of the technology. Although it’s not a full ban, it would mean that only state police could use it, not all law enforcement agencies.
The bill, which could come to a vote imminently, may represent an unsatisfying compromise, both to police who want more freedom to use the technology and to activists who want it completely banned. But it represents a vital test of the prevailing mood around police use of these controversial tools.
That’s because when it comes to regulating face recognition, few states are as important as Massachusetts. It has more municipal bans on the technology than any other state, and it’s an epicenter for civil liberties advocates, academics, and tech companies. For a movement in need of a breakthrough, a lot rides on whether this law gets passed.
Right now in the US, regulations on police use of face recognition are trapped in political gridlock. If a leader like Massachusetts can pass its bill, that could usher in a new age of compromise. It would be one of the strictest pieces of statewide legislation in the country and could set the standard for how face recognition is regulated elsewhere.
On the other hand, if a vote is delayed or fails, it would be yet another sign that the movement is waning as the country moves on to other policy issues.
A history of advocacy
Privacy advocates and public interest groups have long had concerns about the invasiveness of face recognition, which is pivotal to a growing suite of high-tech police surveillance tools. Many of those fears revolve around privacy: live video-based face recognition is seen as riskier than retroactive photo-based recognition because it can track people in real time.
Those worries reached a fever pitch with the arrival of a bombshell: a privacy-shredding new product from a small company called Clearview AI.
Around the same time, evidence was mounting that the accuracy of face recognition tools varied by race and gender. A groundbreaking 2018 study out of MIT by Joy Buolamwini and Timnit Gebru, called Gender Shades, showed that the technology is far less accurate at identifying women and people of color than at identifying white men.
The US government corroborated those results in a 2019 study by the National Institute of Standards and Technology, which found that many commercial face recognition algorithms were 10 to 100 times more likely to misidentify Asian and Black faces than white ones.
Politicians started to wake up to the risks. In May 2019, San Francisco became the first city in the US to ban police use of face recognition. One month later, the ACLU of Massachusetts announced a groundbreaking campaign called “Press Pause,” which called for a temporary ban on the technology’s use by police in cities across the state. Somerville, Massachusetts, became the second city in the United States to ban it.
Over the next year, six more Massachusetts cities, including Boston, Cambridge, and Springfield, approved bans on police and government use of face recognition. Some cities even did so preemptively; in Boston, for example, police say they were not using the technology when it was banned. Major tech companies, including Amazon, Microsoft, and IBM, pulled the technology from their shelves, and civil liberties advocates were pushing for a nationwide ban on its police use.
“Everyone who lives in Massachusetts deserves these protections; it’s time for the Massachusetts legislature to press pause on this technology by passing a statewide moratorium on government use of face surveillance,” Carol Rose, the executive director of the ACLU’s Massachusetts chapter, said in a statement after Boston passed its ban in June 2020.
That moratorium would never happen.
Is your face private?
At first, momentum was on the side of those who supported a statewide ban. The murder of George Floyd in Minneapolis in May 2020 had sent shock waves through the country and reinvigorated public outcry about abuses in the policing system. In the search for something tangible to fix, activists both locally and nationwide alighted on face recognition.
At the beginning of December 2020, the Massachusetts legislature passed a bill that would have dramatically restricted police agencies in the state from using face recognition, but Governor Charlie Baker refused to sign it, saying it was too limiting for police. He said he would never sign a ban into law.
In response, the legislature passed a toned-down bill several weeks later. It was still a landmark achievement, restricting most government agencies in the state from using the technology. It also created a commission tasked with investigating further laws specific to face recognition. The commission included representatives from the state police, the Boston police, the Massachusetts Chiefs of Police Association, the ACLU of Massachusetts, several academic experts, the Massachusetts Department of Public Safety, and various lawmakers from both political parties, among others.
Law enforcement agencies in the state were now permitted access only to face recognition systems owned and operated by the Registry of Motor Vehicles (RMV), the state police, or the FBI. As a result, the universe of photos that police could query was much more limited than what was available through a system like Clearview, which gives users access to all public photos on the internet.
To hunt for someone’s image, police now had to submit a written request and obtain a court order. That’s a lower bar than a warrant, but previously they had simply been able to email over a photo and request a search for suspects in misdemeanor and felony offenses, including fraud, burglary, and identity theft.
At the time, critics felt the bill was lacking. “They passed some initial regulations that don’t go nearly far enough but were an improvement over the status quo, which was nothing,” says Kade Crockford of the ACLU of Massachusetts, a commission member.
Still, the impetus toward a national ban was building. Just as the commission began meeting in June 2021, Senator Ed Markey of Massachusetts and seven other members of Congress introduced a bill to ban federal government agencies, including law enforcement, from using face recognition technology. All these legislators were left-leaning, but at the time, stricter regulation had bipartisan support.
The Massachusetts commission met regularly for a year, according to its website, with a mandate to draft recommendations for the state legislature about further legal limits on face recognition.
As debate ensued, police groups argued that the technology was essential for modern policing.
“The sort of constant rhetoric of many of the appointees who were from law enforcement was that they did not want to tie the hands of law enforcement if the X, Y, Z worst situation happened—a terrorist or other extremely violent activity,” said Jamie Eldridge, a Massachusetts state senator who cochaired the commission, in an interview with MIT Technology Review.
Despite that lobbying, in March 2022 the commission voted to issue a strict set of recommendations for the legal use of face recognition. It suggested that only the state police be allowed to use the RMV database for face matching during a felony investigation, and only with a warrant. The state police would also be able to request that the FBI run a face recognition search.
Of the commission’s 21 members, 15 approved the recommendations, including Crockford. Two abstained, and four dissented. Most of the police members of the commission voted no.
One of them, Norwood Police Chief William Brooks, told MIT Technology Review there were three major things he disagreed with in the recommendations: requiring a warrant, restricting use of the technology to felonies only, and preventing police from accessing face recognition databases outside those of the RMV and the FBI.
Brooks says the warrant requirement “makes no sense” and “would protect no one,” given that the law already requires a court order to use face recognition technology.
“A search warrant is obtained when the police want to search in a place where a person has an expectation of privacy. We’re not talking about that here. We’re just talking about what their face looks like,” he says.
Other police groups and officers serving on the commission, including the Massachusetts public safety office, the Boston Police Patrolmen’s Association, and the Gloucester Police Department, did not respond to multiple requests for comment.
An unsatisfying compromise
After years of discussion, debate, and compromise, the Massachusetts commission’s recommendations were codified in July 2022 into an amendment that has already passed the state house of representatives; the bill carrying it could come to a vote in the state senate any day.
The bill allows image matching, which retroactively identifies a face by matching it against a database of images, in certain cases. But it bans two other types of face recognition: face surveillance, which seeks to identify faces in video and other moving images, and emotion recognition, which tries to infer emotions from facial expressions.
This more nuanced approach is reminiscent of the path that EU lawmakers have taken when evaluating the use of AI in public applications. That framework uses risk tiers: the higher the risks associated with a particular technology, the stricter the regulation. Under the proposed AI Act in Europe, for example, live face recognition on video surveillance systems in public spaces would be regulated more harshly than more limited, non-real-time applications, such as an image search in an investigation of a missing child.
Eldridge says he expects resistance from prosecutors and law enforcement groups, though he is “cautiously optimistic” that the bill will pass. He also says that many tech companies lobbied during the commission hearings, claiming that the technology is accurate and unbiased, and warning of an industry slowdown if the restrictions pass. Hoan Ton-That, CEO of Clearview, told the commission in his written testimony that “Clearview AI’s bias-free algorithm can accurately find any face out of over 3 billion images it has collected from the public internet.”
Crockford and Eldridge say they are hopeful the bill will be called to a vote in this session, which lasts until July 2024, but so far, no such vote has been scheduled. In Massachusetts, like everywhere else, other priorities like economic and education bills have been getting more attention.
Nevertheless, the bill has been influential already. Earlier this month, the Montana state legislature passed a law that echoes many of the Massachusetts requirements. Montana will outlaw police use of face recognition on videos and moving images, and require a warrant for face matching.
The real costs of compromise
Not everyone is thrilled with the Massachusetts standard. Police groups remain opposed to the bill. Some activists don’t think such regulations are enough. Meanwhile, the sweeping face recognition laws that some anticipated on a national scale in 2020 have not been passed.
So what happened between 2020 and 2023? During the three years that Massachusetts spent debating, lobbying, and drafting, the national debate moved from police reform to rising crime, triggering political whiplash. As the pendulum of public opinion swung, face recognition became a bargaining chip among policymakers, police, tech companies, and advocates. Perhaps most importantly, we also grew accustomed to face recognition technology in our lives and public spaces.
Law enforcement groups nationally are becoming increasingly vocal about the value of face recognition to their work. For example, Police Chief Joseph Chacon of Austin, Texas, which has banned the technology, told MIT Technology Review in an interview that he wishes he had access to it in order to make up for staffing shortages.
Some activists, including Caitlin Seeley George, director of campaigns and operations at Fight for the Future, say that police groups across the country have used similar arguments in an effort to limit face recognition bans.
“This narrative about [an] increase in crime that was used to fight the defund movement has also been used to fight efforts to take away technologies that police argue they can use to address their alleged increasing crime stats,” she says.
Nationally, face recognition bans in certain contexts, and even federal regulation, might be on the table again as lawmakers grapple with recent advances in AI and the attendant public frenzy about the technology. In March, Senator Markey and colleagues reintroduced the proposal to limit face recognition at a federal level.
But some advocacy groups still disagree with any amount of political compromise, such as the concessions in the Montana and Massachusetts bills.
“We think that advocating for and supporting these regulatory bills really drains any opportunity to move forward in the future with actual bans,” says Seeley George. “Again, we’ve seen that regulations don’t stop a lot of use cases and don’t do enough to limit the use cases where police are still using this technology.”
Crockford wishes a ban had been politically feasible: “Obviously the ACLU’s preference is that this technology is banned entirely, but we get it … We think that this is a very, very, very compromised common-sense set of regulations.”
Meanwhile, some experts think that some activists’ “ban or nothing” approach is at least partly responsible for the current lack of regulations restricting face recognition. Andrew Guthrie Ferguson, a law professor at American University Washington College of Law who specializes in policing and tech, says outright bans face significant opposition, and that’s allowed continued growth of the technology without any guardrails or limits.
Face recognition abolitionists fear that any regulation of the technology will legitimize it, but the inability to find agreement on first principles has meant regulation that might actually do some good has languished.
Yet throughout all this debate, face recognition technology has only grown more ubiquitous and more accurate.
In an email to MIT Technology Review, Ferguson said, “In pushing for the gold standard of a ban against the political forces aligned to give police more power, the inability to compromise to some regulation has a real cost.”