For activists across the country who have taken to the streets to demand racial justice and police accountability, the sound of protest has been not just the sound of chants, but the sound of helicopters. For many police departments, these protests have been an occasion to bring out all their high-tech toys, and those include surveillance aircraft, ranging from police helicopters to fixed-wing surveillance aircraft to drones. Like all expensive law enforcement practices, police aerial surveillance should be questioned and reevaluated as part of a broader divestment from police in the United States. That is doubly true considering the powerful new surveillance technologies that will increasingly be put into the skies above American cities and towns — if nothing is done to stop them.
 
Last week, U.S. Customs and Border Protection (CBP) was discovered flying a large, high-altitude Predator drone above Minneapolis. CBP has no business deploying military-grade drones — authorized by Congress for border patrol — over domestic political protests, and these drones should not be flying over Minneapolis, or any other U.S. community. Such flights raise questions about the lack of transparency (we don’t know what kinds of equipment the agency had on board), the lack of privacy protections (CBP being a troubled agency with a particular absence of institutional respect for privacy), and mission creep.
 
The Minneapolis large-drone deployment was especially ominous because it involved surveillance of people protesting police abuse — but it’s just the tip of the iceberg. CBP is the only non-military agency that has received permission from the FAA to fly large drones at high altitudes, but it frequently lends out its Predators to other agencies for uses ranging far afield from CBP’s border mission — and from the border. The defense contractor General Atomics is carrying out test flights of a similar drone over San Diego this year.
 
Of course, surveillance abuses can come not just from drones but also from piloted aircraft. In 2004, for example, a New York City couple was filmed having sex at night on a pitch-black rooftop balcony — where they had every reason to expect privacy — by a $9.8 million NYPD helicopter equipped with night vision that had been deployed to monitor a nearby bicycle protest. Rather than apologize, NYPD officials flatly denied that this filming constituted an abuse, telling a television reporter, “This is what police in helicopters are supposed to do, check out people to make sure no one is . . . doing anything illegal.”
 
More recently, we have seen piloted airplanes used for long-term, mass surveillance over the entire city of Baltimore using wide-area motion imagery. This is just the latest surveillance technique to be deployed against communities of color, and is a clear violation of residents’ constitutional right to privacy. A historic legal battle over the program is now underway as the result of an ACLU legal challenge.
 
The FBI also regularly flies “a small air force” of surveillance aircraft above American cities — including over protest marches such as those in Ferguson, Missouri, and in Baltimore following the 2015 death of Freddie Gray in police custody. The planes are typically registered to front companies to hide their identities as government planes. In addition to the CBP drone in Minneapolis, the Department of Homeland Security deployed piloted surveillance aircraft over George Floyd protests in Washington, D.C., and 13 other cities, sending video to a centralized CBP facility, letting federal agents view live aerial footage on their phones, and storing the footage for potential use in criminal investigations.
 
Ultimately, the answer is for communities — and the federal government — to put in place strong privacy protections that apply equally to drones and piloted aircraft.
 
Today, police helicopters — which first appeared in the late 1940s — have become a well-established law enforcement tool in many cities. Police helicopters and fixed-wing aircraft are used for a variety of purposes, including patrol, pursuit, search and rescue, and surveillance.
 
But police helicopters are also used to intimidate through a militaristic display of raw power. That role was epitomized by the use of a military Blackhawk helicopter last week to disperse peaceful racial justice protesters in Washington, D.C. by hovering low over a street, creating wind gusts strong enough to snap tree limbs. Experts call this tactic a “show of force” and say it’s a common military tactic to “intimidate and remind potential enemies of your armed presence.”
 
Beyond such a clearly abusive deployment, however, even civilian police helicopters can have a similar effect. Police helicopters — many of which are military surplus aircraft — are large, loud machines, heavily associated with military weaponry and which, by virtue of their position in the sky, signify surveillance, dominance, and control. In at least some places they are consciously used by police for the purpose of deterring crime — which might sound like a good thing until you pause to reflect that they do so by making all residents of certain neighborhoods feel as though they are being watched by an overpowering occupying force. And those neighborhoods aren’t affluent white ones.
 
American communities should take a hard look at their police departments’ aerial surveillance programs as part of an overall reassessment of and divestment from law enforcement. How beneficial are they really for the community as a whole? Are their benefits proportional to their cost — including the opportunity cost of underfunding social programs to support expensive aircraft? To what extent are they used in positive ways such as for search and rescue, compared to negative ones such as “dominating” people exercising their First Amendment right to peaceably assemble? Do they fit into positive community visions that stress support, uplift, and assistance, rather than the harsh hammer of a militarized enforcement approach?
 
For many communities, the answer will be no, and those communities should end their aerial surveillance programs. Maybe others will decide to allow their law enforcement departments to retain aerial surveillance capabilities — but those should be re-focused, regulated, and scaled back.
 
Regulations should ensure transparency so communities can decide for themselves what kind of surveillance the police departments that serve them are deploying. For no justifiable reason, CBP refused to say which agency it was flying its Predator over Minneapolis on behalf of, or even whether that agency was federal or state. (The New York Times reported weeks later that it was for a branch of Immigration and Customs Enforcement.) We also don’t know what kinds of surveillance technologies that drone was carrying. The U.S. House of Representatives has launched an investigation into the case, but it shouldn’t take a congressional investigation to get such information.
 
The privacy protections for all aerial surveillance that we think are necessary (which we have previously laid out with regard to drones) would not allow for aerial mass surveillance of any kind, including wide-area surveillance and the use of Dirtboxes — electronic dragnets that sweep up people’s cell phone data. In general, communities should engage in careful monitoring and regulation of the devices that are installed on law enforcement aircraft. Police also shouldn’t be permitted to engage in suspicionless aerial surveillance — flying around aimlessly looking for trouble based on the hunches or curiosity of their pilots — or any other form of aerial patrol. Where aerial surveillance is used, it should be carried out only in emergencies, for specific purposes that don’t implicate privacy such as accident- or crime-scene photography, or where there are specific and articulable grounds to believe that the aircraft will collect evidence relating to a specific instance of criminal wrongdoing (preferably through a warrant requirement, as in some states such as Minnesota).
 
Law enforcement will argue that it needs aerial surveillance to achieve “situational awareness” across large areas during times of civil unrest and/or large protests. Communities should do a hard examination of that claim. Just how often does law enforcement have a legitimate need for that kind of surveillance? Can the aims of such flights be achieved through ground observations or other techniques that have lower costs, fewer chilling effects on protest, and less risk of lending themselves to abusive surveillance? If and when there is a genuine need for aerial surveillance, are high-tech surveillance devices necessary, or can that need be met through plain-view visual surveillance? And how much is the community paying for what is most likely a rarely-needed capability?
 
Good privacy protections are especially important given the futuristic surveillance devices that are now available. Drones and other aircraft are a platform — one that can be used to carry any number of other technologies up into the sky. Among the sensors they can carry are GPS, radar, range-finders, magnetic-field change sensing, sonar, radio frequency sensors, and chemical and biochemical sensors. They can carry Lidar, which can be used to see through some barriers such as foliage and for such functions as change detection, in which even small changes in a landscape, such as tire tracks, are automatically flagged. And of course, aircraft can carry all kinds of cameras, including super-powerful gigapixel lens arrays that can sweep in enormous areas, and infrared sensors that “see” beyond the visual part of the electromagnetic spectrum.
 
Perhaps most significantly, camera footage and other data can increasingly be analyzed using face recognition, license plate recognition, and other artificial intelligence techniques that promise to supercharge the analysis of datasets that are too large for humans to reasonably review. That’s not even counting whatever technologies may be developed in the future.
 
Given the role that aerial surveillance has played in the George Floyd and other protests, as well as the tsunami of new aerial surveillance technologies that are coming our way, such capabilities should be part of the conversation over police divestment that the recent protests have sparked.

Jay Stanley, Senior Policy Analyst, ACLU

Wednesday, June 24, 2020

[Featured image: Protestors react to a low flying helicopter during a march in Brooklyn, New York.]

Early this year, Detroit police arrested Robert Williams — a Black man living in a Detroit suburb — on his front lawn in front of his wife and two little daughters (ages 2 and 5). Robert was hauled off and locked up for nearly 30 hours. His crime? Face recognition software owned by Michigan State Police told the cops that Robert Williams was the watch thief they were on the hunt for.
 
There was just one problem: Face recognition technology can’t tell Black people apart. That includes Robert Williams, who has only one thing in common with the suspect caught on the watch shop’s surveillance feed: they are both large-framed Black men.

[Image: Michigan State Police Investigative Lead Report]

But convinced they had their thief, Detroit police put Robert Williams’ driver’s license photo in a lineup with other Black men and showed it to the shop security guard, who hadn’t even witnessed the alleged robbery firsthand. The shop security guard — based only on a review of a blurry surveillance image of the incident — claimed Robert was indeed the guy. With that patently insufficient “confirmation” in hand, the cops showed up at Robert’s house and handcuffed him in broad daylight in front of his own family.
 
It wasn’t until after spending a night in a cramped and filthy cell that Robert saw the surveillance image for himself. While interrogating Robert, an officer pointed to the image and asked if the man in the photo was him. Robert said it wasn’t, put the image next to his face, and said “I hope you all don’t think all Black men look alike.”
 
One officer responded, “The computer must have gotten it wrong.” Robert was still held for several more hours before finally being released into a cold and rainy January night, where he waited about an hour on a street curb for his wife to come pick him up. The charges have since been dismissed.
 
The ACLU of Michigan is lodging a complaint against Detroit police, but the damage is done. Robert’s DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record. Robert’s wife, Melissa, was forced to explain to his boss why Robert wouldn’t show up to work the next day. Their daughters can never un-see their father being wrongly arrested and taken away — their first real experience with the police. The girls have even taken to playing games involving arresting people, and have accused Robert of stealing things from them.
 
As Robert puts it: “I never thought I’d have to explain to my daughters why daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?”
 
One should never have to. Lawmakers nationwide must stop law enforcement use of face recognition technology. This surveillance technology is dangerous when wrong, and it is dangerous when right.
 
First, as Robert’s experience painfully demonstrates, this technology clearly doesn’t work. Study after study has confirmed that face recognition technology is flawed and biased, with significantly higher error rates when used against people of color and women. And we have long warned that one false match can lead to an interrogation, arrest, and, especially for Black men like Robert, even a deadly police encounter. Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about.
 
That brings us to the second danger. This surveillance technology is often used in secret, without any oversight. Had Robert not heard a glib comment from the officer who was interrogating him, he likely never would have known that his ordeal stemmed from a false face recognition match. In fact, people are almost never told when face recognition has identified them as a suspect. The FBI reportedly used this technology hundreds of thousands of times — yet couldn’t even clearly answer whether it notified people arrested as a result of the technology. To make matters worse, law enforcement officials have stonewalled efforts to obtain documents about the government’s actions, ignoring a court order and multiple requests for case files that would provide more information about the shoddy investigation that led to Robert’s arrest.
 
Third, Robert’s arrest demonstrates why claims that face recognition isn’t dangerous are far-removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation. On the computer’s erroneous say-so, people can get ensnared in the Kafkaesque nightmare that is our criminal legal system. Every step the police take after an identification — such as plugging Robert’s driver’s license photo into a poorly executed and rigged photo lineup — is informed by the false identification and tainted by the belief that they already have the culprit. They just need the other parts of the puzzle to fit. Evidence to the contrary — like the fact that Robert looks markedly unlike the suspect, or that he was leaving work in a town 40 minutes from Detroit at the time of the robbery — is likely to be dismissed, devalued, or simply never sought in the first place. And when defense attorneys start to point out that parts of the puzzle don’t fit, you get what we got in Robert’s case: a stony wall of bureaucratic silence.
 
Fourth, fixing the technology’s flaws won’t erase its dangers. Today, the cops showed up at Robert’s house because the algorithm got it wrong. Tomorrow, it could be because a perfectly accurate algorithm identified him at a protest the government didn’t like or in a neighborhood in which someone didn’t think he belonged. To address police brutality, we need to address the technologies that exacerbate it too. When you add a racist and broken technology to a racist and broken criminal legal system, you get racist and broken outcomes. When you add a perfect technology to a broken and racist legal system, you only automate that system’s flaws and render it a more efficient tool of oppression.
 
It is now more urgent than ever for our lawmakers to stop law enforcement use of face recognition technology. What happened to the Williams family should not happen to another family. Our taxpayer dollars should not go toward surveillance technologies that can be abused to harm us, track us wherever we go, and turn us into suspects simply because we got a state ID.

Victoria Burton-Harris, Criminal Defense Attorney, McCaskey Law, PLC,
& Philip Mayor, Senior Staff Attorney, ACLU of Michigan

Wednesday, June 24, 2020


Clare Garvie, Georgetown Law’s Center on Privacy and Technology

In January, Michigan resident Robert Williams was arrested for a shoplifting incident that had occurred at a downtown Detroit watch store a year earlier—a crime he did not commit. Police thought he was connected to the crime because of a face recognition search that found similarities between grainy surveillance footage of the theft and Mr. Williams’ driver’s license photo.
 
What makes this case unique is not that face recognition was used, or that it got it wrong. What makes it unique is that we actually know about it.
 
The sheer scope of police face recognition use in this country means that others have almost certainly been—and will continue to be—misidentified, if not arrested and charged for crimes they didn’t commit. At least one quarter of the 18,000 law enforcement agencies across the United States have access to a face recognition system. Over half of all American adults are—like Mr. Williams—in a driver’s license database searched using face recognition for criminal investigations (and in some states, for immigration enforcement too). States have spent millions of dollars on face recognition systems, some of which have been in place for years and are searched hundreds, if not thousands of times per month.
 
Florida, for example, implemented its police face recognition system in 2001. By 2016 and as much as $8 million later, local, state, and federal agencies were searching a database of 11 million mugshots and 22 million state driver’s license photos 8,000 times per month.
 
We have no idea how accurate these searches are, and how many lead to arrests and convictions. If we were to assume that misidentifications happened in only one out of a thousand searches, or 0.1% of the time, that would still amount to eight people implicated in a crime they didn’t commit every month—in Florida alone. But the Pinellas County Sheriff’s Office, which operates the system, does not conduct audits. Defendants are rarely, if ever, informed about the use of face recognition in their cases.
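The arithmetic behind that estimate is simple. Here is a minimal sketch in Python, using the 8,000 searches per month reported for Florida and the purely hypothetical 0.1% misidentification rate assumed above (because no audits are conducted, the error rate is an assumption, not a measurement):

```python
# Back-of-the-envelope estimate of monthly misidentifications.
# The search volume comes from the reported Florida figures; the
# error rate is a hypothetical assumption, not audit data.
searches_per_month = 8_000      # reported Florida search volume (2016)
assumed_error_rate = 0.001      # hypothetical 0.1% misidentification rate

misidentified_per_month = searches_per_month * assumed_error_rate
print(f"Estimated misidentifications per month: {misidentified_per_month:.0f}")
# prints: Estimated misidentifications per month: 8
```

Changing assumed_error_rate shows how quickly the count grows; because no audits exist, there is no way to check any assumed rate against reality.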
 
And yet these searches have real consequences.
 
No one knows this better than Willie Allen Lynch, arrested in 2015 for selling $50 worth of crack cocaine to two undercover Jacksonville officers. As with Mr. Williams in Michigan, a face recognition match implicated Mr. Lynch as a suspect and was the main evidence supporting his arrest. Unlike Mr. Williams, however, Mr. Lynch was convicted of the crime. He is currently imprisoned and serving an eight-year sentence. He maintains his innocence.
 
No one knows this better than Amara Majeed, who on April 25, 2019 woke up to the nightmare of having been falsely identified by a face recognition system as a suspect in a deadly terrorism attack in Sri Lanka. Sri Lankan authorities eventually corrected the mistake, but not before Ms. Majeed had received death threats targeting both herself and her family back home.
 
And no one knows this better than Robert Williams, who was arrested in front of his young children and detained for 30 hours for a crime to which he had no connection other than a passing resemblance, according to a face recognition system, to a person caught on poor quality surveillance footage.
 
We cannot account for the untold number of other people who have taken a plea bargain even though they were innocent, or those incarcerated for crimes they did not commit because a face recognition system thought they looked like the suspect. But the numbers suggest that what happened to Mr. Williams is part of a much bigger picture.
 
Despite the risks, face recognition continues to be purchased and deployed around the country. Within the month, the Detroit Police Department is set to request $220,000 from the City Council to renew its $1 million face recognition contract. An analysis of thousands of pages of police documents that the Center on Privacy & Technology has obtained through public records requests confirms up to $92 million spent by just 26 (of a possible 18,000) law enforcement agencies between 2001 and 2018. This is surely a serious undercount, as many agencies continue to shroud their purchase and use of face recognition in secrecy.
 
The risk of wrongful arrests and convictions alone should be enough to cast doubt on the value of acquiring and using these systems. Over the past few years advocates, academics, community organizers, and others have also amplified the myriad other risks police face recognition poses to privacy, free speech, and civil rights. What we haven’t seen is ample evidence that it should be used—that the millions of dollars spent, the risks of misidentification, and the threats to civil rights and liberties are justified somehow by the value of face recognition in maintaining public safety. This absence is particularly stark in light of growing calls to divest from over-militarized, unjust policing structures.
 
If Mr. Williams were the only person mistakenly arrested and charged because of a face recognition error, it would be one too many. But he’s not the only one. And unless we pass laws that permit this technology to be used only in ways consistent with our rights, or stop using the technology altogether, there will be others.
 
Clare Garvie is a senior associate with the Center on Privacy & Technology at Georgetown Law and co-author of The Perpetual Line-Up; America Under Watch; and Garbage In, Garbage Out, three reports about the use and misuse of face recognition technology by police in the United States.

Wednesday, June 24, 2020

[Featured image: Facial recognition software scanning a crowd.]
