Early this year, Detroit police arrested Robert Williams — a Black man living in a Detroit suburb — on his front lawn in front of his wife and two little daughters (ages 2 and 5). Robert was hauled off and locked up for nearly 30 hours. His crime? Face recognition software owned by Michigan State Police told the cops that Robert Williams was the watch thief they were on the hunt for.
 
There was just one problem: Face recognition technology can’t tell Black people apart. That includes Robert Williams, who has nothing in common with the suspect caught on the watch shop’s surveillance feed except that they are both large-framed Black men.

Michigan State Police Investigative Lead Report

But convinced they had their thief, Detroit police put Robert Williams’ driver’s license photo in a lineup with photos of other Black men and showed it to the shop security guard, who hadn’t even witnessed the alleged robbery firsthand. The security guard — based only on a review of a blurry surveillance image of the incident — claimed Robert was indeed the guy. With that patently insufficient “confirmation” in hand, the cops showed up at Robert’s house and handcuffed him in broad daylight in front of his own family.
 
It wasn’t until after spending a night in a cramped and filthy cell that Robert saw the surveillance image for himself. While interrogating Robert, an officer pointed to the image and asked if the man in the photo was him. Robert said it wasn’t, held the image next to his face, and said, “I hope you all don’t think all Black men look alike.”
 
One officer responded, “The computer must have gotten it wrong.” Robert was still held for several more hours before finally being released into the cold and rainy January night, where he had to wait about an hour on a street curb for his wife to come pick him up. The charges have since been dismissed.
 
The ACLU of Michigan is lodging a complaint against Detroit police, but the damage is done. Robert’s DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record. Robert’s wife, Melissa, was forced to explain to his boss why Robert wouldn’t show up to work the next day. Their daughters can never un-see their father being wrongly arrested and taken away — their first real experience with the police. The girls have even taken to playing games involving arresting people, and have accused Robert of stealing things from them.
 
As Robert puts it: “I never thought I’d have to explain to my daughters why daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?”
 
One should never have to. Lawmakers nationwide must stop law enforcement use of face recognition technology. This surveillance technology is dangerous when wrong, and it is dangerous when right.
 
First, as Robert’s experience painfully demonstrates, this technology clearly doesn’t work. Study after study has confirmed that face recognition technology is flawed and biased, with significantly higher error rates when used against people of color and women. And we have long warned that one false match can lead to an interrogation, arrest, and, especially for Black men like Robert, even a deadly police encounter. Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about.
 
That brings us to the second danger. This surveillance technology is often used in secret, without any oversight. Had Robert not heard a glib comment from the officer who was interrogating him, he likely never would have known that his ordeal stemmed from a false face recognition match. In fact, people are almost never told when face recognition has identified them as a suspect. The FBI reportedly used this technology hundreds of thousands of times — yet couldn’t even clearly answer whether it notified people arrested as a result of the technology. To make matters worse, law enforcement officials have stonewalled efforts to obtain documents about the government’s actions, ignoring a court order and refusing multiple requests for case files that would provide more information about the shoddy investigation that led to Robert’s arrest.
 
Third, Robert’s arrest demonstrates why claims that face recognition isn’t dangerous are far removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation. On the computer’s erroneous say-so, people can get ensnared in the Kafkaesque nightmare that is our criminal legal system. Every step the police take after an identification — such as plugging Robert’s driver’s license photo into a poorly executed and rigged photo lineup — is informed by the false identification and tainted by the belief that they already have the culprit. They just need the other parts of the puzzle to fit. Evidence to the contrary — like the fact that Robert looks markedly unlike the suspect, or that he was leaving work in a town 40 minutes from Detroit at the time of the robbery — is likely to be dismissed, devalued, or simply never sought in the first place. And when defense attorneys start to point out that parts of the puzzle don’t fit, you get what we got in Robert’s case: a stony wall of bureaucratic silence.
 
Fourth, fixing the technology’s flaws won’t erase its dangers. Today, the cops showed up at Robert’s house because the algorithm got it wrong. Tomorrow, it could be because a perfectly accurate algorithm identified him at a protest the government didn’t like or in a neighborhood in which someone didn’t think he belonged. To address police brutality, we need to address the technologies that exacerbate it too. When you add a racist and broken technology to a racist and broken criminal legal system, you get racist and broken outcomes. When you add a perfect technology to a broken and racist legal system, you only automate that system’s flaws and render it a more efficient tool of oppression.
 
It is now more urgent than ever for our lawmakers to stop law enforcement use of face recognition technology. What happened to the Williams family should not happen to another family. Our taxpayer dollars should not go toward surveillance technologies that can be abused to harm us, track us wherever we go, and turn us into suspects simply because we got a state ID.

Victoria Burton-Harris, Criminal Defense Attorney, McCaskey Law, PLC,
& Philip Mayor, Senior Staff Attorney, ACLU of Michigan

Date

Wednesday, June 24, 2020 - 7:00am


Clare Garvie, Georgetown Law’s Center on Privacy and Technology

In January, Michigan resident Robert Williams was arrested for shoplifting that had taken place at a downtown Detroit watch store a year earlier — a crime he did not commit. Police thought he was connected to the crime because of a face recognition search that found similarities between grainy surveillance footage of the theft and Mr. Williams’ driver’s license photo.
 
What makes this case unique is not that face recognition was used, or that it got it wrong. What makes it unique is that we actually know about it.
 
The sheer scope of police face recognition use in this country means that others have almost certainly been—and will continue to be—misidentified, if not arrested and charged for crimes they didn’t commit. At least one quarter of the 18,000 law enforcement agencies across the United States have access to a face recognition system. Over half of all American adults are—like Mr. Williams—in a driver’s license database searched using face recognition for criminal investigations (and in some states, for immigration enforcement too). States have spent millions of dollars on face recognition systems, some of which have been in place for years and are searched hundreds, if not thousands of times per month.
 
Florida, for example, implemented its police face recognition system in 2001. By 2016, and as much as $8 million later, local, state, and federal agencies were searching a database of 11 million mugshots and 22 million state driver’s license photos 8,000 times per month.
 
We have no idea how accurate these searches are, or how many lead to arrests and convictions. If we were to assume that misidentifications happened in only one out of a thousand searches, or 0.1% of the time, that would still amount to eight people implicated in a crime they didn’t commit every month — in Florida alone. But the Pinellas County Sheriff’s Office, which operates the system, does not conduct audits. Defendants are rarely, if ever, informed about the use of face recognition in their cases.
 
And yet these searches have real consequences.
 
No one knows this better than Willie Allen Lynch, arrested in 2015 for selling $50 worth of crack cocaine to two undercover Jacksonville officers. Like Mr. Williams in Michigan, a face recognition match implicated Mr. Lynch as a suspect and was the main evidence supporting his arrest. Unlike Mr. Williams, however, Mr. Lynch was convicted of the crime. He is currently imprisoned and serving an eight-year sentence. He maintains his innocence.
 
No one knows this better than Amara Majeed, who on April 25, 2019 woke up to the nightmare of having been falsely identified by a face recognition system as a suspect in a deadly terrorism attack in Sri Lanka. Sri Lankan authorities eventually corrected the mistake, but not before Ms. Majeed had received death threats targeting both herself and her family back home.
 
And no one knows this better than Robert Williams, who was arrested in front of his young children and detained for 30 hours for a crime to which he had no connection other than a passing resemblance, according to a face recognition system, to a person caught on poor quality surveillance footage.
 
We cannot account for the untold number of other people who have taken a plea bargain even though they were innocent, or those incarcerated for crimes they did not commit because a face recognition system thought they looked like the suspect. But the numbers suggest that what happened to Mr. Williams is part of a much bigger picture.
 
Despite the risks, face recognition continues to be purchased and deployed around the country. Within the month, the Detroit Police Department is set to request $220,000 from the City Council to renew its $1 million face recognition contract. An analysis of thousands of pages of police documents that the Center on Privacy & Technology has obtained through public records requests confirms $92 million spent by just 26 (of a possible 18,000) law enforcement agencies between 2001 and 2018. This is surely a serious undercount, as many agencies continue to shroud their purchase and use of face recognition in secrecy.
 
The risk of wrongful arrests and convictions alone should be enough to cast doubt on the value of acquiring and using these systems. Over the past few years advocates, academics, community organizers, and others have also amplified the myriad other risks police face recognition poses to privacy, free speech, and civil rights. What we haven’t seen is ample evidence that it should be used—that the millions of dollars spent, the risks of misidentification, and the threats to civil rights and liberties are justified somehow by the value of face recognition in maintaining public safety. This absence is particularly stark in light of growing calls to divest from over-militarized, unjust policing structures.
 
If Mr. Williams were the only person mistakenly arrested and charged because of a face recognition error, it would be one too many. But he’s not the only one. And unless we pass laws that permit this technology to be used only in ways consistent with our rights, or stop using the technology altogether, there will be others.
 
Clare Garvie is a senior associate with the Center on Privacy & Technology at Georgetown Law and co-author of The Perpetual Line-Up; America Under Watch; and Garbage In, Garbage Out, three reports about the use and misuse of face recognition technology by police in the United States.

Date

Wednesday, June 24, 2020 - 7:00am

Featured image: Facial recognition software scanning a crowd.

COVID-19 has ripped through nursing homes, psychiatric hospitals, and other congregate settings for people with disabilities. People living in these settings make up less than 1 percent of the U.S. population, but nearly 50 percent of COVID-19 deaths.
 
Some have said these deaths are inevitable. Some have even called for “weeding out the weak” in the name of herd immunity. But these deaths are far from inevitable. They arise from decades of indifference, invisibility, and deadly discrimination against the people who live and work in these settings. They also arise from our government’s abdication of its responsibility to regulate and monitor these segregated institutions.
 
Congregate settings for people with disabilities include nursing homes, psychiatric facilities, and intermediate care facilities for people with developmental and intellectual disabilities. Long before COVID-19, these facilities already had a poor track record with insufficient oversight, poor infection control, under-staffing, and inadequate training. Combined, these conditions created the powder keg. COVID-19 lit the match.  
 
How has this happened? This is the first in a series of ACLU blogs addressing this crisis, in which we will break down the causes at the institutional level and the personal effects on individuals such as staff and residents. The focus today is on the U.S. Department of Health and Human Services (HHS) and its agency, the Centers for Medicare & Medicaid Services (CMS). Together, HHS and CMS are charged with regulating and monitoring the vast majority of the institutions where we have warehoused people with disabilities. HHS is responsible for the primary funding and for ensuring the safety of people in these facilities. And it has failed miserably in the age of COVID-19.
 
On January 31, 2020, HHS declared a national public health emergency to respond to COVID-19. As a primary response to the pandemic, all of our medical and political leaders demanded social distancing. We closed schools and dormitories, required employees to work from home, and shuttered bars, restaurants, and ballparks. But we did not extend this disease prevention tactic to nursing homes, psychiatric hospitals, and developmental disability facilities. In fact, HHS has done the opposite. It has instructed nursing homes to take new patients without first confirming that they are not infected with COVID-19, and it has waived regulations that help divert people from entering institutions.
 
HHS has mechanisms at its disposal to reduce the overcrowding and dangerous conditions in these institutions. It can increase its funding for Home and Community Based Services and community mental health services, so people can stay in their own homes to get support. It can encourage states to advertise a provision allowing family members — so many of whom are sheltering in place without work — to take their relatives out of nursing homes and get paid to provide their care. And it can strengthen the discharge planning process so that those who wish to return to the community can do so. But it has failed on all counts.
 
HHS also has obligations to step up infection control and safety for the people who cannot yet leave these institutions. But it has not required states to prioritize personal protective equipment (PPE) or testing for staff or residents, and it has failed to increase the consequences for facilities that violate infection prevention measures. As a result, these institutions, rather than being havens from infection, are ‘death pits’ — among the most dangerous places in the country during this pandemic.
 
And finally, HHS should provide transparency, so that individuals and families can decide for themselves whether to enter — or stay — in an institution. Instead, more than four months passed before HHS started to require nursing homes to publicly report COVID-19 infection and death rates. And even this is incomplete — as nursing homes can choose not to report deaths before May 8, and other congregate settings — such as psychiatric hospitals, group homes, and institutions for people with intellectual and developmental disabilities — have no reporting obligations at all. 
 
Yesterday marked the twenty-first anniversary of Olmstead v. L.C., the landmark Supreme Court decision that recognized that “unjustified institutional isolation of persons with disabilities is a form of discrimination.” The court went on to observe that institutional confinement limits every part of a person’s life, and that such confinement “perpetuates unwarranted assumptions that persons so isolated are incapable or unworthy of participating in community life.”
 
Today, we filed a petition calling on HHS and its agencies to meet their obligations under Olmstead and under federal law. We are asking HHS to get people out of institutions as quickly and safely as possible, to provide genuine infection prevention and control measures for those who remain, and to provide true transparency as to who is living, working, and dying in these institutions. 
 
HHS must respond. Collectively, we have much more to do. As a society, we must reckon with our relentless marginalization and de-prioritization of people with disabilities and the people who support them. We must look at the tens of thousands of deaths inside congregate care settings as a collective, systemic tragedy. These victims of COVID-19 are mothers, fathers, brothers, sisters, grandmothers, grandfathers — all of us. We must end the disregard and discrimination that took their lives and that threatens — if we do not act quickly — to take many more.

Susan Mizner, Director, Disability Rights Project

Date

Tuesday, June 23, 2020 - 3:15pm
