Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

A face recognition and video analytics company has created a product that provides a stark reminder of the power of these technologies and how they are likely to be used over time by law enforcement, powerful corporations, and others, if we as a society allow it.

The technology in question involves video search, which we described in our 2019 video analytics report. In the past, a video operator looking for something had to manually scroll through many hours of footage, but technology is increasingly automating such searches. In a presentation for subscribers of the surveillance research group IPVM, a company called Vintra demonstrated its technology for quickly searching through large stores of video footage.

The relevant three-minute part of the full presentation is worth watching. In it, a company executive searches through a month’s worth of video footage captured by around 10 fixed cameras, plus body cameras, in a transit center in San Jose, California. He feeds the system a photograph of a male subject, and the system runs a face recognition search through all the stored video from that month and produces 23 snapshots of the man from the center’s cameras. Clicking on any of the snapshots plays the video in which he was captured.

Already, that’s a demonstration of the stunning new power that surveillance camera systems create when combined with face recognition and today’s search capabilities.

But there’s more. The Vintra executive then presses a button called “Find associates.” He selects a time period — he uses 10 minutes but it could have been shorter or longer — and then runs a new search. This search yields snapshots of 154 other people, each of whom was seen on camera within 10 minutes of the subject.

In other words, this system allows face recognition to be used to track not just one person, but to map out people’s associations with each other.

Of the people spotted with the subject in the demo, 150 appeared on camera with him only once, and another three appeared with him twice. One man, however, had 14 “co-appearances” with the subject — clearly not coincidence, but a result of some association between the two men. The system displayed snapshots of the 14 co-appearances, and clicking on them instantly played the video of the two of them together.
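The window-based "co-appearance" search the demo describes is simple enough to sketch. The snippet below is our own illustration of the general technique, not Vintra's actual code or data format; the sighting log, names, and function are all hypothetical. Given a log of (timestamp, person) sightings produced by face recognition, it counts how often each other person appears within a set window of any sighting of the subject.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical sighting log: (timestamp, person_id) pairs that a face
# recognition system might emit as it watches a camera network.
sightings = [
    (datetime(2022, 1, 3, 9, 0), "subject"),
    (datetime(2022, 1, 3, 9, 4), "person_a"),
    (datetime(2022, 1, 3, 9, 30), "person_b"),
    (datetime(2022, 1, 4, 17, 15), "subject"),
    (datetime(2022, 1, 4, 17, 20), "person_a"),
]

def co_appearances(sightings, subject, window=timedelta(minutes=10)):
    """Count how often each other person is seen within `window`
    of any sighting of `subject`."""
    subject_times = [t for t, p in sightings if p == subject]
    counts = Counter()
    for t, p in sightings:
        if p == subject:
            continue
        if any(abs(t - st) <= window for st in subject_times):
            counts[p] += 1
    return counts

print(co_appearances(sightings, "subject"))
```

Here person_a registers two co-appearances (seen 4 and 5 minutes after the subject) while person_b, seen 30 minutes later, registers none. Repeating this count for every person in the log is all it takes to surface the kind of repeated association the demo flagged.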

The men could be anything from co-workers to commuting partners to lovers. Perhaps clicking through to view their joint appearances would shed light on which. But whatever the case, their association has now been revealed to the prying eyes of this camera network and its operators. One of Vintra’s mottos is “Know what the cameras know,” and if this product lives up to the demo, it’s a spookily accurate slogan, not least because it captures the way that AI is allowing video cameras to “wake up” — rendering them able not just to dumbly record video, but increasingly to understand what they’re seeing.

With this kind of technology, as the Vintra pitchman put it, “You can really start building out a network. You may have one guy, that showed up a few times, that you’re interested in — you can start looking at windows of time around him to see who else is there at the same time, and build out the networks of those people.”

Too many conversations about surveillance focus on how information could be used in isolation against a specific individual. But analytics is a powerful tool, and when information is collected not about just one suspect, but about large numbers of people, we often forget that such data can be cross-referenced to create maps of associations. I wrote this piece in 2013 to try to hammer home that often non-intuitive point, but maps of people’s associations (called “social network analysis”) have long been a product of mass surveillance. It has been done using cellphone data by the NSA, and by the U.S. military overseas using wide-area aerial surveillance, for example.

Now, face recognition and other analytic techniques appear to have brought social network analysis to video surveillance. And who knows what purposes such mining could be used for. The Vintra pitchman told his security audience that his product “will plug in to BI tools” — referring to Business Intelligence, a catch-all buzzword referring to non-security uses of data such as competitive research and marketing: “You may be using the cameras for security, but 94, 96 percent of the time there’s no event that security’s interested in — but there’s always information that the system is generating on those that you can plug into your BI.”

The bottom line is that when we see a video camera today, we need to update our intuitions about what it’s capable of. It may no longer be just collecting inert and unused video, but, especially if that camera is part of a larger network, the data it collects could be mined for insights about our lives across space and time. Communities and policymakers considering the installation of surveillance cameras — especially camera networks — should take heed.

Date

Tuesday, February 8, 2022 - 10:15am

Featured image

A photo of a security camera.


Related issues

Privacy

Teaser subhead

Video surveillance is becoming far more powerful than most people realize.

Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

Olga Akselrod, Senior Staff Attorney, Racial Justice Program, ACLU

Last week, news that the IRS has started requiring people who want to set up an account to go through a private company called ID.me created an uproar. What it means is that when dealing with the IRS you may be forced to run a time-consuming, inaccessible, and privacy-invasive gauntlet in the name of “identity verification.” And the IRS is just the latest government agency to place this company as a gatekeeper between itself and the public it’s supposed to serve. During the pandemic, at least 27 U.S. states started using ID.me’s service to verify identity for access to unemployment benefits. The company is also being used by other federal agencies such as the Department of Veterans Affairs and the Social Security Administration.

The Treasury Department is reportedly reconsidering the IRS contract, and we strongly urge it to abandon its plans to use ID.me, as should the states that are using the service. The ACLU has been working with some of our state affiliates to gather more information about the role of this company in the states via public records requests. We’re still gathering information, but what is already abundantly clear is that the system is beset with privacy and equity problems. We think there are three key problems with relying on ID.me that policymakers need to recognize.

1. The lack of accessible offline options

One problem is ID.me’s lack of accessibility and the barriers that creates for people on the wrong side of the digital divide. Using the service requires uploading government identification documents and taking a live selfie, which means you need an internet-connected device with a camera (a desktop computer without a webcam won’t do). If someone is unable to verify their identity through the automated process, as apparently occurs often, they must go through a live virtual interview with ID.me. That requires a strong enough internet connection to transmit live video, and time to spare. Users of the service report having to wait in a virtual queue for the interview for hours, only to be booted out of line when internet connections fail. This especially disadvantages Latinx, Black, Indigenous, and rural households, which are less likely to have reliable broadband access.

Even worse, many states using ID.me to vet unemployment insurance recipients don’t give people an alternative, offline means of doing business or provide extremely limited offline alternatives, forcing people to use ID.me if they want the government benefits they’re entitled to. It seems likely such problems will worsen as government agencies increasingly move business online.

We should make a commitment as a society to preserve offline ways of doing business. Just as people should have a right to physical and not just digital identity documents, so too should people have a right to do business by mail or in person. And people need not just offline alternatives, but meaningful ones — a single office across the state doesn’t cut it. The IRS and other government agencies have been doing business for more than a century without the need for high-bandwidth video chats; people should have alternatives today.

2. Outsourcing a core government function

Even if you do have reliable internet access, that’s no guarantee that the ID.me system will work. ID.me appears to be nearly universally reviled by users for its poor service and difficult verification process. But this is not a problem of one badly managed company; the problem is structural. A for-profit company is always going to short-change service when the people it serves aren’t its customers. A private company has an incentive not to do extra work even where that’s required for fairness and equity, and it’s exempt from the checks and balances that apply to government such as public records laws or privacy laws specifically applicable to government agencies.

Outsourcing this function also creates privacy problems. ID.me collects a rich stew of highly sensitive personal information about millions of Americans, including biometric data (face and voice prints), government documents, and things like your social security number, military service record, and data from “telecommunications networks, credit card bureaus, [and] financial institutions.” That information will be retained for up to seven and a half years after a person closes their account. The company promises it won’t share personal information with third parties — but reserves a number of exceptions, like voluntarily complying with law enforcement requests that are “not prohibited by law.” The company’s typically dense privacy policy makes it hard to know just what they consider themselves entitled to do with people’s data, and states may or may not choose to add additional privacy protections in their contracts with ID.me. But any pool of information that sensitive will always pose temptations for for-profit entities — and for malicious hackers who see a valuable honeypot ready to be raided.

Government agencies are also susceptible to hackers, of course, but there are great efforts underway to improve their security and they are subject to far more oversight than an up-and-coming Virginia tech company. The IRS already holds enormous troves of sensitive data about Americans and is constrained by strict laws ensuring their confidentiality. Companies like ID.me, meanwhile, are barely regulated at all.

3. Biased biometrics that aren’t subject to independent audits

Another big issue with ID.me is its use of face recognition, which the company uses to decide whether your selfie matches your identity documents. Face recognition is generally problematic; it is often inaccurate and has differential error rates by race and gender, which is unacceptable for a technology used for a public purpose. ID.me claims the face recognition algorithm it uses for these one-to-one identity verifications has “no detectable bias tied to skin type” — but we have no choice but to take the company’s word on this because it is not subject to the transparency requirements of a government agency.

In addition, after claiming for months that it used face recognition only for one-to-one image comparisons, the company last week admitted that it also performs “one-to-many” searches against some larger database of other photographs it holds. Even the CEO previously admitted that kind of search was “more complex and problematic.” The revelation raises numerous questions. How is that one-to-many facial recognition match being conducted? Are they doing a broader search for duplicate applicants among the millions of photos the company now holds (which would greatly increase error rates)? Or is the company maintaining some internal ban list of suspected wrongdoers (which would also raise due process questions)? Or something else? What are the error rates for these one-to-many searches? Do they differ by race and gender? And what standards is ID.me using to determine whether there is a match and when to alert law enforcement for what it thinks may be fraud? Law enforcement uses of one-to-many facial recognition have already led to people — especially Black people, for whom the technology is particularly inaccurate — being wrongly accused and arrested.
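One reason one-to-many search is "more complex and problematic" is purely statistical: even a tiny per-comparison false-match rate compounds across a large photo gallery. The sketch below is our own back-of-envelope illustration using a hypothetical error rate, not ID.me's actual numbers, and it assumes independent comparisons, which is a simplification.

```python
# Illustrative only: how the chance of at least one false match grows
# with gallery size, given a fixed per-comparison false-match rate.
def false_match_probability(per_comparison_rate: float, gallery_size: int) -> float:
    """Probability that a probe face falsely matches at least one of
    `gallery_size` stored photos, assuming independent comparisons."""
    return 1 - (1 - per_comparison_rate) ** gallery_size

# A hypothetical, optimistic per-comparison false-match rate.
RATE = 1e-5

for gallery in (1, 10_000, 10_000_000):
    print(f"{gallery:>12,} photos -> {false_match_probability(RATE, gallery):.4f}")
```

Against a single photo, the risk equals the per-comparison rate; against millions of stored photos, a false match becomes nearly certain. That is the arithmetic behind the concern that duplicate-applicant searches across the company's whole photo trove "would greatly increase error rates."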

People should not have to be subjected to a private company’s dragnet to access government services. More broadly, no biometric technology should be used unless its use in real-world conditions is subject to regular and open auditing by an independent party and found to be accurate, accessible, and free of bias. And the federal government shouldn’t give money to the states for purchasing biometric technology without that kind of auditing. Many of the states using ID.me for unemployment insurance have done so using federal funds.

There is no reason we can’t have unbiased identity-proofing systems that protect our privacy, lessen fraud, and make things easy for users. But such systems shouldn’t be run by private companies, shouldn’t be exclusively online, and need to be closely audited. The solution to the security problems created by moving online cannot be a discriminatory system that further erodes privacy and exacerbates the harms of the digital divide.

Date

Wednesday, February 2, 2022 - 4:00pm

Featured image

The U.S. Internal Revenue Service headquarters in Washington.

Related issues

Privacy

Teaser subhead

Forcing people to use private ID-verification to access tax accounts or other government services raises serious privacy and equity issues.
