Daniel Kahn Gillmor, Senior Staff Technologist, ACLU Speech, Privacy, and Technology Project

Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

There is widespread concern today about the use of generative AI and deepfakes to create fake videos that can manipulate and deceive people. Many are asking: is there any way that technology can help solve this problem by allowing us to confidently establish whether an image or video has been altered? It is not an easy task, but a number of techniques for doing so have been proposed. The most prominent is a system of “content authentication” supported by a number of big tech firms, which was discussed in the Bipartisan House Task Force Report on AI released this month. The ACLU has doubts about whether these techniques will be effective, and serious concerns about their potential harmful effects.

There are a variety of interesting techniques for detecting altered images (including frames from videos), such as statistical analyses of discontinuities in brightness, tone, and other pixel-level features. The problem is that any tool smart enough to identify the features of a video that are characteristic of fakes can probably also be used to erase those features and make a better fake. The result is an arms race between fakers and fake detectors that makes it hard to know whether an image has been maliciously tampered with. Some have predicted that efforts to identify AI-generated material by analyzing the content of that material are doomed. This has led to a number of efforts to prove the authenticity of digital media another way: with cryptography. In particular, many of these efforts are based on a technique called “digital signatures.”

Using Cryptography to Prove Authenticity

If you take a digital file — a photograph, video, book, or other piece of data — and digitally process or “sign” it with a secret cryptographic “key,” the output is a very large number that represents a digital signature. If you change a single bit in the file, the digital signature is invalidated. That is a powerful technique, because it lets you prove that two documents are identical — or not — down to every last one or zero, even in a file that has billions of bits, like a video.

Under what is known as public key cryptography, the secret “signing key” used to sign the file has a mathematically linked “verification key” that the signer (a camera manufacturer, say) publishes. That verification key only matches signatures that have been made with the corresponding signing key, so if the signature is valid, the verifier knows with ironclad mathematical certainty that the file was signed with the manufacturer’s signing key, and that not a single bit has been changed since.
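To make this concrete, here is a minimal sketch of signing and verifying a file in Python, using the open-source “cryptography” library with Ed25519 signatures (the file name is hypothetical):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()    # kept secret by the signer
    verification_key = signing_key.public_key()   # published for anyone to check

    data = open("video.mp4", "rb").read()         # hypothetical file
    signature = signing_key.sign(data)

    # Verification passes silently on the untouched file...
    verification_key.verify(signature, data)

    # ...but flipping even one bit invalidates the signature.
    tampered = bytearray(data)
    tampered[0] ^= 0x01
    try:
        verification_key.verify(signature, bytes(tampered))
    except InvalidSignature:
        print("Signature invalid: the file has been modified.")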

Given these techniques, many people have thought that if you can just digitally sign a photo or video when it’s taken (ideally in the camera itself) and store that digital signature somewhere where it can’t be lost or erased, like a blockchain, then later on you can prove that the imagery hasn’t been tampered with since it was created. Proponents want to extend these systems to cover editing as well as cameras, so that if someone adjusts an image using a photo or video editor the file’s provenance is retained along with a record of whatever changes were made to the original, provided “secure” software was used to make those changes.

For example, suppose you are standing on a corner and you see a police officer using force against someone. You take out your camera and begin recording. When the video is complete, the file is digitally signed using the secret signing key embedded deep within your camera’s chips by its manufacturer. You then go home and, before posting it online, use software to edit out a part of the video that identifies you. The manufacturer of the video editing software likewise has an embedded secret key that it uses to record the editing steps that you made, embed them in the file, and digitally sign the new file. Later, according to the concept, someone who sees your video online can use the manufacturers’ public verification keys to prove that your video came straight from the camera, and wasn’t altered in any way except for the editing steps you made. If the digital signatures were posted in a non-modifiable place like a blockchain, you might also be able to prove that the file was created at least as long ago as the signatures were placed in the public record.
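A simplified sketch of that chain of signatures might look like the following. This is an illustration of the general idea only, not of any particular standard, and every name and value in it is invented:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-ins for the secret keys the manufacturers would embed in
    # the camera hardware and the editing software.
    camera_key = Ed25519PrivateKey.generate()
    editor_key = Ed25519PrivateKey.generate()

    original_video = b"...raw footage straight from the sensor..."
    camera_signature = camera_key.sign(original_video)

    edited_video = b"...footage with the identifying segment cut out..."
    edit_record = {
        "edits": ["cut 00:41-00:52"],
        "camera_signature": camera_signature.hex(),
    }
    # The editing software signs the edited file together with the record
    # of changes, chaining its attestation onto the camera's.
    editor_signature = editor_key.sign(
        edited_video + json.dumps(edit_record, sort_keys=True).encode()
    )

A verifier holding the two manufacturers’ published verification keys could then check both signatures: the camera’s over the original footage, and the editor’s over the edited file plus the declared changes.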

Content Authentication Schemes Are Flawed

The ACLU is not convinced by these “content authentication” ideas. In fact, we’re worried that such a system could have pernicious effects on freedom.

The different varieties of these schemes for content authentication share similar flaws. One is that such schemes may amount to a technically-enforced oligopoly on journalistic media. In a world where these technologies are standard and expected, any media lacking such a credential would be flagged as “untrusted.” These schemes establish a set of cryptographic authorities that get to decide what is “trustworthy” or “authentic.” Imagine that you are a media consumer or newspaper editor in such a world. You receive a piece of media that has been digitally signed by an upstart image editing program that a creative kid wrote at home. How do you know whether you can trust that kid’s signature — that they’ll only use it to sign authentic media, and that they’ll keep their private signing key secret so that others can’t digitally sign fake media with it?

The result is that you only end up trusting tightly controlled legacy platforms operated by big vendors like Adobe, Microsoft, and Apple. If this scheme works, you’ll only get the badge of journalistic authority if you use Microsoft or Adobe.

Furthermore, if “trusted” editing can only be done in cloud apps, or on devices under the full control of a company like Adobe, what happens to the privacy of the photographer or editor? If you have a recording of police brutality, for example, you may want to ask the police for their story about what happened before you reveal your media, to determine whether the police will lie. But if you edit your media on a platform controlled by a company that regularly gives in to law enforcement requests, the police might well get access to your media before you are willing to release it.

Locking down hardware and software chains may help authenticate some media, but it would not be good for freedom: it would severely constrain who gets to easily share their stories and lived experiences. If you live in a developing country or a low-income neighborhood in the U.S., for example, and don’t have or can’t afford access to the latest authentication-enabled devices and editing tools, will you find that your video of the authorities carrying out abuses is dismissed as untrusted?

It’s not even certain that these schemes would work to prevent an untrustworthy piece of media from being marked as “trusted.” Even a locked-down technology chain can fail against a dedicated adversary. For example:

  • Sensors in the camera could be tricked, for example by spoofing GPS signals to make the “secure” hardware attest that the location where photography took place was somewhere other than where it really was.
  • Secret signing keys could be extracted from “secure” camera hardware. Once the keys are extracted, they can be used to create signatures over data that did not actually originate with that camera, but can still be verified with the corresponding verification key.
  • Editing tools or cloud-based editing platforms could potentially be tricked into signing material that they didn't intend to sign, either by hacks on the services or infrastructure that support those tools, or by exploitation of vulnerabilities in the tools themselves.
  • Synthetic data could be laundered through the “analog hole.” For example, a malicious actor could generate a fake video, which would not have any provenance information, play it back on a high-resolution monitor, and then set up an authentication-capable camera so that the monitor fills the camera’s field of view and hit “record.” The video produced by the camera would then carry “authentic” provenance information, even though the scene itself never existed outside of the screen.
  • Cryptographic signature schemes have repeatedly proven to be far less secure than people think, whether because of implementation problems or because of how humans interpret the signatures.

Another commonly proposed approach to helping people identify the provenance of digital files is the opposite of the scheme described above: instead of trying to establish proof that content is unmodified, establish proof that modified content has been modified. To do this, these schemes would require every AI image-creation tool to register all “non-authentic” photos and videos with a digital signature or a watermark. People could then check whether a photo was created by AI rather than a camera.

There are numerous problems with this concept. People can strip digital signatures, evade media comparison, or remove watermarks by changing parts of the media. They can create a fake photo manually in image editing software like Photoshop, or with their own AI, which is likely to become increasingly possible as AI technology is democratized. It’s also unclear how you could force every large corporate AI image generator to participate in such a scheme.
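The fragility of the simplest version of such a registry is easy to demonstrate. The sketch below uses an exact cryptographic hash as the registered “fingerprint”; real proposals use watermarks or perceptual hashes designed to survive small edits, but those face the same arms race described above:

    import hashlib

    # A hypothetical public registry that AI generators would be
    # required to report their outputs to.
    ai_registry = set()

    def register_ai_output(data: bytes) -> None:
        ai_registry.add(hashlib.sha256(data).hexdigest())

    def is_registered_ai(data: bytes) -> bool:
        return hashlib.sha256(data).hexdigest() in ai_registry

    fake_image = b"...AI-generated image bytes..."
    register_ai_output(fake_image)
    print(is_registered_ai(fake_image))   # True

    # Changing a single byte defeats the lookup entirely.
    evaded = b"X" + fake_image[1:]
    print(is_registered_ai(evaded))       # False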

A Human Problem, Not a Technology Problem

Ultimately, no digital provenance mechanism will solve the problem of false and misleading content, disinformation, or the fact that a certain proportion of the population is deceived by it. Even content that has been formally authenticated under such a scheme can be used to warp perceptions of reality. No such scheme will control how people decide what is filmed or photographed, what media is released, and how it is edited and framed. Choosing focus and framing to highlight the most important parts is the ancient essence of storytelling.

The believability of digital media will most likely continue to rely on the same factors that storytelling always has: social context. What we have always done with digital media, as with so many other things, is judge the authenticity of images based on the totality of the human circumstances surrounding them. Where did the media come from? Who posted it, or is otherwise presenting it, and when? What is their credibility; do they have any incentive to falsify it? How fundamentally believable is the content of the photo or video? Is anybody disputing its authenticity? The fact that many people are bad at making such judgments is not a problem that technology can solve.

Photo-editing software, such as Photoshop, has been with us for decades, yet newspapers still print photographs on their front page, and prosecutors and defense counsel still use them in trials, largely because people rely on social factors such as these. It is far from clear that the expansion of democratized software for making fakes from still photos to videos will fundamentally change this dynamic — or that technology can replace the complex networks of social trust and judgment by which we judge the authenticity of most media.

Voters hit with deepfakes for the first time (such as a fake President Joe Biden telling people not to vote in a Republican primary) may well fall for such a trick. But they will only encounter such a trick for the first time once. After that they will begin to adjust. Much of the hand-wringing about deepfakes fails to account for the fact that people can and will take the new reality into account in judging what they see and hear.

If many people continue to be deceived by such tricks in the future, as a certain number are now, then a better solution to such a problem would be increased investments in public education and media literacy. No technological scheme will fix the age-old problem of some people falling for propaganda and disinformation, or replace the human factors that are the real cure.


Johanna Silver, she/her/hers, Digital Producer, ACLU

Kellen Zeng

Madeleine Wren

On election night, Americans across the country were focused on one number: 270.

It’s a well-known – and hotly contested – fact that in American politics, presidential candidates must win 270 Electoral College votes to secure the White House. But while news media rush to project which candidate has won the most Electoral College votes on or around Election Day, the Electoral College’s work neither begins nor ends then. Instead, electors cast their official votes on the first Tuesday after the second Wednesday in December following the presidential election, which this election cycle falls on December 17.
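That calendar rule is easy to check in code. A short sketch using Python’s standard datetime module (the rule was moved from a Monday to a Tuesday by the Electoral Count Reform Act of 2022):

    import datetime

    def electoral_college_meeting(year: int) -> datetime.date:
        # Find the second Wednesday in December...
        d = datetime.date(year, 12, 1)
        while d.weekday() != 2:              # 2 = Wednesday
            d += datetime.timedelta(days=1)
        d += datetime.timedelta(days=7)
        # ...then the first Tuesday after it.
        d += datetime.timedelta(days=1)
        while d.weekday() != 1:              # 1 = Tuesday
            d += datetime.timedelta(days=1)
        return d

    print(electoral_college_meeting(2024))   # 2024-12-17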

At the ACLU, we have long argued that the Electoral College is an antiquated and undemocratic process for choosing the highest elected offices in our nation. So why do we have the Electoral College? What does it really do and do we actually need it? The ACLU explains.

The Electoral College was Created as a Compromise

Though the term "Electoral College" doesn't appear in the Constitution, it was established during the Constitutional Convention of 1787 to address disagreements over how to select the president and vice president. The original system, outlined in Article II of the Constitution, allowed each elector to cast two votes for president, with the candidate who received the most votes becoming president and the second place finisher becoming vice president. This led to complications when political rivals were elected to these roles, prompting the adoption of the 12th Amendment in 1804, which required electors to cast separate votes for each office.

At the time, regional divisions also influenced the College’s creation. Southern states, where non-voting enslaved people made up about one-third of the population, opposed a direct popular vote, which would have given their states less influence. After much debate, the convention eventually decided to establish the system we now refer to as the Electoral College, applying the three-fifths compromise, which counted enslaved people as three-fifths of a state’s population for apportionment purposes even though they remained prohibited from voting.

State Size Determines The Number of Electoral Votes a State Gets

The number of Electoral College votes allocated to each state is equal to its total representation in Congress: two votes for its Senators and a number corresponding to its members in the House of Representatives. This allocation is based on the Census, which determines congressional apportionment every 10 years. In total, the Electoral College consists of 538 members, including three votes for the District of Columbia, granted by the 23rd Amendment ratified in 1961. A simple majority of electoral votes (270 or more) is required to elect the president and vice president.
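The arithmetic behind those familiar numbers is simple. A quick sketch, using the current apportionment:

    # 435 House seats + 100 senators, plus 3 votes for the
    # District of Columbia under the 23rd Amendment.
    house_seats = 435
    senators = 100
    dc_votes = 3

    total_electors = house_seats + senators + dc_votes   # 538
    majority_needed = total_electors // 2 + 1            # 270
    print(total_electors, majority_needed)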

State Electors Aren’t Actually Elected

The process for selecting electors varies by state, but typically involves two steps. Political parties first nominate a slate of electors who pledge to support their party’s candidate before the general election, often selecting party loyalists, state officials, or individuals with ties to their candidate. On Election Day, voters choose their state’s electors indirectly by voting for their preferred presidential candidate. Most states follow a winner-take-all system in which the candidate with the most votes receives all of the electoral votes in the state. Maine and Nebraska use a district-based system, assigning two “at-large” electors to the overall statewide winner and one elector to the winner of the popular vote within each congressional district.

Electors Who Stray From the Popular Vote Could Face Fines

Electors are not bound by the Constitution to vote according to their state’s popular vote, but more than 30 states and Washington, D.C. have laws that legally obligate them to do so. Some states, such as South Carolina and Oklahoma, even impose criminal penalties or fines on electors who stray from the state’s popular vote.

Electors Don’t Vote On Election Day

Electors don't actually vote until December – more than a month after the election. During their meeting, electors formally cast separate votes for president and vice president, with the results recorded on Certificates of Vote that are sent to the vice president (as president of the Senate), relevant state officials, the local federal district court, and the National Archives. These certificates must reach Washington, D.C. by December 25 to be included in the official count.

The final count, however, doesn't occur until even later. This year, on January 6, during a joint session of Congress, lawmakers will count the electoral votes and declare the president and vice president. The president-elect then takes the oath of office two weeks later. If no candidate receives a majority of at least 270 votes, the election is decided by Congress, with the House selecting the president and the Senate choosing the vice president.

Why America Doesn’t Use the Popular Vote

The ACLU has opposed the Electoral College since 1969 for non-partisan reasons, including its undemocratic and unpredictable nature. Unfortunately, amending the Constitution to eliminate this antiquated system is difficult not just because amending the Constitution is hard – it would require at least 38 states to ratify a proposed change – but because the College’s supporters believe that it is a way to give small states power. States receive electoral votes equal to their congressional delegations, guaranteeing a minimum of three votes regardless of population size. This system elevates the influence of smaller states, as larger states would otherwise dominate national elections.

Why We Should Work to Eliminate the Electoral College

The Electoral College thwarts the fundamental principle of “one person, one vote” by awarding each state a number of electoral votes equal to its allocation of representatives plus its two senators. A voter in Wyoming thus has more than three times as much influence on the presidential election as a voter in more densely-populated California. That’s not to mention the racial and ethnic disparities in voting power that influence how electoral votes are allocated. One study calculated that Asian-Americans have barely more than half the voting power of white Americans because they tend to live in “safe” states — like Democratic-leaning New York and California and Republican-leaning Texas.
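That disparity is straightforward to compute. A rough sketch, using approximate 2020 Census populations and the post-2020 electoral vote allocations:

    # Approximate 2020 Census figures, for illustration only.
    wyoming    = {"population": 576_851,    "electoral_votes": 3}
    california = {"population": 39_538_223, "electoral_votes": 54}

    def votes_per_million(state: dict) -> float:
        return state["electoral_votes"] / state["population"] * 1_000_000

    ratio = votes_per_million(wyoming) / votes_per_million(california)
    print(f"A Wyoming voter has roughly {ratio:.1f}x the influence "
          "of a California voter.")   # roughly 3.8x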

Right now, the Electoral College harms democracy when it:

  • Nullifies the popular vote. In five presidential elections, the winner of the Electoral College has lost the popular vote. This means a presidential candidate who was not supported by a majority, or even a plurality, of the American people can still win the Electoral College and thus the election. Critics argue that this nullification of the popular vote also depresses turnout by leaving voters feeling that their votes don’t matter.
  • Shrouds Electors in Secrecy. In most states, there is very little public information about how electors are selected and who they are. The process is entirely determined by political parties and incorporates little voter input. Many states also do not have laws requiring electors to vote according to the popular vote in their state, risking the possibility of “faithless” electors who may vote contrary to the will of the voters.
  • Gives “swing states” an unfair advantage. The Electoral College system disproportionately benefits certain “swing states” in which the outcome of the election is uncertain. Presidential candidates from both political parties often invest significant resources and attention in these states, neglecting voters in states with a more predictable political leaning.

The Electoral College undermines the principle of “one person, one vote” by giving disproportionate influence to smaller states and swing states, allowing a candidate to potentially win the presidency without securing the popular vote. This outdated system fails to reflect the will of the people in a modern democracy, creating inequities in representation. Despite the uphill battle, amending the Constitution to abolish the Electoral College would ensure that every vote carries equal weight in presidential elections.


Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

Sports stadiums around the country have begun using face recognition to identify ticket holders, threatening to normalize a uniquely powerful surveillance technology that has already been used for abusive purposes. Worse, companies involved are already planning big expansions, raising the specter of a world where our faces become not just our ticket at sports stadiums, but a passport we’re forced to show across society.

Face recognition has been creeping into stadiums for roughly six years. In 2018, a security group used face recognition on fans at a Taylor Swift concert, and Madison Square Garden began scanning the faces of attendees, supposedly as a security measure. The Garden’s security rationale was undermined a few years later when the technology was used to identify and eject lawyers who happened to work for a large firm where another lawyer was suing The Garden’s billionaire owner, James Dolan. Dolan, whose companies own many arenas around the country, had previously used his control over them to permanently ban a Knicks fan who, upset over a losing streak, told Dolan he should sell the team. Face recognition technology not only allowed Dolan to expel the lawyers, but also provided a way for him to enforce the ban on the disgruntled Knicks fan.

These abuses rightly sparked controversy and focused national attention on the potential misuses of face recognition. But, beyond security and marketing deployments, it’s also worth paying attention to the use of the technology for ordinary access control.

In recent years many sports arenas and leagues have embraced face recognition. In 2018, baseball stadiums started using face recognition for admittance by partnering with the airport identity-verification company CLEAR. In August 2024, the NFL announced that it was deploying face recognition to control access to restricted areas in stadiums, such as offices, press boxes, and locker rooms. Meanwhile, a number of NFL teams have begun offering fans the option to use face recognition instead of tickets to enter stadiums. That involves sharing a photo of your face with the monopolistic company Ticketmaster; the photo is then compared against one taken when you enter the stadium. In September, an executive with the company Wicket told a D.C. conference that the company planned to provide face recognition ticketing services to more than 40 stadiums “across all the major leagues.” Other vendors are providing similar services.
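Under the hood, such systems typically reduce each photo of a face to a fixed-length numerical vector (a “faceprint,” or embedding) and compare the vector from the enrollment photo against the one captured at the gate. A toy sketch of the comparison step, with made-up four-element vectors standing in for the output of a real face-embedding model:

    import numpy as np

    # Stand-ins for faceprints; real systems run each photo through a
    # neural network that outputs a much longer embedding vector.
    enrollment = np.array([0.90, 0.10, 0.40, 0.20])   # selfie shared at signup
    at_gate    = np.array([0.85, 0.15, 0.38, 0.20])   # camera at the entrance

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    THRESHOLD = 0.95  # vendor-tuned: trades false accepts against false rejects
    match = cosine_similarity(enrollment, at_gate) >= THRESHOLD
    print("admit" if match else "reject")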

These deployments have drawn protests — not only from privacy advocates, but also from police officers who work stadiums. In Las Vegas, both the Las Vegas Metropolitan Police Department and the local police union objected to participating in the NFL’s secure-area face recognition program at Allegiant stadium, and subsequently refused to participate in it. The police union president told the Associated Press that “[Privacy is] what everybody’s concerned about — taking our personal information and sharing it with vendors and teams.”

Some have called stadiums “the future of surveillance.” Indeed, what is happening in stadiums shows signs of spreading elsewhere. Certainly, the corporate providers have big plans. An official with the Cleveland Browns, which was an early adopter of Wicket’s services and whose owner is a Wicket investor, spoke at the D.C. conference about how he’d like faces to become a unique identifier across a variety of services. “Ticketmaster runs our ticketing, we have a concessions partner that runs our concessions, we have a merchandising partner,” he said. “I would love to get to the point where a customer’s face or a fan’s face could be their identity to all these different platforms [and where] all this technology is tied to your identity.”

Sports companies may already be eager to see face recognition used as a unique identifier across the sporting world, but there’s little reason to think it will be contained to sports. The Wicket executive said his company has already expanded to offering face recognition as the means of entry to large conferences and that “some of the [sports] venues we’re in are starting to use us for other purposes beyond just sporting events.” He said they’re also starting to talk to venues that only host concerts.

All of this expansion raises the question: where will it end? Are we looking at a future where face recognition is used everywhere and we can’t escape it? It’s already being pushed as a replacement for credentials in airports by the Transportation Security Administration (TSA) and Customs and Border Protection (CBP) in what are the first government face-recognition checkpoints. It is also used in a few public housing facilities for access control as well as security, and it’s being used in the private sector to access various facilities, including some office and apartment buildings. Universal Studios has rolled out face-recognition access control at its theme parks in Florida. (“We have done this at our park in Beijing,” a Universal executive boasted to a reporter, seemingly oblivious to the irony.)

What happens if this technology starts appearing everywhere? The identity company Idemia, which sells face-recognition access control devices, promises “Frictionless Access Everywhere!” It and other vendors, which include a number of Chinese companies, mention banks, medical offices, hotels, public transportation, retail stores, schools, and restaurants. CLEAR has expanded its face recognition ID service from airports to Uber, LinkedIn, and tool rental at Home Depot, and wants to become the “universal identity platform” for the physical world.

If this happens, it will be in part because companies have embraced the efficiencies, the marketing advantages, and the security advantages that may be gained from switching to face recognition. They’ll claim it will shorten lines, though it’s unclear how much more efficiency, if any, face recognition provides over barcode tickets. If it is faster, what it’s really doing is increasing profits for businesses — after all, short lines and quick entrances are always possible if the venues will just pay enough workers to staff the entrances.

These efforts may also connect with another technology the ACLU is watching closely: digital driver’s licenses. A Los Angeles stadium executive, Christian Lau, said at the D.C. conference that “We’re also rolling out mobile driver’s licenses acceptance in California…. We’ll have a whole marketing campaign around it. It’s going to be really cool, and we’ll tie all of our systems together ultimately.”

What's Wrong with Face-Recognition Access Control

All this might be great for billionaires, team owners, and big companies, but none of it would be good for ordinary people. The ACLU is concerned that:

  • You can’t reset your face, which means you can’t reset your relationship with any of the entities that use it, who will never forget your history of transactions. That empowers them and disempowers you. We will lose control of when we’re being identified and checked and when we’re not.
  • The face you present to one company is the same face you present to all the others, which makes it easy for them to get together and compare notes on your behavior. When your face is your ID, your ID is plain for all to see. Plans to “tie all our systems together” should be heard as an ominous warning.
  • Centralized data stores always raise questions of data security and breaches. Faceprint databases are no exception, and a breach could result in fraud and other harm to the people whose data is being held. For example, stolen faceprints could be used to hijack pay-by-face apps and steal goods and services. This is particularly worrisome because again, while a credit card or account number can be changed after a breach, you are stuck with your face.
  • The more companies that have your face, the more they can use that face in other contexts and places for other purposes — such as security uses and blacklists. Not only can you not reset your face, you can’t realistically live your life covering it. As we’ve already seen at Madison Square Garden, it’s a small step from using your face to ticket you to using your face to ban you.

The use of face recognition for watchlists may be one of the most inevitable and consequential side effects of “your face is your ticket.” Watchlists mean false positives and failures to update, producing situations where people are mistaken for others who are wanted or banned. They likely mean abusive private blacklists as companies collude – often through third parties – to share lists of people they wish to exclude. There’s a long history of private and quasi-private watchlists being abused, going back to the labor battles of the early 20th century, when workers and organizers were blacklisted as “troublemakers” and could have trouble getting a job. Watchlists also likely mean a lack of due process over who is targeted. We’ve seen that lack of due process in most watchlist programs in recent decades, especially those run by the government, even though the government, unlike private companies, is bound by at least some checks, such as the Privacy Act and the Fourth Amendment.

As the stadium executive Lau noted, praising the advent of face recognition, “We can all thank Apple, because FaceID has gotten people so used to unlocking their phones, they don’t think about it.” But Apple designed their unlocking systems such that your facial image never leaves your phone. That means it’s a completely different ballgame than sharing your face with the network of billionaires and monopolists that control the sports world, and the business world beyond sports. There is a big difference between face recognition being used by you, and face recognition being used on you.

The danger is that uncritical mass acceptance of these technologies for some very slight convenience will usher in a world where they become inescapable. If you’re offered the option to use face recognition next time you’re seeking admittance somewhere, you should opt out — this is not a trend that will be good for you. Policymakers and companies should also say no to face recognition technology for access control. It’s just not that hard to use a ticket.

