Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

There’s been a lot of discussion recently over whether to create a new system of digital vaccine “passports.” But that conversation is just a small part of a much larger movement aimed at creating a digital identity system, including a push by companies, motor vehicle departments, and some state legislatures to digitize the identity card that most Americans carry: the driver’s license.

At first blush, the idea of a driver’s license we can keep on our phone might sound good. Digital is often touted as the “future,” and many people cast such a transition as inevitable. But digital is not always better — especially when systems are exclusively digital. There’s a reason that most jurisdictions have spurned electronic voting in favor of paper ballots, for example. And the transition from a plastic ID to a digital one is not straightforward: Along with opportunities, there are numerous problems that such a switch could create — especially if these systems are not designed perfectly.

Today we’re releasing a report looking at digital driver’s licenses and their implications for our civil liberties. While not categorically opposing the concept of a digital identity system, we outline the many pitfalls that such a system creates if not done right, and some ominous long-term implications that we need to guard against. We call on state legislatures to slow down before rushing to authorize digital licenses, to ask hard questions about such a system, and, if and when they decide to go ahead, to insist upon strong technological and policy measures to protect against the problems these systems are likely to create.

So what problems could digital driver’s licenses bring? First, they could increase the inequities of American life. Many people don’t have smartphones, including many from our most vulnerable communities. Studies have found that more than 40 percent of people over 65 and 25 percent of people who make less than $30,000 a year do not own a smartphone, for example, while people with disabilities and homeless people are also less likely to own one. If stores, government agencies, and others begin to favor those who have a digital ID or, worse, mandate them, those without phones would be left out in the cold. We believe that people must have a continuing “right to paper” — in other words, the right not to be forced as a legal or practical matter to use digital IDs.

Second, a poorly constructed digital identity system could be a privacy nightmare. Such a system could make it so easy to ask for people’s IDs that these demands proliferate until we’re automatically sharing our ID at every turn — including online. Without good privacy protections, digital IDs could also enable the centralized tracking of every place (again, online and off) that we present our ID. It is possible to build in technological privacy protections to ensure that can’t be done, and there’s no reason not to include them. No system is acceptable unless it includes those protections.

In some ways, a digital ID could improve privacy — for example, by allowing you to share only the data on your license that a verifier needs to see. If you’re over 21, a digital ID could let you prove that fact without needing to share your date of birth (or any other information). But if not done perfectly, digital IDs are likely to do more harm than good.

In the longer term, the digitization of our driver’s licenses could lead not only to an explosion in demands for those IDs (including by automated systems), but also to an explosion in the data that is stored in them. Digital ID boosters are already proclaiming that these IDs will store everything from health records to tax data to hunting, fishing, and gun licenses. And digital IDs could very easily become mandatory rather than remaining an optional accessory to the physical license.

How close are digital driver’s licenses to becoming real? A secretive international standards committee (which won’t reveal its members but which appears to be made up exclusively of corporate and government representatives) is currently putting the finishing touches on a proposed interoperable global standard for what it calls “mobile driver’s licenses,” or mDLs. The association representing U.S. DMVs is moving to implement that standard, as are federal agencies such as DHS and the TSA.

But the licenses we would get under this standard are not built to include airtight privacy protections using the latest cryptographic techniques. They are not built primarily to give individuals greater control over their information, but to advance the interests of major companies and government agencies in inescapably binding people to identity documents so they can be definitively identified online and off. It’s vital that we only accept a system with the strongest possible privacy protections, given all the potential ways that mDLs could expand.

In our new report we offer a list of recommendations for digital IDs. We call on state legislators to insist that the standards for digital driver’s licenses be refined until they are built around the most modern, decentralized, privacy-protective, and individual-empowering technology for IDs; to make sure that digital identification remains meaningfully voluntary and optional; to guarantee that police officers never get access to people’s phones during the identification process; and to ensure that businesses aren’t allowed to ask for people’s IDs when they don’t need to.

Identification is necessary sometimes, but it’s also an exercise in power. As a result, the design of our IDs is a very sensitive matter. A move to digital IDs is not a minor change but one that could drastically alter the role of identification in our society, increase inequality, and turn into a privacy nightmare. A digital identity system could prove just and worthwhile if it is done right. But such an outcome is far from guaranteed, and much work will have to be done to implement a digital identity system that improves individuals’ privacy rather than eroding it, and is built not to enclose individuals but to empower them.

Vera Eidelman, Staff Attorney, ACLU Speech, Privacy, and Technology Project

Adeline Lee, Former Paralegal, ACLU Speech, Privacy, and Technology Project

Fikayo Walter-Johnson, Former Paralegal, ACLU Speech, Privacy, and Technology Project

Since May 7, Al-Aqsa, one of the holiest sites for Muslims, and the neighborhood of Sheikh Jarrah in Jerusalem have been the sites of violent attacks against Palestinians, many of whom had come to the mosque to worship during Ramadan. During this violence, people took to Facebook and Instagram to post about what was happening to Palestinians using hashtags that referenced the mosque (#AlAqsa). But Instagram, owned by Facebook, blocked many of the posts through its content-filtering system because it inaccurately identified the hashtag as referencing a terrorist organization. While this may have been an error — though a demeaning and culturally ignorant one — its impact on users’ expression and the flow of information cannot be ignored. Palestinians and their supporters were silenced by one of the most powerful communications platforms in human history at a critical moment.

This wasn’t an isolated incident. In today’s world, a small handful of private corporations make some of the most important decisions regarding our online speech. Companies like Facebook, Twitter, and YouTube set rules for what kinds of content they’ll allow people to publish, defining what constitutes “hate speech,” “praise of terrorism,” and “fake news.” And they rely on algorithms to automatically flag certain words or images that appear to cross the lines. Once posts are removed or accounts are suspended, users — particularly those who don’t make headlines and cannot access backchannels — have too little recourse to appeal or get reinstated.

The major social media companies often get content moderation wrong — both because of their vague and sweeping rules, and because they make mistakes when applying those rules, often through blunt automated detection systems. Perfect content moderation may be impossible, but the major platforms can do better. They should give users more control, and respond to user experiences and reports, rather than rely so heavily on automated detection. They should also provide transparency, clarity, and appeals processes to regular users, and not just those who make headlines or have a personal connection at the company.

Here’s a rundown of some recent examples of content moderation gone wrong.

What Qualifies as Praise of “Terrorist” Groups?

Instagram’s ban on #AlAqsa is not the first time that the social media giants have misapplied their rules regarding praise or support of “terrorist” content. Over the summer, dozens of Tunisian, Syrian, and Palestinian activists and journalists covering human rights abuses and airstrikes on civilians complained that Facebook had deactivated their accounts pursuant to its policy against “praise, support, or representation of” terrorist groups. Facebook deleted at least 52 accounts belonging to Palestinian journalists and activists in a single day in May, and more than 60 belonging to Tunisian journalists and activists that same month.

Relatedly, in October 2020, Zoom, Facebook, and YouTube all refused to host San Francisco State University’s roundtable on Palestinian rights, “Whose Narratives? Gender, Justice and Resistance” — featuring Leila Khaled, a member of the Popular Front for the Liberation of Palestine. The companies decided to censor the roundtable after it became a target of a coordinated campaign by pro-Israel groups that disagree with Khaled’s political views. In this instance, the companies pointed to anti-terrorism laws, rather than their own policies, as justification.

But these decisions, too, highlight platforms’ role in curtailing speech in an increasingly online world — not to mention problems with the underlying material support laws. One fundamental problem is that governments — not to mention the social media giants — do not have an agreed-upon and transparent definition for terms like “terrorism,” “violent extremism,” or “extremism,” let alone “support” for them. As a result, rules regulating such content can be highly subjective and open the door to biased enforcement.

“Hate” Against Whom?

For years, Facebook moderators have treated the speech of women and people of color — including specifically when describing their experiences with sexual harassment and racism — differently from that of men and white people, pursuant to the company’s community standards regarding hateful speech.

In 2017, when Black women and white people posted the exact same content, Facebook only suspended Black women’s accounts.

Notwithstanding tweaks to its community standards and algorithms, that problem persists. This year, for example, Facebook removed posts from groups that users created as spaces to vent about sexism and racism, flagging “anything even remotely negative about men.” Meanwhile, “posts disparaging or even threatening women” stayed up. The company banned phrases like “men are trash,” “men are scum,” and even “I dislike men” — but posts like “women are trash” and “women are scum” were not removed, even if users reported the posts.

Similarly, in June 2020, at the height of protests against the police murder of George Floyd, Facebook’s automated system removed derogatory terms like “white trash” and “cracker” more often than slurs against Jewish, Black, and transgender individuals. Even after Facebook attempted to tweak its algorithmic enforcement to address this, Instagram still removed a post calling on others to “thank a Black woman for saving our country,” pursuant to the company’s “hate speech” guidelines.

As Vice and the Washington Post reported, these policies and the way they are applied have forced users to avoid certain words, writing “m3n” in place of “men” and “wipipo” in place of “white people.”

What Is “Sexually Suggestive” and What Is Socially Acceptable?

Similar problems have arisen with enforcement of the companies’ policies regarding “sexually suggestive” content and posts regarding “sexual solicitation.”

In 2019, users pushed back after Instagram repeatedly took down a topless photo of Nyome Nicholas-Williams, a fat Black woman, in which her arms covered her breasts. The users noted that nude images of skinny white women were not subjected to the same treatment and were less likely to be considered inherently sexual.

More recently, Facebook mislabeled an ad by Unsung Lilly, an independent band whose members are a same-sex couple, as “sexually explicit” because it pictured the two women with their foreheads touching. As an experiment, Unsung Lilly uploaded the same ad with two different photos: one a “nonromantic” photo of themselves, the other a photo of a heterosexual couple touching foreheads. Facebook approved both.

Along with queer users, sex workers, certified sex educators, and sexual wellness brands are also suffering under Instagram’s new community guidelines regarding “sexual solicitation.” These users report that posts containing flagged words like “sex” and “clitoris” have been removed from Instagram’s search function.

Censorship Decisions Are Arbitrary

Social media giants silence speech in arbitrary ways. For example, Facebook banned a recent college graduate for three days after he posted a rant criticizing those opposed to loan forgiveness and deeming America the “land of ignorance and greed.” The company also drew seemingly random lines regarding absentee voting, poll-watching, and COVID-19 during the 2020 presidential election, removing such posts as “If you vote by mail, you will get Covid!” and “My friends and I will be doing our own monitoring of the polls to make sure only the right people vote,” but permitting “If you vote by mail, be careful, you might catch Covid-19!” and “I heard people are disrupting going to the polls today. I’m not going to the polling station.”

A Path Forward

These examples highlight ongoing problems with the social media giants’ content moderation policies and practices — specifically, a lack of clarity in community standards, policies that cover too much speech, a disconnect between automated systems and actual user reports, and insufficient access to appeals. These problems have significant impacts in the offline world, altering discourse and limiting access to potentially critical information.

For these reasons, if they are to err, the major social media companies must err on the side of enabling expression. They should establish content policies that are viewpoint-neutral and that favor user control over censorship. And they should ensure that any expansion of content policies comes with a commensurate expansion of due process protections — clear rules, transparent processes, and the option of appeals — for all users. Because of the scale at which these platforms function, errors are inevitable, but there can be fewer of them, and even those that do occur need not be permanent.

Anonymous

I was shot multiple times by police and then unlawfully thrown in jail to cover up the excessive force violation. I am a 34-year-old Black man. I was charged with aggravated assault, but I never hurt anyone. I was hurt plenty, though. Police do fucked up shit and get away with it, and people need to hold them accountable to make society better as a whole. There’s no accountability when it comes to police or jail staff. In order for any society to function, there has to be accountability on both sides. What if somebody didn’t do nothing wrong and you’re covering it up? That means corruption just kills the system, everything that was meant to be good. I understand that officers risk their lives, they wake up every day and take on a challenge to make sure everyone is safe. That is an honorable act. But when you have people who lie or misuse their power, and another person helps them support that wrong action, that brings shame upon every good police officer. It brings shame on every department. In order for something to be built, something has to be destroyed, and this jail’s administration has to be destroyed. We’re too far along as a society to continue to be submerged in the wrongs and the corruptions, instead of to stand for what’s right.

Artwork from this piece by Oaklee Thiele, an artist and disability rights advocate.

After I got shot, those bullets shattered my foot and messed up my bowels. I’m still dealing with obstructions and I walk with a limp that only gets worse every day. I keep putting in sick calls; they don’t care. Since I’ve been inside, I’ve used a wheelchair, a walker, and then a cane. I still need the cane, but it disappeared when I went to solitary and I haven’t had it since. With the pain in my foot, getting around is really hard. I’ve stopped using my recreation hour, the only one I get out of my cell every day, because I’m afraid to get hurt even worse. Having a cell with grab bars in the bathroom would make me feel safer, but there’s only two of those on the pod and guys have to buy their way in. We don’t have any handicap showers, either. Once you go to jail, you get denied medical treatment. These are the people who are supposed to care for you. They take on a leadership role to care for you, but they don’t do it. It’s deception. That’s not a society that I want to live in or one that I think anyone else wants to live in.

There’s no difference between what the police did to me and what’s happening in jail: the non-caring, the covering up, the assaults, being put in the hole. It’s not a different society. I just came off a five-month stint in solitary. Let me tell you - it’s petrifying. The constant dimness, the blood and vomit covering the cells, the neglect and abuse from guards. It’s terrible to be going through all of that. Especially for someone with mental illness like me, the hole is no joke. I have PTSD, severe depression, antisocial disorder, anxiety, and paranoid schizophrenia. Everything about being in here just makes it worse. I went on a hunger strike a few months back to protest how bad things are, and they threw me in a suicide cell under a freezing cold vent to stop me from protesting. At one point, I was too sick to stand when the guards told me to and they tased me and shot me with pepperball bullets instead of calling medical. If you have a mental health crisis, they respond with force and shoot you with pepperballs. I haven’t had the chance to talk to a therapist since I’ve been in here, but I guess I’ll keep asking. I don’t know how I’m supposed to be “rehabilitated” like this.

Artwork from this piece by Oaklee Thiele, an artist and disability rights advocate.

It’s just one big system to eradicate everyone’s rights. If someone violated those rights and there’s evidence to support that, that person has to be held accountable. The only way we can address this problem is by coming together. It’s linked to the outside with police misconduct and what happened to Breonna Taylor. How long can we expect people to peacefully protest when their rights are violated? I wish people understood that jail isn’t really what they think it is; it’s a façade. This place isn’t meant to help you get better. When I was in solitary, I did ask for help but it never came. I was left in a cell, spitting up blood, laying in my own vomit, to die. If it wasn’t for certain medical staff who went against the norm and upheld their oath to provide adequate medical treatment despite officers telling them, “That’s not the way we do things here,” I wouldn’t be alive.

I’m not the only one who feels this way. I try to help others write their grievances and get some justice, but it just puts an even bigger target on my back. I won’t stop doing it, though. It’s sad that the people in charge of taking care of us abuse that power rather than help us like they should. It’s not nothing to play with. People in here are being beaten, abused, and left to die. We need people; we need you. Too many people see history as just that - in the past. They don’t realize that history is now.

Note: This piece was originally published in The Breaking Point Project.
