Patrick Toomey, Deputy Director, ACLU National Security Project

The U.S. government is embarking on an all-out sprint to develop and deploy artificial intelligence in the name of national security, but its plans for protecting civil rights and civil liberties have barely taken shape. Based on a sweeping new report by a congressionally mandated commission, it’s clear that U.S. intelligence agencies and the military are seeking to integrate AI into some of the government’s most profound decisions: who it surveils, who it adds to government watchlists, who it labels a “risk” to national security, and even who it targets using lethal weapons.

In many of these areas, the deployment of AI already appears to be well underway. But we know next to nothing about the specific systems that agencies like the FBI, Department of Homeland Security, CIA, and National Security Agency are using, and even less about the safeguards that exist — if any.

That’s why the ACLU is filing a Freedom of Information Act (FOIA) request today seeking information about the types of AI tools intelligence agencies are deploying, what rules constrain their use of AI, and what dangers these systems pose to equality, due process, privacy, and free expression.

Earlier this month, the National Security Commission on Artificial Intelligence issued its final report, outlining a national strategy to meet the opportunities and challenges posed by AI. The commission — composed of technologists, business leaders, and academic experts — spent more than two years examining how AI could impact national security. It describes AI as “a constellation of technologies” that “solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; and technologies that may learn and act autonomously whether in the form of software agents or embodied robots.” AI systems are increasingly used to make decisions, recommendations, classifications, and predictions that impact Americans and people abroad as we all go about our daily lives.

The report urges the federal government — and especially intelligence agencies — to continue rapidly developing and deploying AI systems for a wide range of purposes. Those purposes include conducting surveillance, exploiting social media information and biometric data, performing intelligence analysis, countering the spread of disinformation via the internet, and predicting threats. The report notes that individual intelligence agencies have already made progress toward these goals, and it calls for “ubiquitous AI integration in each stage of the intelligence cycle” by 2025.

While artificial intelligence may promise certain benefits for national security — improving the speed of some tasks and augmenting human judgment or analysis in others — these systems also pose undeniable risks to civil rights and civil liberties.

Of particular concern is the way AI systems can be biased against people of color, women, and marginalized communities, and may be used to automate, expand, or legitimize discriminatory government conduct. AI systems may replicate biases embedded in the data used to train them, and they may have higher error rates when applied to people of color, women, and marginalized communities because of other flaws in the underlying data or in the algorithms themselves. In addition, AI may be used to guide or even supercharge government activities that have long been used to unfairly and wrongly scrutinize communities of color — including intrusive surveillance, investigative questioning, detention, and watchlisting.

The commission’s report acknowledges many of these dangers and makes a number of useful recommendations, like mandatory civil rights assessments, independent third-party testing, and the creation of robust redress mechanisms. But ultimately the report prioritizes the deployment of AI, which it says must be “immediate,” over the adoption of strong safeguards. The commission should have gone further and insisted that the government establish critical civil rights protections now, at the same time that these systems are being widely deployed by intelligence agencies and the military.

One threshold problem is that, when it comes to AI, even basic transparency is lacking. In June 2020, the Office of the Director of National Intelligence released its Artificial Intelligence Framework for the Intelligence Community — and identified “transparency” as one of the framework’s core principles. But there is almost nothing to show for it. The public does not have even basic information about the AI tools that are being developed by the intelligence agencies, despite their potential to harm Americans and people abroad. Nor is it clear what concrete rules, if any, these agencies have adopted to guard against the misuse of AI in the name of national security.

Our new FOIA request aims to shed light on these questions. In the meantime, the work of fashioning baseline AI protections must move ahead. If the development of AI systems for national security purposes is an urgent priority for the country, then the adoption of critical safeguards by Congress and the executive branch is just as urgent. We cannot wait until dangerous systems have already become entrenched.

Date

Friday, March 26, 2021 - 12:30pm

Featured image

Face recognition and personal identification technologies in street surveillance cameras covering people's faces.


Teaser subhead

Our FOIA request seeks to uncover information about what types of AI tools intelligence agencies are deploying, what rules constrain their use, and what dangers these systems pose to privacy and due process.

Vera Eidelman, Staff Attorney, ACLU Speech, Privacy, and Technology Project

Sara Rose, Senior Staff Attorney, ACLU of Pennsylvania

B.L. was 14 years old when she posted eight words on Snapchat that got her kicked off her school’s cheerleading team. She never imagined that four years later, her snap would be the subject of a U.S. Supreme Court case.

While hanging out with a friend at a convenience store on a Saturday afternoon, B.L., our client and a high school cheerleader who hadn’t made varsity, posted “Fuck school fuck cheer fuck softball fuck everything” on Snapchat. The words were superimposed over a photo showing B.L. and her friend with their middle fingers raised. The snap disappeared 24 hours later, long before school resumed. Yet, her school responded by kicking B.L. off the cheerleading team for an entire year. Although B.L.’s snap may seem trivial, the stakes could not be higher. Next month, the U.S. Supreme Court will hear arguments in B.L.’s case, and the decision could alter the free speech rights of millions of students and young people across the nation.

The court’s decision in this case, B.L. v. Mahanoy Area School District, will define the scope of young people’s free speech rights whenever they are outside of school — whether they’re marching at a weekend protest or posting on social media — and determine whether schools have the right to punish students for speech and expression in these out-of-school contexts. Today, the ACLU, the ACLU of Pennsylvania, and Schnader Harrison Segal & Lewis LLP filed a brief arguing that outside of school, young people should have every right to express themselves and voice their opinion without worrying if their school will punish them for it.

Fifty years ago, the court ruled that students do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.” However, under current law, school administrators can discipline students for speech inside school that is deemed likely to be “disruptive” or that interferes with the rights of others. That is not the standard that should apply once young people leave the school or a school-sponsored activity. At that point, they should be free to speak without fear that a principal or school administrator will punish them if they find their speech “disruptive.” The question before the court in this case is what happens beyond school — do young people keep their full free speech rights when they are off campus, or are they always subject to having their expression policed according to the lower protection they have as students in school?

Giving school officials power to monitor and punish off-campus speech just because they deem it “disruptive” would put an unprecedented limit on the free speech rights of students and all young people. Students’ off-campus speech can be punished if it threatens violence or engages in harassment or bullying, much like adult speech can be. But extending the in-school standard outside of school could lead to schools preventing young people and students from criticizing school policies, raising important concerns about racist, sexist, xenophobic, homophobic or just plain inappropriate behavior by school staff or other students, talking about religion, making a joke, or using profanity to emphasize frustration. Young people’s speech rights everywhere would be limited to what they can say in school.

In the past, schools have punished students for what they considered “disruptive” expressions inside school, including speech on racial justice and other social issues. For example, authorities have punished Latinx students for wearing shirts that read “We Are Not Criminals” to protest anti-immigrant legislation, religious students for speaking out against abortion or quoting Bible verses, and students for displaying a Black Lives Matter slide as a background during remote school.

These examples of discipline show how often school officials misuse their authority to police student speech. Giving them the power to police young people’s speech outside of school will have even worse results: Students won’t be able to discuss their views on racism, national policy, or religion even in their own time. And, as with most government authority, it’s not hard to imagine how that power will be applied in discriminatory ways. In fact, we’ve already seen schools misuse their power in troubling ways to punish young Black people for what they say outside of school, including for posting a photo of a memorial commemorating a girl’s deceased father, a photo of a boy “holding too much money,” rap music videos, and posts calling out racist slurs used by their white classmates.

Protecting students’ and young people’s full free speech rights when they are outside of school is vital. Taking away that safeguard would have a chilling effect on free speech, deterring young people from engaging in political, social, or religious expression out of fear of punishment. If schools could control young people’s speech rights outside of school like they do inside, young people could never express themselves freely. They’d learn that, in our society, saying anything controversial, unpopular, or critical of the established order can lead to punishment. That’s certainly not the lesson that schools, or the Supreme Court, should be teaching.

Date

Wednesday, March 24, 2021 - 5:00pm

Featured image

Snapchat stories on a smart phone.


Teaser subhead

The Supreme Court will hear arguments in a case that could change the free speech rights of millions of young people across the country.
