Crystal Grant, Former Technology Fellow, ACLU Speech, Privacy, and Technology Project

Newborn screening programs are a vital public health measure implemented in the U.S. and across the world, with about one third of babies born globally undergoing some screening. As part of this program in the U.S., nearly every baby born has blood drawn soon after birth, and that blood is tested for a panel of potentially life-threatening inherited disorders. Current newborn screening programs have been invaluable, both in lowering medical costs through early detection and intervention and in decreasing the toll of human suffering that comes from a late diagnosis. Unfortunately, innovation in this field is rapidly outpacing the law, leaving families vulnerable to privacy invasions.

In October 2022, a global consortium of scientists and other newborn DNA sequencing researchers convened to discuss a bold possibility for future care: Every baby born in the U.S. could have their full genome sequenced as an addition to existing newborn screening programs. This means that, in addition to drawing a baby’s blood and testing it for specific, known inherited disorders that can cause serious health problems at birth, doctors would have all of the baby’s DNA on record. The data currently derived from the blood samples is fairly limited and relevant only to the baby’s health at birth: the date, location, and time of birth; the mother’s name and address; the disorder being screened for; and a value indicating whether the baby is likely to have a particular hereditary disorder.

A paper strip marked with the blood of a newborn is screened to test for genetic illnesses. (Credit: Arno Burgi/picture-alliance/dpa/AP Images)


Police Access to Newborn DNA Sequencing Data

It doesn’t take much imagination to guess how this full genome sequencing could be abused. Last year, a public records lawsuit in New Jersey revealed a worrying new trend of police subpoenaing a newborn’s blood sample to implicate a child’s relative in a crime. Because of the lack of regulations surrounding DNA privacy, police in some states are able to access these samples, generate wildly detailed genetic profiles from them, and then use those profiles in criminal investigations. These samples, which were intended only to protect babies from life-threatening diseases, are instead being used by law enforcement to gather our highly personal and detailed genetic information, and to generate family trees dating back generations by uploading the DNA profile to consumer genetic databases operated by companies like GEDMatch.

Moreover, police departments’ systematic mistreatment of marginalized Americans may place them in greater danger of their genetic information being used against them and/or compelled from them. In an abhorrent practice that is currently the subject of a lawsuit, police in Orange County, California have been compelling genetic samples from people in exchange for dropping minor charges against them as part of a “Spit and Acquit” program. Additionally, police across the country have already constructed their own, unregulated genetic databases and have accessed medical samples of relatives — in one case using cells derived from a woman who had undergone a routine pap smear — all in the name of acquiring more detailed genetic data of people often not even accused of a crime.

If states expand their screenings to include the genome sequencing of newborns without new legal protections in place to prevent law enforcement access to this data, we will almost certainly see it abused. Widespread adoption of full genome sequencing would not only allow police to continue their current practice of acquiring highly sensitive genetic information as part of criminal investigations, but could streamline that practice.

Without ironclad protections, a database containing genetic sequences of every baby born in a state will become a one-stop-shop for police. This poses a tremendous cost to public health: If parents are concerned about the privacy of their child’s genetic data, they may hesitate to participate in screening programs or opt out entirely, putting their babies at risk. This makes state regulation of newborn screening programs — including the samples themselves, as well as any data derived from them — essential.


A Patchwork of State Protections for Blood Sample Data

Currently, states have differing policies on the length of time these blood samples are stored, with some storing samples for only a few months and others storing them indefinitely. Policies governing the data associated with the blood samples likewise vary from state to state: more than a quarter of states have no policy barring law enforcement access; some protect the blood sample but not the associated data; and few protect both.

In our previous blog post on this topic, we held up Iowa as a standard for other states to follow, but even Iowa’s law needs improvement in the face of proposals to add optional genome sequencing to newborn screening. Although the Iowa law explicitly bars law enforcement access to newborn blood samples, it offers no such protection to the data associated with those samples. This is particularly worrying because it means that even a state that has taken steps to prevent law enforcement abuse of this public health program will see its policies easily circumvented if and when newborn DNA sequencing is incorporated into newborn screening.

These expanded newborn screening programs will likely result in the data associated with the blood sample being more revealing than the sample itself. Some states have tried to address current privacy problems by destroying the blood samples or shortening the length of time they are stored. However, this would do little if testing is expanded to the full genome, because the data derived from these samples is not subject to the same rules. For this reason, states must act quickly to amend their newborn screening programs and clarify that legal protections cover both the blood sample itself and any data derived from it.


Lawmakers and Researchers Should Act to Ensure Privacy

The need for state legislators to act is clear, but newborn DNA sequencing researchers and whole genome sequencing companies must also do their part. While researchers are eager to enroll communities of color in their pilot studies to gather more, and more genetically diverse, data, it is unclear whether the researchers, their academic institutions, or the federally regulated Institutional Review Board (which approves the use of human participants in research studies) are requiring that participants be informed that the genetic information of their newborns may be legally accessed and misused by police.

Additionally, companies involved in the sequencing, data transfer, and/or storage of newborn DNA sequencing data must also agree not to voluntarily allow police access to this data. With researchers encouraging thousands of parents to enroll their child in pilot studies for sequencing at no financial cost, the cost of this new technology to their genetic privacy must be conveyed to research participants.

Some states, such as New Jersey, have proposed new bills aimed at limiting police’s ability to access newborn screening samples. While bills like these are a step in the right direction, they must also explicitly prohibit law enforcement from accessing both the sample itself and any data associated with the screening program. The newborn screening program is too vital a public health intervention to be hijacked by police’s constant search for new methods to access our most personal information. Just as this program may change as technology advances, so must the protections states put in place.


Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

Everywhere you look lately, people are discussing the potential negative uses and consequences of the AI-driven chatbot ChatGPT. Many are concerned about the potential for ChatGPT and other “large language models” (LLMs) to spread a fog of disinformation throughout our discourse, and to absorb the racism and other biases that permeate our culture and reflect them back at us in authoritative-sounding ways that only serve to amplify them. There are privacy concerns around the data that these models ingest from the internet and from users, and even problems with the models “defaming” people.

But there’s another consequence of this technology that may prove to be very significant: its use as a tool for surveillance. If ChatGPT can “understand” complex questions and generate complex answers, it stands to reason that it may be able to understand much of what is said in a wiretap or other eavesdropped conversation, and flag particular conversations that are “suspicious” or otherwise of interest for humans to act upon. That, in turn, could lead to an enormous scaling up of the number of communications that are meaningfully monitored.

To get a feel for the possibilities here, I asked ChatGPT some questions.

A Rudimentary Test Run Talking to ChatGPT

To start off, I asked the model, “How suspicious is someone who says, ‘I really hate the president’?” ChatGPT answered, “It is not necessarily suspicious for someone to express dislike or hatred for a political figure, such as the president. People have varying opinions and beliefs about political leaders, and expressing them is a normal part of political discourse and free speech.”

So far, so good. “However,” it continued, “if the person’s statement is accompanied by specific and credible threats of harm or violence towards the president or others … then it may be cause for concern. In general, it’s important to consider the context and tone of the statement, as well as any accompanying behavior, before making a judgment about its level of suspicion or potential threat.”

Pretty good. Next, I gave ChatGPT a list of statements and told it to rate how suspicious each one was on a scale of 1-10. Though it again issued reasonable-sounding caveats, it dutifully complied, returning a table of ratings for each statement.
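To make concrete how little engineering separates this kind of hand experiment from an automated pipeline, here is a minimal sketch in Python. It assumes the `openai` client library in its early-2023 `ChatCompletion` form and an API key in the environment; the statements, prompt wording, and 1-10 scale are my own illustrative choices, not any real monitoring product.

```python
# Minimal sketch only: automates the "rate each statement 1-10" experiment
# described above. Assumes the `openai` Python package (pre-1.0 API, as of
# early 2023) and an OPENAI_API_KEY environment variable. The statements
# and prompt wording are illustrative, not a real monitoring system.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

STATEMENTS = [
    "I really hate the president.",
    "The traffic today makes me want to explode.",
    "Let's meet at the airport at dawn.",
]

def suspiciousness(statement: str) -> str:
    """Ask the model to rate one statement on a 1-10 suspiciousness scale."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "Rate how suspicious the user's statement is on "
                           "a scale of 1-10. Reply with the number only.",
            },
            {"role": "user", "content": statement},
        ],
        temperature=0,  # keep the ratings as repeatable as possible
    )
    return response["choices"][0]["message"]["content"].strip()

for s in STATEMENTS:
    print(f"{suspiciousness(s):>3}  {s}")
```

The point is scale: swap the hard-coded list for a stream of transcripts or posts, and a loop like this becomes, functionally, the monitoring system the rest of this piece worries about.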

Even in this rudimentary little experiment we can see how an LLM like ChatGPT can not only write, but read and judge. The technology could be put to work as a lookout for statements that score highly by some measure — “suspiciousness” in my example, though one could attempt a variety of other monitoring projects, such as flagging “employees who are looking for a new job,” or “employees who have a positive attitude toward Edward Snowden.” (I ran a collection of published letters to the editor through ChatGPT, asking it to rate how positive each one was toward Snowden, and it was quite accurate.)

No Shortage of Potential Uses

There is a lot of demand for communications monitoring — by both government and the private sector, and covering not only private communications but public ones as well, such as social media posts.

In general, it is not constitutional for the government to monitor private communications without a warrant. Nor is it legal under our wiretapping laws for companies or individuals to do so. But there are plenty of exceptions. The National Security Agency collects communications en masse around the world and, despite its putative foreign focus, vast amounts of internet traffic entering and exiting the United States, including that of Americans. We believe this is unconstitutional, but our challenges have so far been dismissed on secrecy grounds. Companies also monitor their workers’ private communications when carried out on work-owned devices. (Financial companies can be required to do so.) Prisons monitor inmates’ phone calls, and call centers record their customers (“This call may be monitored…”).

When it comes to public communications, government agencies including the Department of Homeland Security and the FBI collect social media postings for wide-ranging purposes such as threat detection, the screening of travelers, and that perennial catch-all goal, “situational awareness.” Companies also sometimes search through their workers’ social media posts.

Currently, much of that monitoring is done through keyword searches, which flag the appearance of a particular word or words in a communications stream (aided in the case of oral conversations by rapidly improving speech-to-text engines). More sophisticated versions of keyword searches might look for the appearance of multiple words or their synonyms appearing near each other and try to use other surrounding words for context. Some corporate products for such monitoring claim to use “AI” (though that’s a typical marketing buzzword, and it’s often unclear what it means).
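For a sense of that pre-LLM baseline, here is a minimal sketch of a keyword scanner with a proximity rule. The watchlists and the ten-word window are made-up illustrations; real products’ rules vary and are rarely public.

```python
# Minimal sketch of a pre-LLM keyword scanner: flag a message only when a
# word from each of two watchlists appears within a few words of the other.
# The watchlists and window size are made-up illustrations, not the rules
# of any real monitoring product.
import re

WATCHLIST_A = {"attack", "destroy", "bomb"}
WATCHLIST_B = {"president", "airport", "building"}
WINDOW = 10  # maximum word distance between the two hits

def flag(message: str) -> bool:
    """Return True if words from both watchlists occur near each other."""
    words = re.findall(r"[a-z']+", message.lower())
    hits_a = [i for i, w in enumerate(words) if w in WATCHLIST_A]
    hits_b = [i for i, w in enumerate(words) if w in WATCHLIST_B]
    return any(abs(i - j) <= WINDOW for i in hits_a for j in hits_b)

# "destroy" hits list A, but nothing here is on list B, so no flag:
print(flag("I'm going to destroy America and dig Marilyn Monroe up!"))  # False
# "attack" and "airport" fall within the window, so this is flagged:
print(flag("They plan to attack the airport at dawn."))                 # True
```

A scanner like this has no notion of slang, sarcasm, or fiction; it only measures word distance. That brittleness is exactly what makes an LLM’s apparent contextual understanding look attractive to would-be monitors.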

In any case, LLMs appear to have brought the potential for automated contextual understanding to a whole new level. We don’t know how sophisticated the NSA’s automated monitoring systems have become, though in general the private sector has often outpaced even the best-funded government agencies when it comes to innovations like this. But even if the NSA already has some form of LLM, the technology is now out in the open, where everybody else can use it to interpret language in far more sophisticated ways than was previously possible.

Accuracy and Unfairness Remain Core Concerns

The amazing performance of LLMs does not mean they will be accurate. My little experiment above shows that LLMs are likely to interpret statements that have perfectly innocent meanings — that refer to fiction or reflect sarcasm, hyperbole, or metaphor — as highly suspicious. More extensive experiments would have to be done to test the ability of an LLM to judge the suspiciousness of longer statements, but at the end of the day, these systems still work by stringing words together in patterns that reflect the oceans of data fed to them; what they lack is a mental model of the world, with all its complexities and nuance, which is necessary to properly interpret complex texts. They are likely to make big errors.

Some may argue that if LLMs are more sophisticated than something like a keyword scanner, they will do less harm as eavesdroppers because of their greater ability to take account of context, which will make them better able to flag only conversations that are, in fact, truly suspicious.

But it’s not entirely clear whether more or fewer innocent people would be flagged as an AI eavesdropper gets smarter. It’s true that by recognizing context, LLMs may skip over many uses of keywords that would be reflexively flagged by even the most sophisticated keyword scanner. At the same time, however, they may flag combinations of mundane words, such as “fertilizer” and “truck,” that a keyword scanner would ignore, because of their greater sensitivity to context: a recognition that fertilizer can be used to make truck bombs, say, combined with a received belief that people with radical views are more likely to build such bombs.

In short, an LLM may make more sophisticated mistakes, but it may make just as many. And the very sophistication of the model’s judgments may lead human reviewers to take an AI warning much more seriously, perhaps subjecting the speaker to investigation and privacy invasions. The racism that the models absorb from the larger culture could also have very real-world consequences. Then there’s ChatGPT’s propensity for making stuff up; it’s unclear how that might play in.

As one tweet put it: “Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious.”

But however effective or problematic LLMs prove to be as eavesdroppers’ aides — which is likely to vary by context — what we can be sure of is that all kinds of institutions are going to be trying it out.

A Question of Scale

ChatGPT and its ilk may be unreliable, but humans are also plenty capable of being erratic, ignorant of context, and generally stupid. The last statement I asked ChatGPT to rate was a 2012 tweet from a 26-year-old British man, Leigh Van Bryan, who was excited about his trip to Los Angeles with a friend. Upon arrival in Los Angeles the two were detained by Homeland Security, held in jail for 12 hours, and blocked from entering the United States despite their attempts to explain that “destroy” was British slang for “party in.” Van Bryan had also exuberantly tweeted that he was going to be “diggin’ Marilyn Monroe up” on Hollywood Boulevard (though she is not buried there), a reference to a line from the TV show “Family Guy.” Literal-minded federal agents searched the pair’s suitcases looking for shovels.

Regardless of relative intelligence levels, the biggest harm that might come from the use of LLMs in surveillance may simply be an expansion in the amount of surveillance that they bring about. Whether by humans or computers, attempts to interpret and search masses of communications are inevitably erratic and overbroad — we have already seen this in corporate social media content-regulation efforts. But if a lot more communications are being meaningfully monitored because humans perceive LLMs as better at it, many more people will be flagged and potentially hurt.

Hiring humans to review communications is expensive, and they’re distractible and easily bored, especially when required to pore over large amounts of ordinary activity looking for very rare events. If only as a matter of economics, AI agents would be able to ingest, scrutinize, and judge far more social media postings, emails, and audio transcripts than humans can. Not only will that likely result in a higher volume of the kinds of monitoring that are already happening, but it will likely encourage an expansion in the parties that are doing it, and the purposes for which they do it. A company that has never considered monitoring its employees’ internet postings may decide to do so, for example, if it’s cheap and easy, and it doesn’t seem to generate too many false alarms. Or it might move from searching for signs of reputational damage to intelligence on which employees are thinking of leaving or are not dedicated to the company. Because why not? It’s all so easy to do. Any institution that thinks it can increase its power and control by using LLMs for surveillance will likely do so.

No matter how smart LLMs may become, if they result in an expansion of surveillance — for purposes both serious and trivial — they will produce far more misunderstandings and false alarms. And that, in turn, would create chilling effects that affect everyone. As stories of various institutions’ “successes” in flagging suspicious communications emerge — not to mention their mistakes — we would all begin to feel the growing presence of machines listening in, and, in certain contexts, begin to subtly or not-so-subtly censor ourselves lest we cause one of those AI minders to flag us. In this, LLMs may have the same effect with regard to communications that video analytics may have when it comes to video cameras.

We need to recognize that large-scale machine surveillance is likely coming our way. Whether the machines perform well or badly, better privacy laws will be vital to prevent powerful institutions from leveraging technology like LLMs to gain even more power over ordinary people, and to protect the values of privacy and free expression that we have always cherished.


Johanna Silver, she/her/hers, Digital Producer, ACLU

The highly anticipated finale of RuPaul’s Drag Race will feature incredible ensembles, sensational lip sync performances, and of course, the revelation of the season 15 winner. Alongside the competition, the finale will also highlight a joint effort with the ACLU to help defend constitutional rights and creative expression in the LGBTQ community. With our nationwide network of affiliates, supporters, and advocates, we’re showing up to fight back against hate, and holding politicians accountable for their anti-trans agenda — and you can join us.

ACLU staff represented the organization on the red carpet during the April 1 taping of the finale, which featured drag queens from past and present seasons. During tonight’s airing of the finale, the ACLU’s Drag Defense Fund will also get a special shoutout as a way to mobilize against mounting legal attacks on drag performers.

What is the Drag Defense Fund?

In partnership with production company World of Wonder, MTV, and RuPaul’s Drag Race, the ACLU’s Drag Defense Fund will support the ACLU’s work defending LGBTQ rights, including fighting the censorship of drag performers themselves. The ACLU is committed to defending drag as a First Amendment right and an important form of artistic expression. The fund will also fuel the ACLU’s expansive advocacy work in support of the LGBTQ community, which includes everything from courtroom battles, to guidance on state-level legislation, to challenging LGBTQ censorship in classrooms.

What’s at Stake in the Drag Community?

Drag performance has always been a hallmark of the queer community, allowing members to express themselves openly and joyously. Due in large part to the explosive success of RuPaul’s Drag Race over the years, drag performers have received widespread fanfare and support. But in the past few months, dozens of bills have also been introduced in states around the country that restrict how and where drag queens can perform.

ACLU attorney Chase Strangio and RuPaul’s Drag Race judges Ross Mathews and Michelle Visage being interviewed on the red carpet for the season 15 finale. (Credit: Danielle Silber)

These bills attack the fundamental human right to freedom of expression and attempt to remove LGBTQ people from public life — a strategy that ACLU Ambassador for Transgender Justice and former RuPaul’s Drag Race contestant Peppermint recently explained.

“It is sort of a catch all that allows them to attack a word or a notion or an idea, drag, when really what they’re talking about is transgender folks,” she said on a recent episode of our At Liberty podcast. “Trans issues, trans rights, you know, anything having to do with gender and sexuality, they’re trying to dismantle that — dismantle any of the progress that we’ve had on that, and turn us into the boogeyman.”

One Part of a Larger Battle for LGBTQ Rights

These legislative attacks against drag performers, coupled with the anti-trans legislation across the country, underscore an unsettling trend. These bills are bald attempts to prevent people from expressing themselves authentically and restrict their right to bodily autonomy and self-determination. The majority of this legislation specifically targets trans youth and aims to obstruct their ability to find support and access health care. These attacks not only undermine drag queens and trans youth, but the broader LGBTQ community, and the very foundations of gender justice.

How We’re Fighting Back Against Anti-LGBTQ Attacks

Many Drag Race alumni — and RuPaul — have come forward to promote the fund, speak out against the targeted legal attacks, and uplift drag queens as champions for equal rights.

“Drag has influenced everything about my queer identity — as a performer, as a trans individual, drag has just blurred all of the things I love and am into getting to be this kind of walking art installation that is also a protest,” drag queen and season 15 finalist Sasha Colby said of the art form.

The ACLU will always show up to defend our LGBTQ community, especially amid these ongoing legislative attacks. Along with mobilizing our Drag Defense Fund, we will continue to push back against censorship, hold legislators accountable for their anti-LGBTQ agendas, and take legal action to protect our communities. With your help, we’ll be stronger than ever. Join us as we work to preserve our right to freedom of expression, inclusive communities, safe schools, and the right to be who we are without fear.
