Are you a high school or college student in Broward looking for a way to make an impact in the upcoming midterm election? Are you passionate about defending voting rights and educating voters?

Join us to get out the vote and talk to our neighbors about the election in Broward County. We will help voters make a plan to vote and hand out voting resources and information.

Snacks, water, canvassing gear, and training provided.

Event Date

Saturday, October 29, 2022 - 2:30pm to 4:30pm


Davin Rosborough, Senior Staff Attorney, ACLU Voting Rights Project

Tish Gotell Faulks, She/Her/Hers, Legal Director, ACLU of Alabama

The Supreme Court this term will hear Merrill v. Milligan, a case concerning Alabama’s congressional districts that may have serious implications for the power of Section 2 of the Voting Rights Act. The Supreme Court should reject Alabama’s dangerous invitation to remake the Voting Rights Act into a law that entrenches — rather than fights against — racial discrimination in voting.


Section 2 of the Voting Rights Act bans racial discrimination in voting nationwide. When it comes to drawing district lines, the law requires that line drawers not unfairly dilute the power of voters of color. Congress intended this provision to protect against even subtle or hidden forms of racial discrimination in voting, and it reinforced this vision in 1982, when, in response to a restrictive Supreme Court decision, it amended Section 2 to make clear that proof of intentional discrimination is not required to prevail.


Voters, mostly Black, line up to vote in a queue that snakes along the perimeter outside the polling station building.

Cory Young/Tulsa World via AP, File

Last year, Alabama adopted new congressional maps as the result of its once-in-a-decade redistricting process. These new maps double down on Alabama’s historical practice of limiting Black voting power, and disserve not only Black Alabamians and communities of color, but all Alabamians who care about fair representation.

Alabama’s congressional districts continue to harm Black Alabamians and other communities of color in several ways. The new map creates only one district out of seven in which Black Alabamians can elect their preferred candidates, even though they make up more than 27 percent of Alabama’s voting-age population.

It does so first by packing more Black Alabamians than necessary into Congressional District 7. It then cracks the rest of the state’s strongest community of interest — the Black Belt, including Montgomery, a longstanding majority-Black area — across Congressional Districts 1, 2, and 3.


That’s why, earlier this year and after a lengthy trial, even a panel of three judges appointed by presidents of different parties ruled in a detailed 200-page opinion that the state must redraw the map. Alabama’s plan reinforced discrimination in voting, employment, health care, and other areas, making it more difficult for Black people to turn out, vote, and sponsor candidates.

It is essential that the Supreme Court uphold and affirm the purpose of the Voting Rights Act by requiring Alabama to redraw these discriminatory maps, and that it reject the state’s arguments to ignore deep-seated racial discrimination in its political processes. The VRA is as important to democracy today as it was nearly 60 years ago. Alabama’s increasingly diverse population calls for two districts in which the state’s Black residents can elect candidates of their choice to Congress: applied to seven seats, their 27 percent share of the voting-age population corresponds to roughly two districts (0.27 × 7 ≈ 1.9), not the single district the current map allows.

Alabama has a long and continuous history of discriminating against Black voters. Without a functional Voting Rights Act, our efforts to make and keep the United States a true multiracial democracy will be set back. Black voters deserve to be heard in the electoral process, not packed into one district or diluted across several by a congressional map that attacks their political power.

Date

Tuesday, October 4, 2022 - 10:15am



Crystal Grant, Former Technology Fellow, ACLU Speech, Privacy, and Technology Project

Artificial intelligence (AI) and algorithmic decision-making systems — algorithms that analyze massive amounts of data and make predictions about the future — are increasingly affecting Americans’ daily lives. People are compelled to include buzzwords in their resumes to get past AI-driven hiring software. Algorithms are deciding who will get housing or financial loan opportunities. And biased testing software is forcing students of color and students with disabilities to grapple with increased anxiety that they may be locked out of their exams or flagged for cheating. But there’s another frontier of AI and algorithms that should worry us greatly: the use of these systems in medical care and treatment.

The use of AI and algorithmic decision-making systems in medicine is increasing, even though current regulation may be insufficient to detect harmful racial biases in these tools. Details about the tools’ development are largely unknown to clinicians and the public — a lack of transparency that threatens to automate and worsen racism in the health care system. Last week, the FDA issued guidance significantly broadening the scope of the tools it plans to regulate. The broadened guidance underscores that more must be done to combat bias and promote equity as the number and use of AI and algorithmic tools grow.


Bias in Medical and Public Health Tools

In 2019, a bombshell study found that a clinical algorithm many hospitals were using to decide which patients needed care showed racial bias — Black patients had to be deemed much sicker than white patients to be recommended for the same care. This happened because the algorithm had been trained on past data on health care spending, which reflects a history in which Black patients had less to spend on their health care than white patients, due to longstanding wealth and income disparities. While this algorithm’s bias was eventually detected and corrected, the incident raises the question of how many more clinical and medical tools may be similarly discriminatory.
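To make that failure mode concrete, here is a minimal sketch in Python, using synthetic data and hypothetical groups rather than the study's actual algorithm. Two groups have identical illness, but one historically spent about 40 percent less on care; a model that scores "risk" by predicted spending then flags that group for extra care far less often.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two hypothetical groups with identical underlying illness burden.
group = rng.integers(0, 2, size=n)
illness = rng.gamma(shape=2.0, scale=1.0, size=n)

# Historical spending reflects illness *and* unequal access to care:
# group 1 spends about 40% less at the same level of illness.
access = np.where(group == 1, 0.6, 1.0)
spending = illness * access + rng.normal(0.0, 0.1, size=n)

# A model trained to predict spending inherits the access gap. Using
# spending as the "risk" score, flag the top 10% for extra care:
threshold = np.quantile(spending, 0.90)
flagged = spending >= threshold

for g in (0, 1):
    in_group = group == g
    rate = flagged[in_group].mean()
    sickness = illness[flagged & in_group].mean()
    print(f"group {g}: flag rate {rate:.1%}, mean illness when flagged {sickness:.2f}")
```

The bias lives in the label, not the model class: any sufficiently accurate predictor of spending would reproduce the same disparity, which is why choosing a target that measures health need directly matters more than swapping algorithms.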

Another algorithm, created to determine how many hours of aid Arkansas residents with disabilities would receive each week, was criticized after making extreme cuts to in-home care. Some residents attributed severe disruptions to their lives, and even hospitalization, to the sudden cuts. A resulting lawsuit revealed that several errors in the algorithm — errors in how it characterized the medical needs of people with certain disabilities — were directly to blame for the inappropriate cuts. Despite this outcry, the group that developed the flawed algorithm still creates tools used in health care settings in nearly half of U.S. states, as well as internationally.
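The kinds of errors at issue can be mundane and hard to spot from outside. Below is a deliberately simplified, hypothetical points-to-hours allocator; the field names, weights, and formula are invented for illustration and do not describe the actual Arkansas system. A single need category that fails to get mapped during intake silently slashes the weekly allocation.

```python
# Hypothetical, simplified points-to-hours allocator. All field names,
# weights, and the formula are invented; this is not the Arkansas system.
WEIGHTS = {"mobility": 4, "feeding": 5, "bathing": 3, "medication": 2}

def weekly_hours(needs: dict) -> int:
    """Map assessed need levels (0-3 per category) to weekly care hours."""
    points = sum(WEIGHTS[field] * needs.get(field, 0) for field in WEIGHTS)
    return min(56, 8 + 2 * points)  # baseline 8 hours, capped at 56

before = {"mobility": 3, "feeding": 3, "bathing": 2, "medication": 1}
# An intake bug fails to map one condition's needs to any field, so
# "feeding" is recorded as 0 and the allocation quietly drops.
after = {**before, "feeding": 0}
print(weekly_hours(before), "->", weekly_hours(after))  # 56 -> 48
```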

One recent study found that an AI tool trained on medical images, like x-rays and CT scans, had unexpectedly learned to discern patients’ self-reported race. It learned to do this even when it was trained only with the goal of helping clinicians diagnose patient images. This technology’s ability to tell patients’ race — even when their doctor cannot — could be abused in the future, or unintentionally direct worse care to communities of color without detection or intervention.
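A common way to audit for this is a probe: test whether the protected attribute can be decoded from the model's learned features. The sketch below uses synthetic stand-in embeddings with a weak planted group signal, not real imaging features; probe accuracy well above chance indicates the features carry information the downstream model could act on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, d = 2_000, 64

# Stand-ins for embeddings extracted from a trained diagnostic model;
# synthetic here, with a weak group signal planted in one dimension.
group = rng.integers(0, 2, size=n)
embeddings = rng.normal(size=(n, d))
embeddings[:, 0] += 0.8 * group

# Fit a linear "probe" to predict the protected attribute from the
# embeddings; accuracy well above 0.5 means the signal is present.
probe = LogisticRegression(max_iter=1_000)
accuracy = cross_val_score(probe, embeddings, group, cv=5)
print(f"probe accuracy: {accuracy.mean():.2f}")  # roughly 0.65; chance is 0.50
```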


Tools Used in Health Care Can Escape Regulation

Some algorithms used in the clinical space are severely under-regulated in the U.S. The U.S. Department of Health and Human Services (HHS) and its subagency, the Food and Drug Administration (FDA), are tasked with regulating medical devices, which range from tongue depressors to pacemakers and, now, medical AI systems. While some of these medical devices (including AI) and tools that aid physicians in treatment and diagnosis are regulated, other algorithmic decision-making tools used in clinical, administrative, and public health settings — such as those that predict risk of mortality, likelihood of readmission, and in-home care needs — are not required to be reviewed or regulated by the FDA or any other regulatory body.

This lack of oversight can lead to biased algorithms being used widely by hospitals and state public health systems, contributing to increased discrimination against Black and Brown patients, people with disabilities, and other marginalized communities. In some cases, this failure to regulate can lead to wasted money and lost lives. One such AI tool, developed to detect sepsis early, is used by more than 170 hospitals and health systems. But a recent study revealed the tool failed to predict this life-threatening illness in 67 percent of patients who developed it, and generated false sepsis alerts on thousands of patients who did not. In an acknowledgment that such failures are the result of under-regulation, the FDA’s new guidelines point to these tools as examples of products it will now regulate as medical devices.
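Some back-of-the-envelope arithmetic shows how a tool can miss most cases and still flood clinicians with alerts. In the sketch below, only the 67 percent miss rate comes from the study; the hospital size, sepsis prevalence, and false-alert rate are illustrative assumptions.

```python
# Alert arithmetic for a hypothetical hospital. Only the 67% miss rate
# is taken from the study; every other number is an assumption.
patients = 10_000
prevalence = 0.05        # assumed fraction of patients who develop sepsis
miss_rate = 0.67         # fraction of sepsis cases the tool fails to flag
false_alert_rate = 0.18  # assumed alert rate among non-sepsis patients

sepsis_cases = int(patients * prevalence)                         # 500
missed = int(sepsis_cases * miss_rate)                            # 335
caught = sepsis_cases - missed                                    # 165
false_alerts = int((patients - sepsis_cases) * false_alert_rate)  # 1,710

precision = caught / (caught + false_alerts)
print(f"missed {missed} of {sepsis_cases} cases; {false_alerts} false alerts; "
      f"{precision:.0%} of alerts are real")
```

Under these assumptions, clinicians would see roughly ten false alerts for every true one, which is how a tool can simultaneously miss most sepsis cases and cause alarm fatigue.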

The FDA’s approach to regulating drugs, which involves publicly shared data scrutinized by review panels for adverse effects and events, contrasts with its approach to regulating medical AI and algorithmic tools. Regulating medical AI presents a novel issue and will require considerations that differ from those applicable to the hardware devices the FDA is used to regulating. These devices include pulse oximeters, thermal thermometers, and scalp electrodes — each of which has been found to function less accurately for some racial or ethnic subgroups. News of these biases only underscores how vital it is to properly regulate these tools and ensure they don’t perpetuate bias against vulnerable racial and ethnic groups.


A Lack of Transparency and Biased Data

While the FDA suggests that device manufacturers test their devices for racial and ethnic biases before marketing them to the general public, this step is not required. Perhaps more important than assessments after a device is developed is transparency during its development. A STAT+ News study found that many AI tools approved or cleared by the FDA do not include information about the diversity of the data on which the AI was trained, and that the number of these tools being cleared is increasing rapidly. Another study found that AI tools “consistently and selectively under-diagnosed under-served patient populations,” with under-diagnosis rates higher for marginalized communities that disproportionately lack access to medical care. This is unacceptable when these tools may make decisions with life-or-death consequences.
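The subgroup check those studies performed is straightforward to express. The sketch below computes an under-diagnosis (false-negative) rate per demographic group from a labeled audit table; the column names and toy data are hypothetical.

```python
import pandas as pd

# Hypothetical audit table: one row per patient, with the confirmed
# diagnosis and the model's call. Column names are invented.
audit = pd.DataFrame({
    "group":         ["A", "A", "A", "B", "B", "B", "B", "B"],
    "has_condition": [1,   1,   0,   1,   1,   1,   1,   0],
    "model_flagged": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Under-diagnosis rate = missed cases / actual cases, per group.
cases = audit[audit["has_condition"] == 1]
missed_rate = cases["model_flagged"].eq(0).groupby(cases["group"]).mean()
print(missed_rate)  # group A: 0.00, group B: 0.50; a disparity to investigate
```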


The Path Forward

Equitable treatment by the health care system is a civil rights issue. The COVID-19 pandemic has laid bare the many ways in which existing societal inequities produce health care inequities — a complex reality that humans can attempt to comprehend, but that is difficult to accurately reflect in an algorithm. The promise of AI in medicine was that it could help remove bias from a deeply biased institution and improve health care outcomes; instead, it threatens to automate this bias.

Policy changes and collaboration among key stakeholders, including state and federal regulators as well as medical, public health, and clinical advocacy groups and organizations, are needed to address these gaps. To start, as detailed in a new ACLU white paper:

  • Public reporting of demographic information should be required.
  • The FDA should require an impact assessment of any differences in device performance by racial or ethnic subgroup as part of the clearance or approval process (a sketch of such an assessment follows this list).
  • Device labels should reflect the results of this impact assessment.
  • The FTC should collaborate with HHS and other federal bodies to establish best practices that device manufacturers not under FDA regulation should follow to lessen the risk of racial or ethnic bias in their tools.
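As a minimal sketch of what the subgroup impact assessment in the second recommendation could compute, the function below compares a model's discrimination (AUC) across demographic subgroups against a reference subgroup. The metric choice, reference-group framing, and function name are illustrative assumptions, not regulatory requirements, and a real assessment would also examine calibration and error rates.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_impact_report(y_true, y_score, groups, reference):
    """Per-subgroup AUC and gap versus a reference subgroup.

    Hypothetical sketch of a pre-clearance impact assessment; assumes a
    held-out validation set with outcome labels and self-reported
    demographics for every patient.
    """
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    ref = groups == reference
    ref_auc = roc_auc_score(y_true[ref], y_score[ref])
    report = {}
    for g in np.unique(groups):
        sel = groups == g
        auc = roc_auc_score(y_true[sel], y_score[sel])
        report[g] = {"auc": round(auc, 3),
                     "gap_vs_reference": round(auc - ref_auc, 3)}
    return report
```

Producing such a report requires demographic labels in the validation data, which is exactly why the public demographic reporting called for in the first recommendation matters.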

Rather than learning of racial and ethnic bias embedded in clinical and medical algorithms and devices from bombshell publications revealing what amounts to medical and clinical malpractice, HHS, the FDA, and other stakeholders must work to ensure that medical racism becomes a relic of the past rather than a certainty of the future.

Date

Monday, October 3, 2022 - 4:15pm


