Hina Naveed (she/her), Aryeh Neier Fellow, ACLU Human Rights Program and Women’s Rights Project

This article was originally published by The Progressive Magazine.

When I graduated from nursing school five years ago, I worked for an agency in New York City’s foster care system. I believed I was helping families. What I saw instead was not a system working in children’s best interests, but one that was quick to separate children from their parents because they were living in poverty.

I’ve since gone to law school and now work as a human rights advocate. For the past year, as a fellow with Human Rights Watch and the ACLU, I have been investigating the system I once worked for — not just in New York, but across the country.

We found that child welfare systems punish families experiencing poverty by removing children and charging parents with “neglect.” Our analysis of nationwide child welfare data showed alarming racial and ethnic disparities. Black and Indigenous families are more likely to be investigated than white families. Single mothers of color are most frequently held responsible for neglect. Parents are often not told their rights or connected with an attorney early enough in the process.

Every year, more than three million children are subjected to a child welfare investigation. The process can be highly stressful and traumatic for families. Child welfare authorities may search the family’s home, interrogate neighbors, and strip-search and question children, sometimes based on anonymous or unfounded accusations.

Most referrals to the system do not involve abuse. The overwhelming majority of cases, nearly 75 percent in 2019, include allegations of state-defined neglect, which is inextricably linked to poverty. Parents struggling with limited resources, unable to pay rent or secure stable housing, or working long hours to make ends meet, are judged unfit and neglectful.

As a registered nurse in New York, I was required to report any concerns about child abuse or neglect to the state child protective services hotline, or risk losing my license and facing harsh criminal penalties. Every state has a similar requirement.

But broad and vague state definitions of abuse and neglect mean that teachers, social workers, and healthcare providers are required to report families out of an abundance of caution, even if our professional training and clinical judgment dictate otherwise.

Millions of reports are made every year, overwhelming an already burdened child welfare system. Most do not warrant an investigation.

We found a clear correlation between child welfare investigations and poverty, as counties with more families living in poverty have higher rates of investigation. Black families, however, experience a high rate of maltreatment investigations even when living in counties where the poverty rate is low.

Black children make up just 14 percent of the U.S. child population but 24 percent of child abuse or neglect reports and 21 percent of children entering the foster system. Indigenous children are also disproportionately affected. They enter the foster system at nearly double the nationwide rate.
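
One way to read those figures is as a disproportionality ratio: a group’s share of reports or foster-system entries divided by its share of the child population, with anything above 1.0 indicating overrepresentation. The short Python sketch below simply works through that arithmetic using the percentages cited above; the code and function name are our own illustration, not part of the underlying analysis.

```python
# Disproportionality ratio: a group's share of an outcome divided by its
# share of the child population. A value of 1.0 means proportional
# representation; values above 1.0 indicate overrepresentation.

def disproportionality(outcome_share: float, population_share: float) -> float:
    return outcome_share / population_share

# Percentages cited above for Black children nationwide.
population_share = 0.14   # share of the U.S. child population
report_share = 0.24       # share of child abuse or neglect reports
foster_share = 0.21       # share of children entering the foster system

print(f"reports:           {disproportionality(report_share, population_share):.2f}x")   # ~1.71x
print(f"foster-care entry: {disproportionality(foster_share, population_share):.2f}x")   # ~1.50x
```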

I’ve talked to parents who only learned about a child maltreatment allegation against them when a caseworker showed up on their doorstep. Often, the caseworker assigned to reunify a family is also responsible for making the case to terminate parental rights and place a child for adoption. These roles are inherently at odds. Caseworkers tasked with documenting parents’ struggles and shortcomings to build a case against them are, at the same time, expected to somehow support family reunification.

Caseworkers have significant influence in determining whether maltreatment occurred. If a caseworker “substantiates” an allegation, parents or caregivers are listed on a state central maltreatment registry, where they often remain for years, affecting job opportunities and perpetuating the cycle of poverty.

Of course, there are devastating cases where children face serious abuse and intervention is needed. The problem, however, is that the system we have now is not designed to effectively keep children safe. Instead, the system puts parents, especially single mothers of color, in the impossible situation of having to overcome poverty in order to stop being monitored and to reunite with their children, without providing them the resources necessary to do so.

The entire system needs an overhaul. Lawmakers should address the extreme economic hardship and systemic racism at the heart of many child welfare cases. Federal, state, and local governments should invest in community resources and support that address families’ needs instead of punishing and surveilling them.

December 8, 2022

Featured image: An intake call screening center for the Allegheny County Children and Youth Services office.


Marissa Gerchick (she/her/hers), Data Scientist and Algorithmic Justice Specialist, ACLU

Automated decision-making systems are increasingly being used to make important decisions in key areas of people’s lives. The vast majority of large employers in the U.S. — including 99 percent of Fortune 500 companies — use software to evaluate job applicants, and once hired, similar software is increasingly used to continuously surveil employees. Ninety percent of landlords use screening services, which are often powered by automated systems, to evaluate potential tenants.

These automated tools are a prevalent and entrenched part of our modern economic and social systems, and there is clear evidence that they can create and enable pervasive harm. For example, the automated tools frequently used by landlords to screen potential tenants are notoriously riddled with errors, locking consumers — disproportionately Black and Latinx renters — out of accessing housing. In the context of employment, the ACLU and other civil rights and technology organizations have repeatedly raised concerns about hiring discrimination facilitated or exacerbated by these technologies. There are many examples of algorithmically driven or amplified racial discrimination, gender discrimination, and disability discrimination in this area. In financial services, discriminatory automated systems are regularly used in high-stakes areas, impacting people’s ability to access credit and mortgages.

It is clear that relying on the voluntary efforts of companies that build and deploy automated decision-making systems has not been, and will not be, sufficient to address these harms. That’s why, last month, the ACLU responded to a request for comment from the Federal Trade Commission (FTC) on harms stemming from commercial surveillance and lax data security practices. We are calling on the commission to adopt binding rules to identify and prevent algorithmic discrimination.

These problems are multi-faceted. Automated decision-making systems are often built and deployed in ecosystems and institutions already marked by entrenched discrimination — including in health care, the criminal legal system, and the family regulation system. Built and evaluated by humans, automated decision-making tools are often developed using data that reflects systemic discrimination and abusive data collection practices. Attempting to predict outcomes based on this data can create feedback loops that further systemic discrimination. These compounding issues can rear their heads throughout an automated decision-making system’s design and deployment. Moreover, these systems are often operated and deployed in such a way that impacted individuals and communities might not even know that they are interacting with these systems, let alone how they work — yet could still be materially affected by the system’s decision-making process and errors.
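
To make that feedback-loop dynamic concrete, here is a deliberately simplified, hypothetical simulation; the group labels, rates, and allocation rule are assumptions for illustration, not a model of any deployed system. Two groups have the same underlying rate of the predicted outcome, but one starts with more recorded cases, and because new records are generated only where the system directs attention, the initial disparity carries forward round after round.

```python
import random

random.seed(0)

# Toy feedback-loop simulation (hypothetical, for illustration only).
# Both groups have the SAME true rate of the predicted outcome, but
# group A starts with more recorded cases because it was scrutinized
# more heavily in the past. Each round, attention is allocated in
# proportion to recorded history, and new records are generated only
# where the system looked -- so the initial gap never self-corrects.

TRUE_RATE = 0.10                # identical underlying rate for both groups
records = {"A": 30, "B": 10}    # unequal historical record counts
TOTAL_SCRUTINY = 100            # e.g., investigations or audits per round

for round_num in range(1, 6):
    total = sum(records.values())
    shares = {g: n / total for g, n in records.items()}    # the model's "risk scores"
    for group, share in shares.items():
        scrutiny = int(TOTAL_SCRUTINY * share)              # attention follows the records
        new_hits = sum(random.random() < TRUE_RATE for _ in range(scrutiny))
        records[group] += new_hits                          # today's output becomes tomorrow's data
    print(f"round {round_num}: {records}")
```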

The commission should establish tailored requirements for companies to undergo independent external audits of their automated decision-making systems. As highlighted in our comment, the commission can adopt these rules to prevent discrimination without contravening the First Amendment or Section 230 of the Communications Decency Act. The commission should consider requiring companies to adopt a comprehensive auditing framework to govern the use of automated decision-making systems and set clear standards for that framework. That’s because the harms of automated decision-making systems are best assessed as part of holistic efforts to understand the selection, design, and deployment of such tools.

These efforts should include meaningful engagement with people directly impacted by the deployment of automated decision-making systems. Companies should be required to undergo evaluations of the potential risks and harms of their algorithmic systems both before they are built and continuously if they are deployed. When these evaluations demonstrate the potential for algorithmic bias or other harms, companies can and should decommission or terminate the tools. To promote objectivity in evaluating algorithmic systems, these assessments should be carried out by independent external auditors who are provided with meaningful access to internal company systems under appropriate privacy controls.
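
As one concrete illustration of what such an evaluation might include, an auditor of a hiring screen could compare the rates at which the tool advances applicants from different demographic groups, in the spirit of the “four-fifths” guideline long used in employment-discrimination analysis. The sketch below uses made-up data and shows one metric among many, not a complete audit methodology.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, advanced) pairs, where advanced is a bool."""
    counts = defaultdict(lambda: [0, 0])            # group -> [advanced, total]
    for group, advanced in decisions:
        counts[group][0] += int(advanced)
        counts[group][1] += 1
    return {g: adv / total for g, (adv, total) in counts.items()}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Entirely hypothetical screening outcomes; a real audit would also examine
# error rates, feature provenance, feedback effects, and much more.
decisions = (
    [("group_1", True)] * 60 + [("group_1", False)] * 40
    + [("group_2", True)] * 35 + [("group_2", False)] * 65
)

rates = selection_rates(decisions)
for group, ratio in impact_ratios(rates).items():
    note = "  <- below 0.8, warrants closer review" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f}{note}")
```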

In developing these new rules, the commission should also collaborate with external researchers, advocacy organizations, and other government agencies. For example, the ACLU has previously called on the commission to work with other civil rights agencies to address technology-driven housing and employment discrimination. New requirements established by the commission can and should coexist with standards and guidance being developed elsewhere, such as the National Institute of Standards and Technology’s (NIST) Artificial Intelligence (AI) Risk Management Framework. The ACLU also recently provided recommendations to NIST to ensure that the framework centers impacted communities in efforts to address the harms of AI systems.

In a digital age, protecting our civil rights and civil liberties demands that we address the harms of algorithmic systems. Strong protections that address algorithmic discrimination have the potential to benefit all people and can make AI systems work better for everyone. The commission should act now to provide much-needed protection from the harms of automated decision-making systems.

December 6, 2022

Featured image: The Federal Trade Commission building in Washington.

