Emotion recognition is a hot new area, with numerous companies peddling products that claim to be able to read people’s internal emotional states, and AI researchers looking to improve computers’ ability to do so. This is done through voice analysis, body language analysis, gait analysis, eye tracking, and remote measurement of physiological signs like pulse and breathing rates. Most of all, though, it’s done through analysis of facial expressions.

A new study, however, strongly suggests that these products are built on a bed of intellectual quicksand.

The key question is whether human emotions can be reliably determined from facial expressions. “The topic of facial expressions of emotion — whether they’re universal, whether you can look at someone’s face and read emotion in their face — is a topic of great contention that scientists have been debating for at least 100 years,” Lisa Feldman Barrett, Professor of Psychology at Northeastern University and an expert on emotion, told me. Despite that long history, she said, no one had ever comprehensively assessed the emotion research accumulated over the past century. So, several years ago, the Association for Psychological Science brought together five distinguished scientists from various sides of the debate to conduct “a systematic review of the evidence testing the common view” that emotion can be reliably determined by external facial movements.

The five scientists “represented very different theoretical views,” according to Barrett, who was one of them. “We came to the project with very different expectations of what the data would show, and our job was to see if we could find consensus in what the data shows and how to best interpret it. We were not convinced we could, just because it’s such a contentious topic.” The process, expected to take a few months, ended up taking two years.

Nevertheless, in the end, after reviewing over 1,000 scientific papers in the psychological literature, these experts came to a unanimous conclusion: there is no scientific support for the common assumption “that a person’s emotional state can be readily inferred from his or her facial movements.”

The scientists conclude that there are three specific misunderstandings “about how emotions are expressed and perceived in facial movements.” The link between facial expressions and emotions, they found, is neither reliable (the same emotions are not always expressed in the same way), nor specific (the same facial expressions do not reliably indicate the same emotions), nor generalizable (the effects of different cultures and contexts have not been sufficiently documented).

As Barrett put it to me, “A scowling face may or may not be an expression of anger. Sometimes people scowl in anger, sometimes you might smile, or cry, or just seethe with a neutral expression. Also, people scowl at other times — when they’re confused, when they’re concentrating, when they have gas.”

The scientists conclude:

These research findings do not imply that people move their faces randomly or that [facial expressions] have no psychological meaning. Instead, they reveal that the facial configurations in question are not “fingerprints” or diagnostic displays that reliably and specifically signal particular emotional states regardless of context, person, and culture. It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts.

This paper is significant because an entire industry of automated purported emotion-reading technologies is quickly emerging. As we wrote in our recent paper on “Robot Surveillance,” the market for emotion recognition software is forecast to reach at least $3.8 billion by 2025. Emotion recognition (aka “affect recognition” or “affective computing”) is already being incorporated into products for purposes such as marketing, robotics, driver safety, and (as we recently wrote about) audio “aggression detectors.”

Emotion recognition is based on the same underlying premise as polygraphs, aka “lie detectors”: that physical body movements and conditions can be reliably correlated with a person’s internal mental state. They cannot — and that very much includes facial muscles. What is true of facial muscles, it stands to reason, would also be true of the other proposed channels for detecting emotion, such as body language and gait.

The belief that such mind reading is possible, however, can do real harm. Jurors’ cultural misunderstanding of what a foreign defendant’s facial expressions mean can lead them to sentence him to death rather than prison, for example. Translated into automated systems, that belief could lead to other harms; a “smart” body camera falsely telling a police officer that someone is hostile and full of anger could contribute to an unnecessary shooting.

As Barrett put it to me, “there is no automated emotion recognition. The best algorithms can encounter a face — full frontal, no occlusions, ideal lighting — and those algorithms are very good at detecting facial movements. But they’re not equipped to infer what those facial movements mean.”
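Barrett’s distinction maps directly onto what such systems actually compute. Below is a minimal sketch, assuming Google’s MediaPipe Face Mesh model (our own illustrative choice; the article names no specific library): the output is facial geometry, and nothing in it is inherently an emotion.

```python
# A minimal sketch, assuming the MediaPipe Face Mesh model (an illustrative
# choice, not one named in the article). The model detects facial movements
# as landmark coordinates; no emotion label appears anywhere in its output.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,   # single, well-lit, frontal images
    max_num_faces=1,
    refine_landmarks=True,
)

image = cv2.imread("face.jpg")  # hypothetical input file
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # 478 (x, y, z) points of facial geometry: mouth corners, brow
    # positions, eyelid openness. These are the "facial movements."
    print(f"detected {len(landmarks)} landmark points")
    # Mapping these coordinates to "anger" or "happiness" is a separate
    # inferential step, and precisely the one the reviewed evidence undercuts.
```

Everything a product layers on top of those coordinates is interpretation, not detection.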

Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

July 18, 2019

We are surrounded by surveillance cameras that record us at every turn. But for the most part, while those cameras are watching us, no one is watching what those cameras observe or record because no one will pay for the armies of security guards that would be required for such a time-consuming and monotonous task.

But imagine that all that video were being watched — that millions of security guards were monitoring them all 24/7. Imagine this army is made up of guards who don’t need to be paid, who never get bored, who never sleep, who never miss a detail, and who have total recall for everything they’ve seen. Such an army of watchers could scrutinize every person they see for signs of “suspicious” behavior. With unlimited time and attention, they could also record details about all of the people they see — their clothing, their expressions and emotions, their body language, the people they are with and how they relate to them, and their every activity and motion.

That scenario may seem far-fetched, but it’s a world that may soon be arriving. The guards won’t be human, of course — they’ll be AI agents.

Today we’re publishing a report on a $3.2 billion industry building a technology known as “video analytics,” which is starting to augment surveillance cameras around the world and has the potential to turn them into just that kind of nightmarish army of unblinking watchers.

Powered by cutting-edge, deep learning-based AI, the field is moving so fast that early versions of this technology are already entering our lives. Some of our cars now come equipped with dashboard cameras that can sound alarms when a driver starts to look drowsy. Doorbell cameras today can alert us when a person appears on our doorstep. Cashier-less stores use AI-enabled cameras that monitor customers and automatically charge them when they pick items off the shelf.

In the report, we looked at where this technology has been deployed, and what capabilities companies are claiming they can offer. We also reviewed scores of papers by computer vision scientists and other researchers to see what kinds of capabilities are being envisioned and developed. What we found is that the capabilities that computer scientists are pursuing, if applied to surveillance and marketing, would create a world of frighteningly perceptive and insightful computer watchers monitoring our lives.

Cameras that collect and store video just in case it is needed are being transformed into devices that can actively watch us, often in real time. It is as if a great surveillance machine has been growing up around us, but largely dumb and inert — and is now, in a meaningful sense, “waking up.”

 

[Embedded video: https://www.youtube.com/embed/1dDhqX3txf4]

Computers are getting better and better, for example, at what is called simply “human action recognition.” AI training datasets include thousands of actions that computers are being taught to recognize — things such as putting a hat on, taking glasses off, reaching into a pocket, and drinking beer.
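To give a sense of how accessible this capability already is, here is a hedged sketch (our own illustration; the report endorses no particular tool) that scores a video clip against the 400 action labels of the Kinetics dataset, using a pretrained model from the torchvision library:

```python
# A hedged sketch of off-the-shelf "human action recognition," using
# torchvision's r3d_18 model pretrained on the Kinetics-400 action dataset
# (an illustrative choice, not one named in the report).
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights).eval()

# A clip is a tensor of shape (batch, channels, frames, height, width).
# Random data stands in here for real, preprocessed surveillance frames.
clip = torch.rand(1, 3, 16, 112, 112)

with torch.no_grad():
    scores = model(clip)  # one score per Kinetics-400 action class

labels = weights.meta["categories"]
top5 = scores.topk(5).indices[0]
print([labels[int(i)] for i in top5])  # the model's top action guesses
```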

Researchers are also pushing to create AI technologies that are ever-better at “anomaly detection” (sounding alarms at people who are “unusual,” “abnormal,” “deviant,” or “atypical”), emotion recognition, the perception of our attributes, the understanding of the physical and social contexts of our behaviors, and wide-area tracking of the patterns of our movements.
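It is worth spelling out what “anomaly detection” typically means in practice: a model learns what the statistical majority looks like and flags whatever deviates from it. The sketch below uses scikit-learn’s IsolationForest on made-up movement features (both the library choice and the features are our assumptions, not anything named in the research):

```python
# A minimal sketch of anomaly detection, using scikit-learn's IsolationForest
# on hypothetical movement features. Note what "anomalous" actually means
# here: statistically unlike the majority, with no notion of wrongdoing.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-person features a camera system might extract:
# [walking speed (m/s), time spent stationary (s), path curvature]
crowd = rng.normal(loc=[1.4, 5.0, 0.2], scale=[0.2, 2.0, 0.1], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(crowd)

# Someone who lingers longer than most, perhaps waiting for a friend,
# is flagged by exactly the same arithmetic as any other outlier.
loiterer = np.array([[0.3, 45.0, 0.1]])
print(detector.predict(loiterer))  # -1 means "anomaly"
```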

Think about some of the implications of such techniques, especially when combined with other technologies like face recognition. For example, it’s not hard to imagine some future corrupt mayor saying to an aide, “Here’s a list of enemies of my administration. Have the cameras send us all instances of these people kissing another person, and the IDs of who they’re kissing.” Government and companies could use AI agents to track who is “suspicious” based on such things as clothing, posture, unusual characteristics or behavior, and emotions. People who stand out in some way and attract the attention of such ever-vigilant cameras could find themselves hassled, interrogated, expelled from stores, or worse.

Many or most of these technologies will be somewhere between unreliable and utterly bogus. If experience is any guide, however, that often won’t stop them from being deployed — and from hurting innocent people. And, like so many technologies, the weight of these new surveillance powers will inevitably fall hardest on the shoulders of those who are already disadvantaged: people of color, the poor, and those with unpopular political views.

We are still in the early days of a revolution in computer vision, and we don’t know how AI will progress, but we need to keep in mind that it may end up progressing extremely rapidly. We could, in the not-so-distant future, end up living under armies of computerized watchers with intelligence at or near human levels.

These AI watchers, if unchecked, are likely to proliferate in American life until they number in the billions, representing an extension of corporate and bureaucratic power into the tendrils of our lives, watching over each of us and constantly shaping our behavior. In some cases, they will prove beneficial, but there is also a serious risk that they will chill the freedom of American life, create oppressively extreme enforcement of petty rules, amplify existing power disparities, disproportionately increase the monitoring of disadvantaged groups and political protesters, and open up new forms of abuse.

Policymakers must contend with this technology’s enormous power. They should prohibit its use for mass surveillance, narrow its deployments, and create rules to minimize abuse.

Read the full report here.

By Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project

 

June 13, 2019

This week, Secretary of State Mike Pompeo formally announced the creation of a “Commission on Unalienable Rights.” Its stated purpose, according to a notice published in the Federal Register in May, is to provide “fresh thinking about human rights discourse where such discourse has departed from our nation’s founding principles of natural law and natural rights.”

The Trump administration’s actions and words — threatening International Criminal Court judges and prosecutors, pulling out of the U.N. Human Rights Council and severing relations with its independent experts, cozying up to authoritarian leaders, and advancing xenophobic policies that defy international law — have made it abundantly clear that the administration has zero interest in being a global champion of human rights. This commission isn’t fooling anyone.

We know that references to “natural law and natural rights” are code words used by the religious right and social conservatives to advance anti-LGBTQ and anti-women’s rights agendas. We also know that members of the new commission have troubling anti-LGBTQ and anti-abortion-rights records. And based on the Trump administration’s record, there is good reason to believe the commission is intended to redefine universal human rights to fit the administration’s twisted and troubling worldview, with the first and clearest target being the State Department’s long-standing work to advance the rights of LGBTQ people, women, and other vulnerable populations across the world.

In defending the commission in a recent op-ed in the Wall Street Journal, Secretary Pompeo charged that human rights advocates have created “new categories of rights” that “blur the distinction between unalienable rights and ad hoc rights granted by governments.” He added that the commission will “ground our discussion of human rights in America’s founding principles.”

That’s a load of nonsense. Secretary Pompeo speaks of longstanding international human rights norms as if he’s demonstrated a single iota of respect for them, and as if those norms are incongruent with defending human dignity and democratic values.

The Universal Declaration of Human Rights (UDHR) — which Secretary Pompeo names as a foundational document that will be examined by the commission — is grounded in democratic values of equal rights, justice, and the right to self-determination. It establishes the modern international human rights framework that provides the legal and moral authority to hold governments and other perpetrators accountable for human rights violations — a framework that the Trump administration seems bent on dismantling.

What Secretary Pompeo fails to understand, or perhaps to acknowledge, is that this modern international human rights framework is made up of the very same traditions and values that guided America’s democratic origins. In fact, all too often in our modern history, it is the U.S. — irrespective of the political party in power — that has failed to live up to the UDHR, including its promise of economic justice. Throughout American history, indigenous peoples, enslaved African people, women, and other groups have all been victims of America’s double standard.

When the United States has wavered in its commitment at home and abroad, it is the UDHR in many cases that has provided the framework to hold our country’s leaders accountable. That’s because the full spectrum of rights enshrined in the UDHR is grounded in well-recognized democratic values, traditions, and principles, including the founding principles of our democracy.

The world has now witnessed the human costs of the Trump administration’s atrocious disregard for these basic human rights and democratic values: the inhumanity of family separation and detention, the discriminatory Muslim ban, the upended lives from the repeal of the Deferred Action for Childhood Arrivals (DACA) program, the revival of the racist “War on Drugs,” numerous attempts to roll back advances in LGBTQ equality, trampling on the rights of women, and illegal restrictions on the rights of asylum seekers. Having had it with being named and shamed under the international human rights framework, the administration appears to be trying to find moral footing for President Trump’s discriminatory policies with the announcement of this commission.

Make no mistake: Pompeo’s commission is a dangerous initiative intended to redefine universal human rights and roll back decades of progress in achieving full rights for marginalized and historically oppressed communities. It is likely to use religion as grounds to deny human dignity and equality for all. It will undermine the State Department’s existing, well-respected, and legally mandated Bureau of Democracy, Human Rights, and Labor. And it will be a waste of taxpayer dollars, which would be better spent on implementing U.S. human rights treaty obligations and putting an end to Trump’s era of human misery and assault on our humanity.

We won’t let him get away with it.

Jamil Dakwar, Director, ACLU Human Rights Program
& Sonia Gill, Senior Legislative Counsel, ACLU

July 12, 2019
