March 16, 2020

You Feel Us? — Let's Be Wary of Emotion Recognition AI

With agencies and companies increasingly adopting AI into their workflows, one possible use is having it “read” people and how they’re feeling. We look at how it does this, and, more importantly, why this is a markedly bad idea for now.

How AI “Reads” Us

When AI is used for emotion recognition, it reads many of the following metrics in real time (a rough sketch of what that loop can look like follows the list):

  • facial expressions
  • voice patterns
  • eye movements
  • biometrics (heart rate, etc.)
  • brain activity
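
To make the first item concrete, here’s a minimal sketch of what a real-time facial-expression loop can look like: grab a frame, find a face, and hand the crop to a classifier. It uses OpenCV for capture and face detection, but `classify_emotion` is a hypothetical placeholder for whatever proprietary model a vendor would plug in, and the seven-label emotion set is an assumption made for illustration.

```python
# A rough sketch of a real-time facial-expression loop (not any vendor's actual code).
# Assumes: pip install opencv-python
import cv2

EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

def classify_emotion(face_crop):
    """Hypothetical placeholder: a vendor's trained model would map a face crop
    to a probability per emotion. Here we just return a flat distribution."""
    return {label: 1.0 / len(EMOTIONS) for label in EMOTIONS}

def main():
    # Haar-cascade face detector that ships with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(0)  # default webcam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
                scores = classify_emotion(gray[y:y + h, x:x + w])
                top = max(scores, key=scores.get)
                print(f"face at ({x}, {y}): {top} ({scores[top]:.2f})")
    except KeyboardInterrupt:
        pass  # Ctrl+C to stop
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```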

No two emotion recognition AIs work the same way, and how they’re used differs from company to company. Data and analytics company Nielsen combines the analyses of those different metrics to reach an accuracy level of 77%, and when its output is checked against respondents’ self-reports, the company claims accuracy rises to 84%.
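
Nielsen doesn’t publish how it combines those signals, so the snippet below only illustrates the general idea of late fusion: score each modality separately, then blend the scores with weights. Every score and weight here is invented for the example, and the closing comment notes where figures like 77% or 84% would come from.

```python
# Illustrative late fusion of per-modality emotion scores.
# All scores and weights below are invented for the example;
# they are not Nielsen's (or anyone's) actual parameters.

# Per-modality confidence that the viewer is feeling "joy".
modality_scores = {
    "facial_expression": 0.72,
    "voice": 0.55,
    "eye_tracking": 0.60,
    "heart_rate": 0.40,
}

# Assumed relative trust in each signal (sums to 1.0).
weights = {
    "facial_expression": 0.4,
    "voice": 0.2,
    "eye_tracking": 0.2,
    "heart_rate": 0.2,
}

fused = sum(weights[m] * score for m, score in modality_scores.items())
print(f"fused joy score: {fused:.2f}")  # 0.60

# Accuracy figures like 77% or 84% come from comparing fused predictions
# like this one against a ground truth, e.g. respondents' self-reports.
```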

Some companies, like Affectiva, measure a fixed set of emotions such as anger, contempt, disgust, fear, joy, sadness, and surprise. The company also says it gathers its data in varied contexts that include “challenging conditions” such as changes in lighting and background noise, as well as variance in ethnicity, age, and gender.

How This Can Be Used

Emotion recognition has numerous potential applications in situations where extra “eyes” are needed:

  • Employment: assessing candidates and gauging how “engaged” employees are.
  • Education: similarly monitoring student engagement.
  • Product Development: analyzing reactions to products for clues about how to improve them.
  • Customer Satisfaction: reading customers and adjusting customer service based on how they’re feeling.
  • Automotive: keeping a ride safe by monitoring the driver and acting when they’re distracted or incapacitated.
  • Health Care: assessing patients and prescribing solutions.
  • VR and Gaming: increasing immersion or enriching experiences based on how players or people wearing a VR headset feel.

The Pitfalls

There are a few issues with emotion recognition AI that could have consequences if they aren’t addressed before the technology is scaled up:

  • Bias: the technology is far from perfect and can be biased both by its training data and by the limits of its detection ability. For one, emotion recognition has already been shown to carry some of the same racial bias issues as other AI systems. Furthermore, expressions of emotion are not uniform across cultures, so the technology will produce less predictable results across diverse groups of people (a toy example of how such a gap surfaces in evaluation follows this list).
  • Shaky Science: at least one study has challenged the idea that we can reliably infer human emotions from facial movements. The science relating how humans actually feel to what registers on the surface is complex and may not yield the clear-cut truths an AI should act on. For instance, we might frown when we’re sad, but sadness is only one of the possible reasons we would do so; the study points out we sometimes even scowl for reasons that aren’t emotional at all.
  • Market Driven: emotion recognition is estimated to be a $20 billion market and is sure to grow. In an article for the MIT Technology Review, Karen Hao explains how training a single AI model can emit as much carbon as five cars over their lifetimes, and how training at that scale is only possible for organizations with significant resources. This makes it harder for academics and grad students to contribute to research, widening the gap between them and industry researchers.
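
To illustrate the bias point from the list above: such gaps are usually surfaced by disaggregating evaluation results by group, since a respectable overall accuracy can hide a much worse number for one population. The records below are entirely made up for the example.

```python
# Toy bias audit: a decent overall accuracy can hide large per-group gaps.
# All records below are invented for illustration.
from collections import defaultdict

# (group, true emotion, predicted emotion)
records = [
    ("group_a", "joy", "joy"), ("group_a", "sadness", "sadness"),
    ("group_a", "anger", "anger"), ("group_a", "joy", "joy"),
    ("group_b", "joy", "anger"), ("group_b", "sadness", "sadness"),
    ("group_b", "anger", "contempt"), ("group_b", "joy", "joy"),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    correct[group] += int(truth == predicted)

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.0%}")  # 75%
for group in sorted(totals):
    print(f"{group} accuracy: {correct[group] / totals[group]:.0%}")  # 100% vs 50%
```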

The Takeaway

The promise of using AI to read us lies in its blunt honesty: it tells us things about how we’re feeling that we won’t readily admit, not unlike a highly attuned and well-trained human who isn’t afraid to call us out. We also understand there are benefits to automating processes that free up resources elsewhere, as anyone who’s used Photoshop’s amazing Content-Aware Fill to edit photos can attest.

But perhaps the most worrying and unpredictable outcome of widespread AI-driven emotion recognition is the behavior it will incentivize. If the technology reaches a saturation point where it’s forced into every application possible (much like the overenthusiasm for the Internet of Things), we’ll have the consequences of that overreach to deal with as well.

Anyone who’s ever read an SEO-laden post from an aggregator understands what happens when rankings are at the whim of algorithms. The issue is that humans who don’t want to be weeded out by an algorithm might be forced to behave in unnatural and dishonest ways (say, forcing smiles) to “game” the system, much like a social media-savvy influencer does.

And that’s just the people being judged by the AI: what about those who make decisions based on its suggestions? We’ve previously written about how outsourcing certain tasks to AI leaves us with nothing but the toughest choices to fret over. There will certainly be many more of those choices once we start to outsource our empathy at scale, too.

 
