Emotional AI is no substitute for empathy


In 2023, emotional AI – technology that can sense and communicate human emotions – will become one of the dominant applications of machine learning. For example, Hume AI, founded by ex-Google researcher Alan Cowen, develops tools to measure emotions from verbal, facial, and vocal expressions. Swedish company Smart Eyes recently acquired Affectiva, the MIT Media Lab spinoff that developed the SoundNet neural network, an algorithm that classifies emotions such as anger from audio samples in under 1.2 seconds. Video platform Zoom is also introducing Zoom IQ, a feature that will soon provide users with real-time analysis of emotions and engagement during virtual meetings.

By 2023, tech companies will release advanced chatbots that closely mimic human emotions, creating more empathetic interactions with users in banking, education, and healthcare. Microsoft’s chatbot Xiaoice is already a hit in China, with average users reportedly having “conversations with her” more than 60 times a month. It has also passed the Turing test, with users failing to recognize it as a bot for 10 minutes of conversation. Analysis by the consultancy Juniper Research predicts that chatbot interactions in healthcare will grow by nearly 167 percent from 2018 levels, reaching 2.8 billion interactions a year by 2023.

By 2023, emotional AI will also become commonplace in schools. In Hong Kong, some secondary schools already use an artificial intelligence program, developed by Find Solutions AI, that measures the subtle movements of muscles in students’ faces and detects a range of negative and positive emotions. Teachers use the system to track students’ emotional changes, as well as their motivation and attention, allowing them to intervene early if a student loses interest.

The problem is that most emotional AI is based on flawed science. Emotional AI algorithms, even when trained on large and diverse data sets, reduce facial and tonal expressions to a single emotion label without considering the social and cultural context of the person and the situation. For example, while algorithms can recognize and report that a person is crying, it is not always possible to accurately infer the reason and meaning behind the tears. Similarly, a contorted face does not necessarily indicate an angry person; an expression is often a means to an end rather than a reflection of an inner state. Why? We all adapt our emotional displays to our social and cultural norms, so our expressions are not always true reflections of our inner states. Often people do “emotion work” to hide their true emotions, and how they express their emotions may be a learned response rather than a spontaneous one. For example, women are more likely than men to modulate their emotions, especially those with negative valence such as anger, because they are expected to.
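To make the critique concrete, here is a deliberately minimal sketch of the kind of decontextualized inference described above. The feature names, thresholds, and label set are hypothetical and not drawn from any vendor’s product; real systems use trained neural networks rather than hand-written rules, but they share the same structural limitation: the output is a single emotion label, and the social situation never enters the model.

```python
# Toy illustration (not any real product): an emotion classifier that maps an
# observed expression to one label, with no notion of who the person is,
# where they are, or why they are making that face.
from dataclasses import dataclass


@dataclass
class FaceObservation:
    brow_lowered: float    # 0.0-1.0 intensity of a furrowed brow
    lip_corners_up: float  # 0.0-1.0 intensity of a smile
    tears_present: bool


def classify_emotion(obs: FaceObservation) -> str:
    """Reduce an observed expression to a single emotion label."""
    if obs.tears_present:
        return "sadness"    # but tears can also signal joy, relief, or pain
    if obs.brow_lowered > 0.6:
        return "anger"      # but a furrowed brow can simply mean concentration
    if obs.lip_corners_up > 0.6:
        return "happiness"  # but a smile can be polite, ironic, or forced
    return "neutral"


# The same expression always receives the same label, regardless of context.
print(classify_emotion(FaceObservation(brow_lowered=0.8, lip_corners_up=0.1, tears_present=False)))
```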

Therefore, AI technologies that make inferences about emotional states will exacerbate gender and racial inequalities in our society. For example, a 2019 UNESCO report showed the harmful impact of gendering AI technologies, with “feminine” voice assistant systems designed according to stereotypes of emotional passivity and servility.

Facial recognition AI can also perpetuate racial disparities. An analysis of 400 NBA games using two popular emotion-recognition programs, Face++ and Microsoft’s Face API, showed that black players were on average assigned more negative emotion scores, even when they were smiling. These results confirm other research showing that black men have to project more positive emotions in the workplace because they are stereotyped as aggressive and threatening.

Emotional AI technologies will become more widespread in 2023, but if left unchallenged and unexamined, they will entrench systemic racial and gender biases, replicate and reinforce the inequalities of the world, and further disadvantage those who are already marginalized.
