Enhancing Emotional Intelligence in Bots with Generative AI: The Journey So Far


We’ve reached a crossroads in the history of artificial intelligence, a point at which technology has become more than theoretical, more than just another resource for enterprise organizations to streamline internal operations. 

The development of sophisticated machine learning models and deep learning techniques over the past 15 years led to significant advances in natural language processing and, eventually, emotionally intelligent chatbot interactions. Artificial intelligence has grown by leaps and bounds, becoming a valuable resource across dozens of fields, but it is only in the last year that we’ve seen the first glimpses of what an AI-driven future really looks like. What does it mean to engage with AI at scale, and how will the average consumer interact with the platforms that are set to reshape the economic and social landscape?

Starting with DALL-E in 2021 and Midjourney in 2022, an increasing number of people have had access to powerful generative tools, and the conversational models that followed developed an uncanny ability to engage with human operators and produce responses on par with human intelligence. GPT-4’s launch in March 2023 marked a major step toward true emotional intelligence from an AI system.

Trends in AI Leading Into 2024

Generative AI use is not new. Generative adversarial networks have been used in a number of apps for years to support technologies like face swapping, deepfakes, and voice synthesis in ways that have been both accessible and concerning to the general public. But in the last 13 months, generative AI has become a flashpoint in the global economy. ChatGPT stole most of the headlines, but several other bots launched in 2023, including Google’s Bard and Gemini, Meta’s LLaMA 2, and Baidu’s Ernie Bot.

The booming popularity of these tools has led to a wave of social and economic concerns. As we get a better idea of how AI-generated text and art are actually created, artists have grown concerned about the rampant use of their work as training data. While OpenAI claims content regurgitation is a “rare bug,” lawsuits are mounting from writers, comedians, visual artists, and many others directly impacted by the use of AI to replicate their personal voices and art.

However, concerns over AI go beyond short-term economic impact, with some prominent minds in the space voicing existential worries about where the technology is headed. A recent Harvard Business Review study found that 79% of senior IT leaders are worried about security risks related to AI, and 73% cited biased outcomes as a potential issue. Businesses are rightly concerned about how the black box of AI will impact their operations. A Microsoft AI team last year released a paper arguing that GPT-4 showed “sparks” of artificial general intelligence, and in the Harvard International Review, Sam Meacham goes further, describing the existential threat of an AGI arms race.

As such, policymakers are finally paying attention, and a wave of new legislation and regulations was introduced in 2023. The EU’s AI Act codified clear risk levels for artificial intelligence implementations and bans unacceptable-risk uses such as cognitive behavioral manipulation, social scoring, and real-time remote biometric identification. The Act also imposes sweeping regulations on other uses, including oversight of critical infrastructure and educational training.

In the US, President Biden signed a broad Executive Order with more than a dozen directives to government agencies on how to research, approach, and consider new regulations around AI. While it does not rise to the level of the legislative regulation seen in Europe, it’s the first step on a long and winding path toward manageable AI regulation.

AI Grows More Emotionally Intelligent

One of the great barriers to broader AI implementation has been emotional intelligence — a machine’s ability not only to understand the literal meaning of human input but also to perceive the emotional cues underpinning those words. In any given sentence, humans embed dozens of emotional signals in vocal patterns: speed, volume, timbre, pitch, and even the length of pauses between words or syllables. One of the reasons for such an explosive arms race among AI developers is that, to truly understand what a human operator is asking, or more importantly, what they need, AI must be emotionally intelligent in a way comparable to another human being.
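
To make those signals concrete, here is a minimal sketch of how a voice-analysis pipeline might measure a few of the prosodic cues named above (pitch, volume, and pauses) from a recorded utterance. It uses the open-source librosa library; the file name and the silence threshold are assumptions made for the example, not details of any particular product.

```python
# Minimal sketch: extracting prosodic cues (pitch, volume, pauses)
# from a speech clip with librosa. Assumes librosa and numpy are
# installed; "utterance.wav" is a hypothetical input file.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=None)

# Pitch contour: per-frame fundamental frequency (NaN in unvoiced frames).
f0, voiced_flag, _ = librosa.pyin(
    y, sr=sr, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)

# Volume: per-frame root-mean-square energy.
rms = librosa.feature.rms(y=y)[0]

# Pauses: fraction of frames below an (assumed) silence threshold.
silence_ratio = float(np.mean(rms < 0.05 * rms.max()))

print(f"mean pitch:    {np.nanmean(f0):.1f} Hz")
print(f"pitch std dev: {np.nanstd(f0):.1f} Hz")
print(f"mean energy:   {rms.mean():.4f}")
print(f"silence ratio: {silence_ratio:.1%}")
```

In practice, features like these feed a downstream emotion classifier rather than being read directly; the point is that the emotional signal lives in measurable acoustic properties, not in the words alone.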

Generative AI is giving us the first mass-market glimpse of this kind of machine emotional intelligence. ChatGPT is the best-known natural language processing engine, used by millions of people every day, and it is so popular in part because asking it a question feels like asking another person. In fact, a recent study in Frontiers in Psychology found that ChatGPT performed better than humans in assessing emotional awareness, as measured by the Levels of Emotional Awareness Scale.

Another study, conducted in late 2023, found that large language models like GPT-4 not only understand emotional stimuli but can become more attuned to a user and provide better responses when emotional language is used in engaging with them. Simply by adding emotional cues to standard prompts, such as “this is very important to me” and “do your best,” researchers saw roughly 8% better performance from their prompts.
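
As a rough illustration of the mechanics, the sketch below appends emotional cues to an otherwise standard prompt before sending it to a chat model. It assumes the OpenAI Python SDK with an API key in the environment; the cue strings, model name, and sample question are illustrative assumptions, not prescriptions from the study.

```python
# Minimal sketch of emotional prompt augmentation.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()

# Cues in the spirit of the study's examples; exact wording is an assumption.
EMOTIONAL_CUES = "This is very important to me. Please do your best."

def emotional_prompt(base_prompt: str) -> str:
    """Append emotional stimuli to an otherwise standard prompt."""
    return f"{base_prompt} {EMOTIONAL_CUES}"

def ask(prompt: str, model: str = "gpt-4") -> str:
    """Send a single-turn chat request and return the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    base = "Explain the difference between empathy and sympathy in two sentences."
    print(ask(emotional_prompt(base)))
```

The study compared base and augmented prompts across many tasks to measure the effect; a single call like this shows the technique, not the benchmark result.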

Emotional intelligence is a complex, multifaceted component of human communication and interaction, and it remains one of the most important factors in both how effective AI is and how comfortable the average user feels with it. Part of the rapid growth in generative AI use can be attributed to the increased emotional intelligence of these bots.

Developing More Intuitive AI Systems

As industry continues to outpace academia (e.g., 32 significant machine learning models were produced by industry in 2022, compared to just three by academia), the market will drive rapid innovation in AI. Many of these innovations will focus on making systems more user-friendly, more accessible, and ultimately more emotionally intelligent.

But the transition to an emotionally intelligent AI future is not necessarily set to be a smooth one. Stanford’s 2023 AI Index reports that the number of AI incidents (such as deepfakes of world leaders, population monitoring, and other ethically questionable uses) has increased 26-fold since 2012. At the same time, mentions of AI in legislation have increased by more than 600% since 2016 and will continue to grow as lawmakers are forced to address the economic and social fallout of such powerful systems.

AI is poised to fundamentally change how society engages with technology and each other, and emotional intelligence will only grow as the systems become more sophisticated. Regulation, attention to standardized ethical guidelines, and simmering public discomfort with such a rapidly growing new technology are all potential barriers, but with the right approach, there are untold benefits to be had from a robust, emotionally intelligent AI ecosystem in our society.

Rana Gujral is a Grit Daily Leadership Network member and an entrepreneur, speaker, and CEO at Behavioral Signals, an enterprise software company that develops AI technology to analyze human behavior from voice data.


