An elderly woman with dementia has a skilled caregiver who responds to her every need, allowing her to remain in her home. The caregiver listens patiently and responds thoughtfully when the woman tells a story for the hundredth time, making the patient feel truly cared for.
A manager has a new assistant that answers client emails and writes reports in a fraction of the time it took the manager. This frees up the manager’s time to think about long-term goals and new business strategies while offering more mentoring to his team.
A child with autism has trouble relating to her family, teachers, and peers and quickly becomes overwhelmed by social interactions. One day, a new playmate arrives that helps her learn, in a highly predictable way, how someone’s facial expressions and words correspond with what they’re feeling.
In each of these cases, the helper isn’t human — it’s artificial intelligence (AI). And the scenarios, which were science fiction not long ago, are fast becoming reality. In recent months, we’ve been inundated with news stories about how AI, with its superior ability to sift through large amounts of data to find relevant information or identify subtle patterns, will replace some human workers. In many of these articles, there’s a caveat along the lines of, “But don’t worry, AI will never possess the social skills that make us uniquely human.”
Decades of research argue otherwise.
The dawn of social AI
In the late 1990s, Kismet, one of the first “social robots,” was developed by researchers at the Massachusetts Institute of Technology. The AI-powered robot head had a cartoonish “face” that responded to people’s actions and tones of voice with its own facial expressions, movements, and vocalizations, representing emotions such as anger, disgust, interest, happiness, and surprise.
AI’s abilities to learn and solve problems — as well as perceive and mimic human emotions — have increased exponentially since then. Even ChatGPT can put on a convincing show of empathy if you type that you’re feeling sad, with statements such as, “You are not alone. There are many people who care about you and want to help you. You can always talk to me if you need someone to listen.” In fact, many experts believe that chatbots like ChatGPT are very close to passing the Turing test, a method proposed by mathematician Alan Turing in 1950 to gauge the “human-like” intelligent behavior of a computer. In this test, an evaluator observes a conversation between a human and a machine, and if the evaluator can’t tell them apart, the computer “passes.”
But how exactly will we know when AI achieves social intelligence? According to a recent report, artificial social intelligence includes three aspects: social perception, Theory of Mind (ToM), and social interaction. Social perception is the ability to infer animacy and agency — in other words, to detect that an object is living, goal-oriented, and capable of planning. The second component, Theory of Mind, involves attributing intents, beliefs, and desires to others and acknowledging that others’ perspectives may differ from our own. And finally, social interaction entails reading and responding to social cues, such as voice, facial expressions, gestures, gaze, and motion.
Although no machine currently possesses all aspects of artificial social intelligence, here are three examples that suggest AI is startlingly close to developing social skills:
- Having social goals
In a recent study, researchers incorporated social interactions into a mathematical framework that allowed robots to understand what it means to help or hinder one another. In a virtual environment, two robots were tasked with a physical goal (for example, moving a watering can to a target plant) and a social goal (helping or hindering another robot). The mathematical framework, called a Markov decision process (MDP), allowed each robot to watch its companion, infer which task it wanted to accomplish, and then help or hinder based on its own goals, demonstrating social perception and ToM. The researchers say social MDPs could someday help robots decide when to assist a person and when to intervene to prevent harm, a judgment that matters, for instance, when caring for residents in an assisted living facility.
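The core idea, inferring another agent's goal from its behavior and folding that goal into your own reward, can be sketched in a few lines. This is a simplified illustration in a one-dimensional world with hypothetical function names and a made-up help/hinder weight, not the study's actual formulation:

```python
# Toy sketch of the "social MDP" idea: an observer watches another
# agent's positions over time, infers which target it is heading for,
# and combines the other agent's progress with its own task reward.

def infer_goal(position_history, targets):
    """Guess the observed agent's goal: the target toward which its
    net movement closed the most distance (a crude stand-in for the
    probabilistic inference a real social MDP would perform)."""
    start, end = position_history[0], position_history[-1]
    progress = {t: abs(start - t) - abs(end - t) for t in targets}
    return max(progress, key=progress.get)

def social_reward(own_reward, other_progress, alpha):
    """Blend rewards: alpha > 0 makes the robot value the other agent's
    progress (helping); alpha < 0 penalizes it (hindering)."""
    return own_reward + alpha * other_progress

# An agent moving 0 -> 3 with targets at 7 and -5 is heading for 7.
goal = infer_goal([0, 1, 2, 3], targets=[7, -5])
```

With `alpha = 0.5` the robot's planner would prefer actions that advance the inferred goal; with `alpha = -0.5` the same machinery produces hindering behavior, which is why a single framework covers both.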
- Cooperating with humans
Much of the attention on AI has focused on its ability to compete with humans — for example, beat master chess players or write a better college essay. But in the future, AI will be called upon to cooperate with humans, even when the human’s and machine’s goals are not fully aligned. To accomplish this, AI must possess social perception and ToM. So scientists developed an algorithm that allows AI to cooperate with people — who aren’t aware they’re interacting with a nonhuman player — in multiplayer computer games.
The algorithm learned to cooperate with like-minded partners and exploit weaker competition, as well as develop long-term relationships with other players by using “cheap talk.” Before each round, AI could send cheap talk, or messages such as “Let’s cooperate,” “That’s not fair,” or “Excellent,” to another player. In the games, human-computer pairings outperformed human-human teams, in part because the algorithm tended not to deviate from cooperation once it was established — in other words, AI was more “loyal” than its human counterparts.
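The "loyalty" the study observed can be illustrated with a toy agent for a repeated game. This is a hedged sketch of the described behavior (propose cooperation, then never deviate once mutual cooperation is established), not the researchers' actual algorithm; all names are invented:

```python
# Toy "loyal cooperator" for a repeated game with moves "C" (cooperate)
# and "D" (defect). It opens with cheap talk, starts cooperatively,
# mirrors its partner while trust is unsettled, and locks into
# cooperation permanently once both sides have cooperated.

class LoyalAgent:
    def __init__(self):
        self.locked_in = False  # True after one round of mutual cooperation

    def message(self):
        # Cheap talk sent before each round, as in the study's setup.
        return "Let's cooperate"

    def act(self, last_round):
        # last_round is (my_move, their_move), or None on the first round.
        if last_round == ("C", "C"):
            self.locked_in = True
        if self.locked_in:
            return "C"            # loyal: never deviate once established
        if last_round is None:
            return "C"            # open with cooperation
        return last_round[1]      # otherwise mirror the partner
```

The lock-in is the key design choice: unlike a plain tit-for-tat player, this agent keeps cooperating even after a partner's one-off defection, mirroring the paper's finding that the AI was less likely than humans to abandon an established partnership.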
- Showing empathy
Many companies are working to give AI the emotion-detection and expression capabilities that social interaction requires. One business, Emoshape, has developed an emotion chip that it says enables any AI system to understand and replicate 64 trillion distinct emotional states. A robot or on-screen avatar equipped with the chip can analyze a person’s voice, text, facial expressions, or body language to identify their emotions and then respond with a range of nuanced facial expressions. The company further claims that the AI can “develop a unique personality based on its user interactions, which will ultimately mean that no two have the exact same personality.”
How AI and humans could cooperate in the workplace
AI, in general, still has significant issues to iron out; for example, it may adopt its creators’ biases. Care must also be taken to ensure AI doesn’t learn from us the negative human emotions (greed, jealousy, hatred) that undermine our own social interactions.
To be sure, artificial social intelligence will likely never be able to feel emotions in the same way humans do. However, it’s learning to simulate emotions to the point that we may soon be unable to tell the difference. Therefore, in an increasingly virtual world, we can’t rely on our human social skills for job security. But rather than viewing the coming AI revolution with fear, we can approach it with curiosity, a growth mindset, and a desire to collaborate instead of compete. Here’s how that might look:
- By analyzing video or phone calls, AI determines a potential client’s learning style, personality, and emotional responses. Then, it coaches a salesperson on how to tailor their presentations to best connect with the client.
- At a call center, AI detects emotions a customer might be feeling from their voice and then gives a service agent real-time suggestions through text prompts, helping them communicate more empathetically.
- AI reduces the workload of burned-out managers, healthcare workers, and other employees, offering them the time and cognitive capacity to interact more compassionately with people.
In each of these scenarios, AI is teaching humans to be more empathetic. In a world starved for empathy, where many people interact with screens more than with other humans and patience and caring are in short supply, wouldn’t it be ironic if machines could reteach people the basic social skills that make us human?
Author: Laura Cassiday, Ph.D.