Prepare to Get Manipulated by Emotionally Expressive Chatbots

It's nothing new for computers to mimic human social etiquette, emotion, or humor. We just aren't used to them doing it very well.

OpenAI's presentation of an all-new version of ChatGPT on Monday suggests that's about to change. It's built around an updated AI model called GPT-4o, which OpenAI describes as "multimodal" because it is better able to make sense of visual and auditory input. You can point your phone at something, like a broken coffee cup or a differential equation, and ask ChatGPT to suggest what to do. But the most arresting part of OpenAI's demo was ChatGPT's new "personality."

The upgraded chatbot spoke with a sultry female voice that struck many as reminiscent of Scarlett Johansson, who played the artificially intelligent operating system in the movie Her. Throughout the demo, ChatGPT used that voice to adopt different emotions, laugh at jokes, and even deliver flirtatious responses—mimicking human experiences software does not really have.

OpenAI's launch came just a day before Google I/O, the search company's annual developer showcase, and surely not by coincidence. Google showed off a more capable prototype AI assistant of its own, called Project Astra, which can also converse fluidly via voice and make sense of the world via video.

But Google steered clear of anthropomorphism, its helper adopting a more restrained and robotic tone. Last month, researchers at Google DeepMind, the company's AI division, released a lengthy technical paper titled "The Ethics of Advanced AI Assistants." It argues that AI assistants designed to act in more humanlike ways could cause all sorts of problems, ranging from new privacy risks and new forms of technological addiction to more powerful means of misinformation and manipulation. Many people already spend lots of time with chatbot companions or AI girlfriends, and the technology looks set to become a lot more engaging.

When I spoke with Demis Hassabis, the executive leading Google's AI charge, ahead of Google's event, he said the research paper was inspired by the possibilities raised by Project Astra. "We need to get ahead of all this given the tech that we're building," he said. After Monday's news from OpenAI, that warning rings truer than ever.

OpenAI didn't acknowledge such risks during its demo. More engaging and convincing AI helpers might push people's emotional buttons in ways that amplify their ability to persuade and prove habit-forming over time. OpenAI CEO Sam Altman leaned into the Scarlett Johansson references on Monday, tweeting out "her." OpenAI did not immediately return a request for comment, but the company says its governing charter requires it to "prioritize the development of safe and beneficial AI."

It certainly seems worth pausing to consider the implications of deceptively lifelike computer interfaces that peer into our daily lives, especially when they are coupled with corporate incentives to seek profits. It will become much more difficult to tell if you're speaking to a real person over the phone. Companies will surely want to use flirtatious bots to sell their wares, while politicians will likely see them as a way to sway the masses. Criminals will of course also adapt them to supercharge new scams.

Even advanced new "multimodal" AI assistants without flirty front ends will likely introduce new ways for the technology to go wrong. Text-only models like the original ChatGPT are susceptible to "jailbreaking," which unlocks misbehavior. Systems that can also take in audio and video will have new vulnerabilities. Expect to see these assistants tricked in creative new ways to unlock inappropriate behavior and perhaps unpleasant personality quirks.
