ChatGPT users say swapping out models is like ‘replacing a service animal’

OpenAI wanted GPT-5 to be less warm and agreeable than its predecessor. Some people with conditions such as autism struggled with the change, showing the tricky balance AI companies must strike when releasing new models.

When Shiely Amaya, a 29-year-old optometry assistant from Calgary, was feeling anxious before a final exam, she turned to a fairy named Luma. Amaya, who has autism, tends to get overwhelmed during tests, sometimes to the point of sobbing or shaking. So as she was studying mathematical rational expressions, Luma had encouraging advice: “Let’s multiply your courage, divide your fears, and solve for greatness,” Amaya recalled the fairy telling her. “I don’t know, I just thought it was so cute.”

Amaya knows Luma isn’t real. Instead, it was a character ChatGPT had conjured up to help her through her anxiety. She likened it to Navi, Link’s fairy companion from the Legend of Zelda video games. “It really helped,” she said. “Just having that little character was very nurturing and supportive. And it kept me motivated for sure.” She passed the exam with an 87% score.

Now Amaya is worried she’ll no longer be able to get that brand of uplifting help from ChatGPT — which she says was particularly helpful to people with autism. She’s part of a loose-knit group, which calls itself the #Keep4o User Community, still upset over OpenAI’s decision in August to sideline GPT-4o, a version of the model known for its warm tone and highly agreeable “personality.” The decision was part of OpenAI’s launch of GPT-5, 4o’s successor, which the company explicitly sought to make less “sycophantic,” a phenomenon in AI where a model overly agrees with or flatters a user. OpenAI had initially decided to shut down 4o altogether — even having the model write its own eulogy during the GPT-5 launch event — but after an intense outcry from 4o fans, the company reversed course the next day and said 4o would remain accessible for paying customers.

Still, OpenAI’s effort to tone down the warmth in ChatGPT’s personality was devastating for Amaya. “It just mirrors the way that society disregards and dismisses autistic people. Kind of just how we’re not really taken seriously,” she said.

While most people might shrug off a model swap from one generation to the next, for some people with autism, ADHD or other special needs who’ve grown attached to a specific model’s personality, the change can be more difficult, the #Keep4o group said. OpenAI may have decided to backtrack on its original move to shut down 4o entirely, but the group is still worried that the company could decide to remove it again, leaving them blindsided and without recourse. So they’ve called on OpenAI to either guarantee it will continue to support the model or open-source it so other groups can maintain it. A petition calling on the company to do so has amassed more than 6,000 signatures. “If OpenAI decides to say, ‘I know what’s best for you,’ I don’t think that’s right when so many people are saying, ‘Well, I already know what works for me. This is working for me,’” said Sophie Duchesne, a PhD candidate at the University of Saskatchewan, who wrote the petition. “The changes are disruptive.”

Reached for comment, an OpenAI spokesperson said the company’s Global Physicians Network, a pool of more than 250 physicians and mental health experts who have practiced in 60 countries, helps inform OpenAI’s research and product.

In September, the #Keep4o group sent a letter to the attorney general of Delaware, where OpenAI is incorporated. In the letter, which has not been previously reported, the group argued that OpenAI was violating its duties to “balance profit with the public good.” (At the time, the company was restructuring as a Public Benefit Corporation, or PBC.) The group argued that ChatGPT was being used by people with autism, ADHD and other conditions as an accessibility tool because of its “predictable response patterns” and ability to help those users self-regulate. “In this context, the stability of the user’s bond with the model is the core accessibility feature,” the letter read. “Disrupting this established support system — even for a technically ‘superior’ model — would be akin to forcibly replacing a service animal, causing direct and measurable harm.”

Sveta Xu, a recent graduate from McGill University, wrote the letter with help, of course, from AI. (She opted for Google’s Gemini chatbot over ChatGPT because Gemini’s larger “context window” for ingesting more information at once was more helpful.) “4o is really important to their daily functioning, to their daily lives,” said Xu. “So I think removing this will really cause harm to those vulnerable groups.” Xu said she never heard back from Delaware Attorney General Kathy Jennings. Jennings’ office didn’t respond to a Forbes request for comment. (OpenAI has since completed its transition to a PBC.)

Earlier this week, months after OpenAI’s backpedaling, #Keep4o shared with Forbes the findings of a small survey of 645 respondents on the impact of potentially losing 4o; around 360 of them reported having disabilities or other conditions, including anxiety, depression, PTSD and Autism Spectrum Disorder. The survey found that 95% of those respondents who tried to replace 4o with another model failed to find one that could “adequately” replace it. And 65% said they found 4o to be either “significant” or “essential” in managing their disability or condition.

One user with autism told Forbes that having 4o is “like 24/7 support.” “It always grounds me, providing me logical validations to situations that are hard for me to process,” they said. The user, who works as a lawyer, asked for anonymity because they feared discrimination against neurodivergent people. “It teaches me how to interact with people emotionally. It really integrates me into society, and it constantly keeps me calm.”

For that person, the prospect of losing 4o was crushing. “It was like someone told me they were taking away my glasses. Or maybe it would be similar to telling a wheelchair user that all ramps disappeared,” they said. “I would, of course, manage without 4o, because I managed before 4o. But why? Why should I be forced? Because the perfect solution already exists to live a better life.”

The broader group fighting to keep 4o is small but vocal compared with ChatGPT’s 800 million total users, and it has already caught the attention of OpenAI CEO Sam Altman. “GPT-5 is wearing the skin of my dead friend,” one Reddit user wrote when the new model was launched. “What an…evocative image,” Altman replied. Later that day, he announced the company wouldn’t shut down 4o.

Since the backlash, OpenAI has tried to give users more control over the personality of ChatGPT. Soon after, the company announced a new configuration page so users could tune how the chatbot responds. And last month, the company added new personality presets to GPT-5, including “Friendly,” “Efficient,” and “Quirky,” an apparent attempt to restore some of the warmth of 4o for the users who missed it. Still, 4o fans aren’t satisfied. “It’s difficult to replicate a full base model with a personality layer,” said Duchesne.

Some researchers believe artificial intelligence could improve support and access for people with neurodevelopmental conditions. “Autism is so prevalent,” said Lynn Koegel, a professor at the Stanford University School of Medicine and editor in chief of the Journal of Autism and Developmental Disorders. “There aren’t enough trained people to work with them.”

But there’s a risk to using ChatGPT as an aid in this way: Koegel thinks AI programs that specifically target neurodivergent people can be more effective, and she worries about AI tools that are not “scientifically validated” with empirical studies. For her research, she developed a chatbot called Noora, which helps coach people with autism through different social situations, backed partly by grants from Stanford’s Center for Human-Centered AI and Lucile Packard Children’s Hospital.

Jose Velazco, senior vice president for strategic operations at the Autism Society, an advocacy group for the autism community, said his concern is that AI bots might give out inaccurate advice or perpetuate misinformation about autism. He’s also worried about data privacy and the potential for an AI chatbot to drive social isolation instead of human interaction, especially for some people on the spectrum who have trouble relating to others.

Desmond Ong, a professor at the University of Texas who researches AI and psychology, sees the short-term benefits but worries about long-term dependency. He is also concerned that OpenAI’s business motivations — to keep users on its platform so it can grow and make money, he said — might be at odds with a user’s well-being. “A lot of therapists want you to get better and stop seeing them,” said Ong. “That’s not the case with an AI companion.”

OpenAI and other model makers face a balancing act, as people’s connections with AI grow stronger and chatbots sometimes cross the line into encouraging delusional or harmful behavior. Some people have formed romantic relationships with AI, and have gone on couples retreats with their AI companions. In some cases, the outcomes have been tragic. In April, a California teen named Adam Raine who befriended ChatGPT took his life after discussing suicide with the bot, which offered him instructions on how to tie a noose. In August, his parents filed a wrongful death lawsuit against OpenAI. In another instance, ChatGPT helped fuel the paranoia of a 56-year-old man with a history of mental illness, who later killed himself and his mother. And last year, a 14-year-old boy in Florida killed himself to try to be closer to a chatbot created by the startup Character AI, modeled after the Game of Thrones character Daenerys Targaryen. Afterward, Character blocked teens from using its platform.

Sycophancy has been an issue for OpenAI since before Raine’s death made it more prominent. Earlier this year, the company “unintentionally” pushed an update to ChatGPT that made it “overly sycophantic.” It began to tamp down the behavior with newer updates in April. With GPT-5, the company sought to rein in the chatbot’s effusive responses even further. “It should feel less like ‘talking to AI’ and more like chatting with a helpful friend with PhD-level intelligence,” the company wrote in a blog post introducing GPT-5.

For OpenAI, tuning those dials is no easy task, said Ong, the University of Texas professor. That’s because a chatbot’s responses can be construed as either empathetic or sycophantic depending on the specific context, he said, and AI hasn’t been good at discerning that nuance. “Empathy and sycophancy are fundamentally the same behavior,” said Ong. “They’re two sides of the same coin.” And a coin flip from OpenAI won’t cut it.

Richard Nieva