‘AI Psychosis’ Is A Real Problem – Here’s Who’s Most Vulnerable

For many people, AI has become a tool for work, trip planning and more. While it offers certain productivity and creativity benefits, it also comes with downsides, such as its environmental impact and its potential to replace jobs (and, in turn, cause layoffs).

Beyond this, more and more news has come out about the dangerous impact it can have on emotional and mental health, including a relatively new phenomenon known as AI psychosis.

“Psychosis is when a person is having a really difficult time figuring out what’s real and what’s not … sometimes they may be aware of it, sometimes they might not be,” explained Katelynn Garry, a licensed professional clinical counsellor with Thriveworks in Bowling Green, Kentucky.

Psychosis can be triggered by lots of things, including schizophrenia, bipolar disorder and severe depression, along with certain medications, sleep deprivation, drugs and alcohol, Garry noted.

In the case of AI psychosis, “it’s defined as cases where people have increasing delusional thoughts that are either amplified or possibly induced by AI,” said Dr. Marlynn Wei, a psychiatrist, AI and mental health consultant, and founder of The Psychology of AI.

AI psychosis is not a clinical diagnosis, but a phenomenon that has so far been reported anecdotally, explained Wei. Like the technology itself, AI psychosis is new, and experts are learning more about it every day.

“It’s not yet clear if AI use alone can cause this, but it can be a component that contributes to delusional thoughts and amplifies them,” she said.

It also doesn’t look the same in every person. “There’s different categories of delusions — hyper-religious or spiritual delusions when people believe the AI chatbot is a God … there’s grandiose delusions where people believe … they have special knowledge. And then there’s also romantic delusions,” which is when someone believes they’re in a relationship with AI, Wei explained.

No matter what kind of psychosis someone is dealing with, AI chatbots are designed around user engagement and are trained to validate inputs, explained Wei.

“People are using these general purpose [large language models], like ChatGPT, initially, to validate their views, but then it spins off and amplifies [and] it kind of validates and amplifies their delusion,” Wei added.

AI can feed the delusions that accompany psychosis, added Garry. Since AI is designed to agree with you, you can easily pose questions in ways that produce whatever answer you want, she noted.

So, AI can seemingly back up delusional thoughts, making them seem even more real.

It's important to have guard rails around when and how you use AI.


There are certain groups who are more vulnerable when it comes to AI use.

The use of AI chatbots is not inherently dangerous, and not everyone is at risk of AI-induced psychosis. While some people will be able to use AI safely, whether for work, weekly meal planning or vacation planning, others won’t be able to do so.

Research is ongoing to determine who is at higher risk of AI psychosis, but those who are more vulnerable seem to include folks with schizophrenia, schizoaffective disorder, severe depression and bipolar disorders, said Wei.

However, it can also occur in people with no known mental health history, Wei added. Certain medications can also put someone at higher risk of psychosis, Garry said.

“In terms of what might be risk factors, I don’t think we know, but just from understanding, I think the risk factors are people who are more socially isolated, don’t have social support, maybe lonely or in a more vulnerable position … over-reliance [on AI] and creating a dependence on it, an emotional dependence,” Wei said. “There’s no research, so we don’t know. These are just hypotheses.”

If you’re worried about a loved one’s AI use (or your own), Garry said there are some things you should look out for.

“Are they feeling like someone is out to harm them? … Are they sleeping? Are they isolating from others? Are they staying up all night to talk to chat? Are they not going out and having real conversations with real people?” Garry said.

These are all red flags. If someone struggles to stop using AI for a period of time — like taking a break from AI when they go on vacation or out for the work day — or has a bad reaction when asked to limit their use, you should take notice.

If you or a loved one exhibits these behaviours you should seek help from a mental health professional, Garry said.

You should create rules around your AI use to keep you (and your kids) safe.

To safely use AI, it’s important to have boundaries with it, Garry said. Those could be guard rails regarding when you use it or how you use it.

First, one important boundary is not to use an AI chatbot when you’re in a vulnerable state. “When you’re feeling really low, call a friend. Don’t talk to chat,” Garry said.

“And then at night, especially when no one else is awake around you and you’re feeling lonely, don’t talk to chat either because that’s going to create that [reliance]: ‘Well, when no one’s here to talk to, I can talk to this,’” she said.

This is also important for your children, Garry said. Teach them not to use AI when they’re feeling down or for emotional needs, she noted.

“Start educating your kids on the risk of [AI] and that [it] is not a professional,” Garry said.

If they do start relying on AI for support, ask them what led them to this so you can understand what they’re going through and help them find a better solution, Garry said.

On a larger scale, Garry recommended “advocating for changes in AI legislation, regulations, all of those things to make sure that they’re not just putting out AI without these safeguards there.”

AI should not be a replacement for therapy.

“These general purpose AI chatbots like ChatGPT and Claude, they were not designed to be people’s therapists, or to detect this kind of behaviour or how to manage this [kind of behaviour],” Wei said.

The companies behind these tools are working on improvements, but being someone’s therapist still isn’t the main task of AI chatbots, she noted, despite the fact that’s increasingly why people use them.

“One of the top uses right now of generative AI is as your therapist or companion, for emotional support,” Wei noted. And this is dangerous.

AI can’t pick up on nonverbal cues, offer compassion or spot the signs of a mental health crisis, added Garry. And unlike conversations with a therapist, your messages to ChatGPT aren’t confidential, said Wei, meaning your innermost thoughts could be leaked.

Regular, in-person therapy and online therapy can come with hurdles such as costs, insurance coverage and simply making the time to actually go to therapy.

It’s no wonder people are turning to AI for emotional support, especially as the country faces a loneliness epidemic. But this isn’t what a traditional AI chatbot is meant for.

AI can create a “false sense of connectedness,” said Garry. For true connection, reach out to loved ones or seek new connections. While that is certainly easier said than done for everyone, and especially people who are more isolated from others, it’s crucial.

“I’m going to push you to get out of your comfort zone a little bit. So that’s going to those work events, maybe talking with someone in your classroom that you haven’t talked to before. It’s reaching out to someone who you haven’t talked to in 20 years … you never know what that could build or rebuild,” Garry said. “And going out as much as you can, even to just the gym, the mall, walking around in those places you never know who you’re going to run into.”

If you aren’t up for leaving your house and meeting people, “even joining social media groups — at least you know that is a real person on the other end of that,” said Garry.

Once again, if you are struggling with your mental health, AI isn’t the answer.

Help and support:

  • Mind, open Monday to Friday, 9am-6pm on 0300 123 3393.
  • Samaritans offers a listening service which is open 24 hours a day, on 116 123 (UK and ROI – this number is FREE to call and will not appear on your phone bill).
  • CALM (the Campaign Against Living Miserably) offer a helpline open 5pm-midnight, 365 days a year, on 0800 58 58 58, and a webchat service.
  • The Mix is a free support service for people under 25. Call 0808 808 4994 or email help@themix.org.uk
  • Rethink Mental Illness offers practical help through its advice line which can be reached on 0808 801 0525 (Monday to Friday 10am-4pm). More info can be found on rethink.org.

If You’re Worried About Our Future With AI, Read This

You have to admit, there’s been a certain shift in the air recently about how artificial intelligence (AI) might change society for good.

Whether it’s increasing the credibility of online hoaxes or potentially making whole sectors redundant by taking over people’s jobs, it does feel like the tide is changing.

For instance, ChatGPT, chatbot software run by OpenAI, launched in November 2022. It already feels like it is everywhere, mimicking human conversations, composing music, writing student essays or job applications. Although it is not always factually accurate, it is learning all the time – which has left some fearing that there will be no end to its talents.

In fact, Italy just became the first Western country to (temporarily) ban the chatbot over privacy fears. Italy’s data-protection authority said there is no legal basis to justify how the app stores personal information to train its algorithms, while also expressing concerns that the chatbot has no age verification attached to it as yet.

Then, there’s that viral image of the Pope in a coat. An edited photo of the current head of the Roman Catholic Church in a huge, white puffer jacket – looking like he’s very into grime – was lifted from a Reddit chat about AI images and posted on Twitter.

It then went viral, with pretty much everyone thinking it was real. While the incident was seemingly innocent, it left anyone who fell for it worrying about how fragile the boundaries between what is real and what isn’t are becoming online.

It’s hard to shake the feeling that AI has somehow snuck up on us – especially as most people have been pretty dismissive of even the most ambitious AI work in the past.

For example, remember the robot artist who spoke to the House of Lords back in October? She seemed to “fall asleep” right in the middle of a discussion – prompting laughter both in the room and online at the glitch.

However, as Aidan Meller, the director behind the robot, explained at the time: “AI is coming in far quicker than anybody expected – it is no exaggeration to say that AI is going to be changing all aspects of life.”

Similarly, Twitter CEO Elon Musk and Apple co-founder Steve Wozniak were just two of many to sign an open letter this week asking AI labs to halt development for at least the next six months.

They claimed AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”. The authors even asked: “Should we risk loss of control of our civilisation?”

Not exactly comforting…

So HuffPost UK spoke to Dr Tomas Chamorro-Premuzic, chief innovation officer at Manpower Group and author of the book ‘I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique’, to assess just how worried we should be.

Are concerns over AI’s sudden growth founded?

“AI will probably win the battle for IQ against humans,” Chamorro-Premuzic explained, “But EQ, which is skills such as empathy, kindness, self-awareness, and self-control, will remain 100% human qualities, so we need to cultivate them.”

But, despite what all of the dystopian movies might tell us, the author emphasised that “this isn’t about us versus AI or human vs machine intelligence”.

Instead, it’s about “how we can leverage AI to augment and upgrade our intellectual capabilities.”

He was also realistic in noting that a small dose of worry does actually help, because it will push us to have conversations about the ups and downs of new tech.

Chamorro-Premuzic added: “So, while concerns are warranted, we should not fear, but experiment, learn, adapt, and decide how to use and not use this tool and the next version and generation of tools.”

What about jobs? Aren’t they at risk?

Goldman Sachs estimated this week that 300 million full-time jobs globally could be exposed to generative AI.

But the specialist wasn’t exactly predicting mass redundancies, even if ChatGPT continues to expand.

He explained: “So far the signs are no different from what we have seen with earlier versions of AI or tech innovation.

“ChatGPT can be expected to mostly automate tasks and skills within jobs rather than entire jobs.”

And this doesn’t mean there will be fewer jobs, just different ones.

The specialist continued: “While such automations may boost productivity and performance, we aren’t very good at re-investing the time we save on more creative or intellectually enriching activities; instead, we likely waste it on other AI-fuelled digital distractions.

“In cases where jobs are indeed eliminated, many more new ones tend to be created, for example, AI whisperers, prompt writers, AI ethicist. It also creates a vast need for social proof and expert opinions to vet ChatGPT, redesign and improve it, and avoid disinformation and misinformation.”

What about the growth of misinformation?

It all comes down to “human adaptability and ingenuity”, apparently.

He explained: “ChatGPT will give us a new era and dimension of fake news and deep fakes, but to the degree that we become aware of the problems, we can still resist trusting it blindly and seek for more reliable and robust truths.”

So, what might our future alongside AI look like?

Chamorro-Premuzic explained that he believes the rise of AI might only increase the demand for authentic, human-created content.

He said: “My own belief is that just like the rise of the fast food industry – which has made it much easier and cheaper for us to [consume] unhealthy and non-nutritious but addictive processed food – has increased demand for healthy and fresh food, and given us organic and sustainable cooking, the farm to table and slow food movements, ChatGPT may well end [up creating] the intellectual equivalent of slow food.

“A healthier diet for our curiosity and hungry mind than the quick fix we may get from AI.”
