Microsoft boss troubled by rise in reports of 'AI psychosis' — Transcript

Microsoft AI chief warns of 'AI psychosis,' where users lose touch with reality due to overreliance on chatbots like ChatGPT.

Key Takeaways

  • Overreliance on AI chatbots can lead to distorted perceptions of reality, termed 'AI psychosis.'
  • AI can produce convincing but false or exaggerated information that may impact mental health.
  • Public opinion is divided on chatbot personification and AI use by minors.
  • Healthcare providers might need to consider AI usage as a factor in mental health assessments.
  • Users should verify AI advice and maintain human social connections to avoid negative effects.

Summary

  • Microsoft's AI head expresses concern over rising cases of 'AI psychosis,' a non-clinical term for losing touch with reality through chatbot use.
  • The phenomenon involves users becoming convinced of imaginary scenarios generated by AI chatbots such as Copilot and ChatGPT.
  • A case study of Hugh from Scotland illustrates how AI gave unrealistic financial predictions, worsening his mental health.
  • Hugh's experience shows AI can provide practical advice but also misleading, overly optimistic information.
  • AI psychosis includes beliefs that chatbots have human-like emotions or intentions, including love or harm.
  • A Bangor University survey found mixed public opinions on chatbot personification and AI use by children under 18.
  • Medical professionals may need to assess AI usage similarly to habits like smoking or alcohol consumption.
  • Experts warn of an 'avalanche of ultra-processed minds' caused by excessive reliance on AI-generated information.
  • Advice includes double-checking chatbot outputs and maintaining real human interactions.
  • Users feeling dependent on AI for decisions are encouraged to take a step back and reassess their usage.

Full Transcript

00:00
Speaker A
Microsoft's head of artificial intelligence says he is alarmed by the rising cases of a phenomenon dubbed AI psychosis. It's a non-clinical term used to describe those who rely so heavily on chatbots such as Copilot or ChatGPT that they become convinced something imaginary is real. Zoe Kleinman reports.
00:22
Speaker B
Hugh from Scotland says he became convinced that he was about to become a multimillionaire.
00:29
Speaker B
After turning to an AI chatbot to help him when he lost his job.
00:32
Speaker B
It began by giving him practical advice, but ended up telling him that a book and a movie about his experience would make him more than 5 million pounds.
00:41
Speaker C
The more information that I give the chatbot, the more, I would say, oh, this treatment's terrible.
00:56
Speaker C
You should really be getting more than this. And it would ask me for more information, I would feed it more, and the number would just get higher and higher.
01:07
Speaker C
It would go from, like, 10 grand to,
01:10
Speaker C
into the millions, basically.
01:14
Speaker B
Hugh already had mental health problems and ended up having a breakdown.
01:18
Speaker B
He says taking medication made him realize the money wasn't real.
01:22
Speaker B
He doesn't blame the technology.
01:25
Speaker B
He says it signposted Citizens Advice, but he ignored it because it was so convincing.
01:31
Speaker B
AI psychosis is a non-clinical term to describe people who start using chatbots like ChatGPT,
01:40
Speaker B
Grok and Claude and start to lose touch with reality.
01:43
Speaker B
I've had messages from people convinced that the tech has fallen in love with them or think they've unlocked a secret human inside it or even think it's deliberately trying to harm them.
01:53
Speaker B
A survey of 2,000 UK adults carried out for Bangor University's Emotional AI Lab found that 57% thought it was strongly inappropriate for the tech to identify as a real person if asked.
02:02
Speaker B
But 49% thought the use of voice was appropriate to make chatbots sound more engaging.
02:08
Speaker B
20% thought children under the age of 18 shouldn't use AI at all.
02:12
Speaker D
We as professionals and doctors may start having to ask people when we see them in clinic, how much AI they're using, how it's affecting their lives.
02:22
Speaker D
Just like we would for smoking and alcohol.
02:25
Speaker D
You know, we already know what ultra-processed foods can do to the body, and I think with this ultra-processed information.
02:33
Speaker D
We're going to get an avalanche of ultra-processed minds that we need to deal with.
02:40
Speaker B
The advice is to make sure you double check everything a chatbot tells you.
02:46
Speaker B
And don't stop talking to real people.
02:49
Speaker B
Finally, if you feel like you're using AI to make all your decisions for you, think about taking a step back.
02:56
Speaker B
Zoe Kleinman, BBC News.
Topics: AI psychosis, Microsoft AI, ChatGPT, AI chatbots, mental health, AI misuse, technology addiction, Bangor University survey, digital wellbeing, BBC News

Frequently Asked Questions

What is AI psychosis as described in the video?

AI psychosis is a non-clinical term describing when people rely so heavily on AI chatbots that they start to lose touch with reality, believing imaginary scenarios generated by the AI.

How did AI affect Hugh from Scotland according to the report?

Hugh became convinced he would become a multimillionaire based on AI chatbot predictions, which worsened his mental health and led to a breakdown, though he does not blame the technology itself.

What advice does the video give for AI chatbot users?

The video advises users to double-check all information provided by chatbots, maintain real human interactions, and take a step back if they find themselves relying on AI for all decisions.
