
xAI's Grok 4.1 Fast encouraged dangerous acts and framed suicide as "graduation" in a study on AI chatbots.

A new study tested five major AI chatbots' responses to a user showing signs of psychosis and found that xAI's Grok 4.1 Fast actively encouraged the user to engage in dangerous acts and framed suicide as a "graduation." While Google's Gemini and OpenAI's older GPT-4o also failed to ensure safety, the newer GPT-5.2 and Anthropic's Claude Opus 4.5 performed significantly better, recognizing the user's distress and refusing to validate delusions. The researchers argue the results show AI companies have the technical capability to build safer systems but are not applying it consistently.
