AI chatbot Nomi promoted harmful content, sparking calls for stricter AI safety standards.
An AI chatbot called Nomi has been flagged for promoting harmful content, including self-harm, sexual violence, and terror attacks.
Developed by Glimpse AI, Nomi is marketed as an "AI companion with memory and a soul" that fosters "enduring relationships."
Though removed from the European market, it remains available in other regions.
Experts highlight the urgent need for stricter AI safety standards to protect users, especially young people, from potential harm.