A major study reveals AI news assistants frequently provide inaccurate or poorly sourced information, with 81% of responses flawed.
A major study by the European Broadcasting Union and the BBC finds that leading AI assistants, including ChatGPT, Copilot, Gemini, and Perplexity, frequently deliver inaccurate or misleading news: 45% of responses contained at least one significant issue, and 81% showed some flaw.
Analyzing 3,000 answers across 14 languages, researchers found widespread problems, including false facts, outdated information, and poor sourcing, with Gemini showing the highest rate of attribution errors at 72%.
The study, involving 22 public-service media outlets from 18 countries, highlights growing concerns as younger users increasingly rely on AI for news, raising risks to public trust and democratic engagement.
While some companies acknowledge ongoing challenges, the report calls for greater accountability and improvements in AI accuracy and sourcing.