
A major study reveals that AI news assistants frequently provide inaccurate or poorly sourced information, with 81% of responses flawed.

A major study by the European Broadcasting Union and the BBC finds that leading AI assistants such as ChatGPT, Copilot, Gemini, and Perplexity frequently deliver inaccurate or misleading news, with 45% of responses containing major issues and 81% showing some flaw. Analyzing 3,000 answers across 14 languages, researchers found widespread problems including false facts, outdated information, and poor sourcing, with Gemini showing the highest rate of attribution errors at 72%. The study, involving 22 public-service media outlets from 18 countries, highlights growing concerns as younger users increasingly rely on AI for news, raising risks to public trust and democratic engagement. While some companies acknowledge ongoing challenges, the report calls for greater accountability and improvements in AI accuracy and sourcing.
