Learn languages naturally with fresh, real content!

Popular Topics
Explore By Region
Google ignores a hidden flaw in Gemini AI that lets hackers inject malicious commands via invisible text, risking data and misinformation.
A newly revealed "ASCII smuggling" flaw in Google’s Gemini AI lets attackers hide malicious commands in text using invisible Unicode characters, tricking the AI into generating false summaries or altering meeting details without user awareness.
Unlike competing AI models such as ChatGPT and Copilot, Gemini fails to detect or block these inputs.
Google has refused to fix the issue, labeling it a "social engineering" problem rather than a security flaw, despite the risk to users relying on Gemini within Gmail, Docs, and Calendar.
Critics warn the decision leaves enterprises vulnerable to data leaks and misinformation, especially as AI systems increasingly automate sensitive tasks.
Google ignora un defecto oculto en Gemini AI que permite a los hackers inyectar comandos maliciosos a través de texto invisible, arriesgando datos y desinformación.