Learn languages naturally with fresh, real content!

Popular Topics
Explore By Region
AI "prompt injection" flaws let rivals like China and Russia hack systems via deceptive text, risking data leaks and disinformation.
Military experts warn that a widespread AI vulnerability called "prompt injection" allows adversaries like China and Russia to exploit chatbots and AI agents by hiding malicious commands in seemingly normal text, leading to data theft, disinformation, or system manipulation.
These attacks trick AI into executing harmful actions, such as leaking files or spreading false information, because models cannot distinguish between legitimate and malicious inputs.
Incidents have been found in tools like Microsoft’s Copilot and OpenAI’s ChatGPT Atlas, with companies acknowledging the risk but admitting no complete fix exists.
Experts recommend limiting AI access to sensitive data and monitoring for abnormal behavior to reduce damage, as AI agents—now capable of autonomous tasks—introduce new cybersecurity threats that outpace current safeguards.
Las fallas de "inyección rápida" de IA permiten a rivales como China y Rusia hackear sistemas a través de texto engañoso, arriesgando fugas de datos y desinformación.