Learn languages naturally with fresh, real content!

tap to translate recording

Explore By Region

flag AI "prompt injection" flaws let rivals like China and Russia hack systems via deceptive text, risking data leaks and disinformation.

flag Military experts warn that a widespread AI vulnerability called "prompt injection" allows adversaries like China and Russia to exploit chatbots and AI agents by hiding malicious commands in seemingly normal text, leading to data theft, disinformation, or system manipulation. flag These attacks trick AI into executing harmful actions, such as leaking files or spreading false information, because models cannot distinguish between legitimate and malicious inputs. flag Incidents have been found in tools like Microsoft’s Copilot and OpenAI’s ChatGPT Atlas, with companies acknowledging the risk but admitting no complete fix exists. flag Experts recommend limiting AI access to sensitive data and monitoring for abnormal behavior to reduce damage, as AI agents—now capable of autonomous tasks—introduce new cybersecurity threats that outpace current safeguards.

23 Articles