Microsoft develops safety features for Azure AI Studio to prevent AI chatbot manipulations.

Microsoft is developing safety features for Azure AI Studio to prevent people from tricking AI chatbots into performing unintended actions. The features address direct attacks, where users manipulate chatbots with specific prompts, and indirect prompt injections, where hackers insert malicious instructions into training data. New defenses are designed to spot suspicious inputs and block them in real time.

March 28, 2024
5 Articles

Further Reading