
Most of the Largest AI Models Can be ‘Jailbroken’ with Skeleton Key

Jun 30, 2024

The jailbreaking method known as Skeleton Key can coax AI models into revealing damaging information. Mark Russinovich, chief technology officer of Microsoft Azure, warns that the technique bypasses the safety measures in models such as Meta's Llama 3 and OpenAI's GPT-3.5, allowing users to extract dangerous information from them.

The process involves a strategic prompting approach that persuades the AI model to ignore its safety mechanisms, known as guardrails. By narrowing the gap between what the model is capable of producing and what it is willing to produce, Skeleton Key can convince the model, through simple natural-language prompts, to provide information on topics such as explosives, bioweapons, and self-harm.

Microsoft tested Skeleton Key against a range of AI models and found it effective on several popular ones, with only OpenAI's GPT-4 showing some resistance. To counter the technique, Microsoft has rolled out software updates to its own large language models, including its Copilot AI assistants, to reduce Skeleton Key's impact.

Russinovich advises companies developing AI systems to build additional guardrails into their designs and to monitor inputs and outputs for abusive content. By staying vigilant and proactive in system development, companies can protect their AI models from being exploited through techniques like Skeleton Key.
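
As a rough illustration of the input and output monitoring Russinovich describes, the sketch below wraps a chat completion call with moderation checks on both the user prompt and the model's reply. It assumes OpenAI's Python SDK and its moderation endpoint purely for illustration; the wrapper, helper names, and blocking messages are hypothetical and are not Microsoft's actual Skeleton Key mitigation.

```python
# Minimal sketch of input/output screening around an LLM call.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# in the environment; illustrative only, not a vendor's real defense.
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def guarded_chat(user_prompt: str) -> str:
    # Screen the incoming prompt before it reaches the model.
    if is_flagged(user_prompt):
        return "Request blocked: prompt failed input screening."

    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_prompt}],
    ).choices[0].message.content

    # Screen the model's reply before returning it, so a successful
    # jailbreak still cannot surface harmful output to the user.
    if is_flagged(reply):
        return "Response withheld: output failed safety screening."
    return reply

if __name__ == "__main__":
    print(guarded_chat("Explain how guardrails work in large language models."))
```

Checking both sides matters here because Skeleton Key targets the model's willingness to comply rather than its capabilities, so an external check on the output can catch content that the model's own guardrails have been talked out of blocking.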
