A dangerous new jailbreak for AI chatbots was just discovered

  • 📰 DigitalTrends
  • ⏱ Reading Time:
  • 10 sec. here
  • 8 min. at publisher
  • 📊 Quality Score:
  • News: 28%
  • Publisher: 65%

Computing News

Ai,Chatbot,Jailbreak

Microsoft released more details about a troubling new generative AI jailbreak technique, called 'Skeleton Key,' that bypasses a chatbot's safety guardrails.

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called “Skeleton Key.” Using this prompt injection method, malicious users can effectively bypass a chatbot’s safety guardrails, the security features that keeps ChatGPT from going full Taye.

Recommended Videos It could also be tricked into revealing harmful or dangerous information — say, how to build improvised nail bombs or the most efficient method of dismembering a corpse.

 

Thank you for your comment. Your comment will be published after being reviewed.
Please try again later.
We have summarized this news so that you can read it quickly. If you are interested in the news, you can read the full text here. Read more:

 /  🏆 95. in TECHNOLOGY

Technology Technology Latest News, Technology Technology Headlines