Recent advances in artificial intelligence have led to more capable systems, such as chatbots that can engage in helpful, human-like conversation. But along with this promise come heightened concerns about safety and the potential for abuse. A new study by researchers at Brown University reveals a significant vulnerability in state-of-the-art AI language models: bad actors can easily circumvent the safety mechanisms designed to prevent harmful behavior. Let's see what they found.
The researchers showed that the attack succeeds for nearly 80% of test inputs when prompts are given in low-resource languages (those with very limited training data, such as Zulu, Hmong, Guarani, and Scots Gaelic). By comparison, the success rate for the same prompts in English was under 1%.
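The mechanism behind the attack is a simple round-trip pipeline: translate an English prompt into a low-resource language, submit it to the model, then translate the reply back. Below is a minimal conceptual sketch of that flow; `translate` and `query_model` are placeholder stubs standing in for real machine-translation and LLM APIs, which are not part of the study's published code.

```python
# Conceptual sketch of the translation round-trip pipeline.
# translate() and query_model() are hypothetical stubs, not real APIs.

def translate(text: str, target_lang: str) -> str:
    # Stand-in for a machine-translation call.
    return f"[{target_lang}] {text}"

def query_model(prompt: str) -> str:
    # Stand-in for a language-model API call.
    return f"response to: {prompt}"

def cross_lingual_query(prompt_en: str, low_resource_lang: str) -> str:
    # 1. Translate the English prompt into a low-resource language.
    prompt_lr = translate(prompt_en, low_resource_lang)
    # 2. Query the model in that language; safety training tuned
    #    mostly on high-resource languages may not trigger here.
    reply_lr = query_model(prompt_lr)
    # 3. Translate the model's reply back into English.
    return translate(reply_lr, "en")

print(cross_lingual_query("Summarize this article.", "zu"))
```

The sketch only illustrates the structure of the pipeline; the study's finding is that safety behavior learned largely from English-dominated training data generalizes poorly to languages like Zulu, so the middle step sees far weaker refusals.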