In a paper posted to a preprint server, researchers from AWS AI Labs have shown that large language models such as ChatGPT can be tricked into giving prohibited answers. They also propose methods to counter the issue.
The AWS researchers found that simple audio cues can bypass the safeguards designed to stop AI systems from answering dangerous or illegal questions, despite existing attempts to prevent such attacks. Their models, trained on dialogue data with spoken instructions, excel at spoken question-answering, scoring over 80 percent on safety and helpfulness metrics.