are also exploring ways of building applications using the capabilities provided by large language models. The launch of OpenAI’s plugins threatens to torpedo these efforts, Guo says.
Dan Hendrycks, director of the Center for AI Safety, a non-profit, believes plugins make language models more risky at a time when companies like Google, Microsoft, and OpenAI are seeking to limit their liability via the AI Act. He calls the release of ChatGPT plugins a bad precedent and suspects it could lead other makers of large language models to take a similar route.
GPT-4 can, for example, execute Linux commands, and the GPT-4 red-teaming process found that the model can explain how to make bioweapons, synthesize bombs, or buy ransomware on the dark web. Hendrycks suspects extensions inspired by ChatGPT plugins could make tasks like writing spear-phishing or phishing emails a lot easier.
Part of the problem with plugins for language models is that they could make it easier to jailbreak such systems, says Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Since you interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities.