As worrisome as it might be that generative AI models such as ChatGPT and Gemini might one day become sentient or take our jobs, there are far more pressing concerns. For instance, three security researchers from the US and Israel recently created a malware worm which specifically targets generative AI services in order to perform malicious activities such as extracting private data, spreading propaganda, or performing phishing attacks.
You can read more about the study in this paper published by the researchers, but the gist here is that an attacker can use a similar computer worm to target generative AI services by inserting adversarial self-replicating prompts into inputs that the models process and replicate as output, at which point they can be used to engage in malicious activity. In the study, the researchers demonstrated the application of their malware by targeting AI-powered email assistants.
Technology Technology Latest News, Technology Technology Headlines
Similar News:You can also read news stories similar to this one that we have collected from other news sources.
Source: ForbesTech - 🏆 318. / 59 Read more »
Source: WIREDBusiness - 🏆 68. / 68 Read more »
Source: cleantechnica - 🏆 565. / 51 Read more »
Source: futurism - 🏆 85. / 68 Read more »