ChatGPT can be tricked into producing malicious code that could be used to launch cyberattacks, a study has found. OpenAI's tool and similar chatbots can create written content based on user commands, having been trained on enormous amounts of text data from across the internet. They are designed with protections in place to prevent their misuse and to address issues such as bias.
Much as these generative AI tools can inadvertently get their facts wrong when answering questions, they can also create potentially damaging computer code without realising it. Mr Peng suggested a nurse could use ChatGPT to write code for navigating a database of patient records.
Source: Metro Newspaper UK