If you’re an employer tempted to experiment with generative AI tools like ChatGPT, there are certain data protection pitfalls that you’ll need to consider. With an increase in privacy and data protection legislation in recent years – in the US, Europe and around the world – you can’t simply feed human resources data into a generative AI tool. After all, personnel data is often highly sensitive, including performance data, financial information, and even health data.
And this means it’s extremely risky to feed that data into a generative AI tool. Why? Because many generative AI tools use the information given to them to fine-tune the underlying language model. In other words, the tool could use the information you feed into it for training purposes – and could potentially disclose that information to other users in the future. So, let’s say you use a generative AI tool to create a report on employee compensation based on internal employee data. That sensitive data could end up in the model’s training set and later surface in another user’s output.
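One practical mitigation is to scrub obvious personal identifiers from text before it ever reaches a third-party tool. The sketch below shows the idea in Python; the regex patterns and placeholder labels are illustrative assumptions only, and real PII detection requires far more than a handful of regular expressions.

```python
import re

# Minimal sketch: replace obvious personal identifiers with placeholder
# labels before sending text to any third-party generative AI service.
# The patterns below are illustrative, not a complete anonymization tool.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # US Social Security numbers
    (re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"), "[AMOUNT]"),  # dollar figures, e.g. salaries
]

def redact(text: str) -> str:
    """Replace each pattern match with its placeholder label."""
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text
```

For example, `redact("Email jane.doe@example.com about the $85,000 offer.")` returns the string with the address and salary replaced by `[EMAIL]` and `[AMOUNT]`. Redaction reduces exposure but does not eliminate it: names, job titles, and context can still identify a person, which is why the legislation discussed above matters regardless of tooling.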
As another example, let’s say you ask a generative AI tool to generate a report on typical IT salaries in your local area. There’s a risk that the tool has scraped personal data from the internet – without consent, and in violation of data protection laws – and then serves that information up to you. An employer who uses personal data offered up by a generative AI tool could bear some liability for the underlying data protection violation.