after using ChatGPT to research non-existent legal cases, the implications are far-reaching.
The case also calls into question whether businesses are legally liable for false or defamatory information produced by their AI systems. In the U.S., Section 230 of the Communications Decency Act has historically shielded internet companies from legal responsibility for content created by third parties and hosted on their platforms.
In Walters' case, ChatGPT was asked to summarize a real federal court case from a link to an online PDF. In response, the AI fabricated a case summary containing untrue accusations against Walters. The journalist who requested the summary double-checked the facts, discovered that the information was fabricated, and decided not to publish it. It is still unknown how Walters learned of the false information.
Law professor Eugene Volokh, who has written extensively on the legal liability of AI systems, offered his thoughts on the situation. Volokh stated that while “such libel claims [against AI companies] are in principle legally viable,” this particular lawsuit “should be hard to maintain.” He pointed out that Walters has not shown actual damages resulting from ChatGPT’s output and that he never notified OpenAI of the false statements, which would have given the company a chance to take them down.