recently, where he put the onus for the product's safety on the companies building it.
In the post, OpenAI states that it rigorously tests any new system with external experts before introducing it to the public, and uses reinforcement learning from human feedback to make improvements. The company claims that it tested its recent model, GPT-4, for six months before releasing it publicly, and called for regulation to ensure that such practices are adopted industry-wide.
By releasing its latest and most capable models through its own services or an Application Programming Interface (API), OpenAI says it can monitor misuse and take immediate action based on real-world data. This has enabled OpenAI to develop nuanced policies against genuine risks from its technology while still allowing people to use it for many beneficial purposes. The company also said it is evaluating verification options to ensure that users accessing its services are over 18, or over 13 with parental approval. OpenAI said that its newer model, GPT-4, is 82 percent less likely to respond to requests for hateful, harassing, and violent content.
Additionally, the company is making progress in ensuring that the content its chatbot provides is factually correct. GPT-4 is 40 percent more likely to produce factual content than its predecessor, the blog post said, while adding that much work remains in this area to make AI safer for all.