GPT-4 — the large language model that powers ChatGPT Plus — may soon take on a new role as an online moderator, policing forums and social networks for nefarious content that shouldn’t see the light of day. That’s according to a new blog post from ChatGPT developer OpenAI, which says this could offer “a more positive vision of the future of digital platforms.”
For example, the blog post explains that moderation teams could assign labels to content to indicate whether it falls within or outside a given platform's rules. GPT-4 could then take the same data set and assign its own labels, without knowing the answers beforehand.

The human toll

Right now, content moderation on various websites is performed by humans, which exposes them to potentially illegal, violent, or otherwise harmful content on a regular basis. We've repeatedly seen the awful toll that content moderation can take on people, with Facebook paying $52 million to moderators who suffered from PTSD due to the traumas of their job.
However, it does raise the question of whether using AI in this manner would result in job losses. Content moderation is not always a fun job, but it is a job nonetheless, and if GPT-4 takes over from humans in this area, there will likely be concern that former content moderators will simply be made redundant rather than reassigned to other roles.
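The labeling workflow the blog post describes — humans label a sample of content against a policy, then the model labels the same sample blind and the two are compared — can be sketched roughly as follows. This is a minimal illustration, not OpenAI's implementation: `model_label` is a hypothetical stand-in for a call to a GPT-4 classifier, stubbed out here so the example is self-contained.

```python
# Hypothetical sketch of the blind-labeling comparison described above.
# In a real pipeline, model_label would send the content and the policy
# text to a GPT-4 endpoint; here it is a trivial keyword stub.

def model_label(text: str, policy: str) -> str:
    """Stand-in for asking a model whether `text` violates `policy`."""
    banned = {"spam", "threat"}
    return "violates" if any(w in text.lower() for w in banned) else "allowed"

def agreement(samples, policy):
    """Fraction of items where the model's label matches the human label."""
    matches = sum(model_label(text, policy) == human for text, human in samples)
    return matches / len(samples)

# Content already labeled by human moderators: (text, human_label) pairs.
human_labeled = [
    ("Buy cheap spam pills now!!!", "violates"),
    ("Great article, thanks for sharing.", "allowed"),
    ("This is a threat to your account.", "violates"),
]

print(agreement(human_labeled, "no spam or threats"))  # 1.0
```

A low agreement score would flag either model errors or ambiguities in the policy wording itself, which is part of the feedback loop the blog post envisions.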