that all of its AI-generated articles are “reviewed, fact-checked and edited” by real, human staff, and each post has an editor’s name attached to it in the byline. But clearly, that alleged oversight isn’t enough to stop ChatGPT’s errors from slipping through.

Usually, when an editor approaches an article, it’s safe to assume that the writer has done their best to provide accurate information. But with AI, there is no intent, only the product.
It’s easy to understand how, when sifting through piles of AI-generated posts, an editor could miss an error about the nature of interest rates amid an authoritative-sounding string of statements. When writing gets outsourced to AI, editors end up bearing the burden, and their failure seems inevitable. And the failures are almost certainly not limited to that one article.
From a financial perspective, you can’t beat AI: there’s no overhead cost and no human limit to how much can be produced in a day. But from a journalistic viewpoint, AI generation is a looming crisis, in which accuracy becomes entirely secondary to SEO and volume. Click-based revenue doesn’t incentivize thorough reporting or well-crafted explanation. And in a world where AI posts become an accepted norm, the computer will only learn to reward itself.
An editor evaluating an AI-generated text cannot assume anything, and instead has to take an exacting, critical eye to every phrase, word, and punctuation mark.
With the proliferation of AI-written articles, where are we heading?