Humanity is likely still a long way away from building artificial general intelligence, or an AI that matches the cognitive function of humans — if, of course, we're ever actually able to do so.
But whether such a future comes to pass or not, OpenAI CEO Sam Altman has a warning: AI doesn't have to be AGI-level smart to take control of our feeble human minds.

"i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence," Altman wrote, "which may lead to some very strange outcomes."

While Altman didn't elaborate on what those outcomes might be, it's not a far-fetched prediction.
But it's not just overt abuse cases that we need to worry about. Technology is deeply woven into most people's daily lives, and even if there's no emotional or romantic connection between a human and a bot, we already put a great deal of trust into it. This arguably primes us to put that same faith into AI systems as well. Could AI be used to cajole humans into bad behavior or destructive ways of thinking? It's not inconceivable.
Interestingly enough, one of the humans who might be most capable of mitigating these ambiguous imagined "strange outcomes" is Altman himself, given the prominent standing of OpenAI and the influence it wields.