When the people at the very forefront of artificial intelligence are warning about the potentially catastrophic effects of highly intelligent AI systems, it's probably wise to sit up and take notice.
Altman is so concerned that on Wednesday his company announced it's setting up a new unit called Superalignment, aimed at ensuring that superintelligent AI doesn't end up causing chaos, or something far worse. To tackle the problem, OpenAI wants to build a "roughly human-level automated alignment researcher" that would perform safety checks on a superintelligent AI. The company added that managing these risks will also require new institutions for governance and a solution to the problem of superintelligence alignment itself.