People should be informed when they are interacting with human-like artificial intelligence systems such as chatbots, or when AI is being used to make decisions that will significantly affect them, Australian software giant Atlassian says.

Under the company's proposed approach, red would cover risks and harms not addressed by existing legislation, amber those where the threat of harm is unclear, and green issues where laws are already in place.

"Amber" scenarios would be ones needing more thought on whether existing legislation already deals with a harm AI could potentially do, or where the threat of harm is unclear or distant, said Anna Jaffe, Atlassian's director of regulatory affairs and ethics, who wrote the submission. Doomsday scenarios, in which artificial intelligence attains superhuman intelligence and kills all people, would be "firmly amber", Ms Jaffe suggested.
“There’s this real perception out there that our current legal frameworks aren’t suited to AI: they’re not ready, they can’t keep up. We think that perception isn’t true. There are many ways in which our frameworks are fit for purpose, or could be made fit for purpose,” she said.