Atlassian says you should be told when you’re talking to an AI bot

Source: Financial Review


Australia’s biggest enterprise software company also says the country should adopt a three-tiered approach to regulating AI, with harms ranked as red, amber or green.

Red would be for risks and harms not covered by legislation, amber for harms where the threat is unclear, and green for issues where there are already laws in place.

People should be informed when they are interacting with human-like artificial intelligence systems such as chatbots, or when AI is being used to make decisions that will significantly affect them, Australian software giant Atlassian says.

“Amber” scenarios would be ones where more thought is needed on whether existing legislation already deals with a harm AI could potentially do, or where the threat of harm is unclear or distant, said Anna Jaffe, Atlassian’s director of regulatory affairs and ethics, who wrote the submission. Scenarios in which artificial intelligence attains superhuman intelligence and kills all people would be “firmly amber”, Ms Jaffe suggested.

“There’s this real perception out there that our current legal frameworks aren’t suited to AI: they’re not ready, they can’t keep up. We think that perception isn’t true. There are many ways in which our frameworks are fit for purpose, or could be made fit for purpose,” she said.

 
