AI Ethics Skeptical About Establishing So-Called Red Flag AI Laws For Calling Out Biased Algorithms In Autonomous AI Systems

ForbesTech


AI ethics is questioning whether Red Flag AI Laws would be a viable means of coping with biases in AI, including amid the advent of autonomous AI systems.

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the "old" or historical data are applied to render a current decision.
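To make the idea concrete, here is a deliberately minimal sketch of that pattern-matching loop: learn a "pattern" from historical decisions, then apply it to new data. The loan-approval scenario, the field names, and the data are all hypothetical illustrations, not any real system's method.

```python
# Minimal sketch of ML/DL-style pattern matching on historical decisions.
# All names and data here are hypothetical illustrations.

def fit_threshold(history):
    """Learn the simplest possible 'pattern': the lowest income
    that was ever approved in the historical data."""
    approved = [income for income, ok in history if ok]
    return min(approved)

def decide(income, threshold):
    """Apply the historical pattern to a new applicant."""
    return income >= threshold

# Historical decision-making data: (income, was_approved)
history = [(30_000, False), (55_000, True), (80_000, True)]

threshold = fit_threshold(history)   # pattern found: 55_000
print(decide(60_000, threshold))     # True  - new data, old pattern
print(decide(40_000, threshold))     # False
```

Real ML/DL models find far richer patterns than a single threshold, but the shape of the process is the same: whatever regularities sit in the historical data become the rule applied to everyone who comes after.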

You could somewhat invoke the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in, biases-out: the biases insidiously get infused and remain submerged within the AI. The algorithmic decision-making of the AI axiomatically becomes laden with inequities.

The underlying concept is that people would be able to raise a red flag whenever they believed that an AI system was operating in an unduly biased or discriminatory fashion.
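The biases-in, biases-out point can be sketched in a few lines. Suppose historical approvals were skewed by neighborhood; a model that merely mirrors per-group approval rates will reproduce that skew on every new applicant. The zones and rates below are hypothetical, chosen only to make the skew visible.

```python
# Hypothetical illustration of "biases-in, biases-out": a model that
# learns per-group approval rates from skewed history inherits the skew.

from collections import defaultdict

def fit_approval_rates(history):
    """Learn per-group approval rates from (group, approved) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in history:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: a / t for g, (a, t) in counts.items()}

# Skewed historical data (hypothetical): zone A was approved far more often.
history = [("A", True)] * 9 + [("A", False)] \
        + [("B", True)] * 3 + [("B", False)] * 7

rates = fit_approval_rates(history)
print(rates["A"])  # 0.9 - the historical skew...
print(rates["B"])  # 0.3 - ...is now the model's "pattern"
```

Nothing in the code mentions any protected attribute, which is the insidious part: the inequity rides in silently on the data itself.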

A national Red Flag AI Law would seemingly be established by Congress. The law would spell out what an AI-pertinent red flag is. The law would describe how these AI grousing red flags are raised. And so on. It could also be the case that individual states might also opt to craft their own Red Flag AI Laws. Perhaps they do so in lieu of a national initiative, or they do so to amplify particulars that are especially appealing to their specific state.

Hey, the proponents of the private sector approach sound off, this would be akin to a national Yelp-like service. Consumers could look at the red flags and decide for themselves whether they want to do business with companies that have racked up a slew of AI-oriented red flags. A bank that was getting tons of red flags about its AI would have to pay attention and revamp its AI systems, so the logic goes, else consumers would avoid the firm like the plague.

If you were to say that anyone registering or reporting a red flag about AI has to pay a fee, you’ve entered into a murky and insidious realm. The concern would be that only the wealthy would be able to afford to raise red flags. This in turn implies that the impoverished would not be able to equally participate in the red flag activities and essentially have no venue for warning about adverse AI.

 
