AI models are often proprietary, and the companies pledged that if they keep low-level details of their models, such as their neural network weights, secret for safety and/or commercial reasons, they will strive to safeguard that data so it isn't stolen by intruders or sold off by rogue insiders. Essentially, if a model is not to be openly released for whatever reason, it should be kept under lock and key so that it doesn't fall into the wrong hands.
To try to tackle issues like disinformation and deepfakes, the group promised to develop techniques such as digital watermarking systems to label AI-generated content. Finally, they all promised to prioritize safety research addressing bias, discrimination, and privacy, and to apply their technology for good – think digging into cancer research and climate change. This pledge from the White House is pretty weak in terms of real regulation, however.
That said, the White House isn't totally naive. It did note that hard regulations to curb and steer ML systems may be on the horizon: "Realizing the promise and minimizing the risk of AI will require new laws, rules, oversight, and enforcement."