Those using AI systems that interact with humans, are used for surveillance purposes, or can be used to generate “deepfake” content face strong transparency obligations.

A number of AI tools may be considered high risk, such as those used in critical infrastructure, law enforcement, or education. These sit one level below “unacceptable” and are therefore not banned outright.
Instead, those using high-risk AI systems will likely be obliged to complete rigorous risk assessments, log their activities, and make data available to authorities for scrutiny, which is likely to increase compliance costs for companies. The “high risk” categories in which AI use will be strictly controlled include law enforcement, migration, infrastructure, product safety and the administration of justice.

A general-purpose AI system (GPAIS) is a category proposed by lawmakers to account for AI tools with more than one application, such as generative AI models like ChatGPT.
Lawmakers are currently debating whether all forms of GPAIS will be designated high risk, and what that would mean for technology companies looking to build AI into their products. The draft does not clarify what obligations the makers of such systems would be subject to.

Under the proposals, those found in breach of the AI Act face fines of up to 30 million euros or 6 percent of global annual turnover, whichever is higher.
After the terms are finalized, there would be a grace period of around two years to allow affected parties to comply with the regulations.