Views from the Lab

EU AI Act: What it is and why you should care

One trend we have been monitoring at MHR labs is the growing call for the regulation of AI.

Last week saw some huge developments in this area, with EU lawmakers reaching a deal on their proposed artificial intelligence regulation. Known as the AI Act, the legislation is designed to protect users of AI products and systems from misuse, and many consider it the “GDPR of AI”.

This is a huge step towards creating one of the first comprehensive sets of AI regulations in the world. While the law still needs to be finalised and voted on, and won’t take effect until at least 2025, it is important to start preparing for its impact now.

What is the AI Act?

Under the AI Act, systems would be regulated based on their application and the level of risk posed to the safety and rights of users. Risks are split into four categories, ranging from unacceptable to minimal, with different rules applying to each.

Applications posing an unacceptable risk, such as social scoring and live tracking through facial recognition, would be banned in all but a select few circumstances. High-risk applications cover a broad range of sensitive areas such as education, employment, and law enforcement. Systems in this category would need to undergo assessment to justify their application and show how they minimise risk before being allowed onto the market.

Finally, the low and minimal risk categories would be subject to far lighter regulation, with transparency requirements and possible voluntary codes of conduct.
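To make the tiering a little more concrete, here is a minimal Python sketch of how a provider might represent the four categories internally. The tier names and the mapping of example applications to tiers are purely illustrative, drawn from the examples above rather than from the legal text; the EXAMPLE_APPLICATIONS table is a hypothetical placeholder, not a compliance tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely based on the AI Act's four categories."""
    UNACCEPTABLE = "banned in all but a select few circumstances"
    HIGH = "assessment required before the system can enter the market"
    LOW = "transparency requirements"
    MINIMAL = "possible voluntary codes of conduct"

# Hypothetical mapping of example application areas to tiers.
# The unacceptable/high examples come from this post; the low/minimal
# examples are assumptions added only to round out the illustration.
EXAMPLE_APPLICATIONS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "live facial-recognition tracking": RiskTier.UNACCEPTABLE,
    "education": RiskTier.HIGH,
    "employment and worker management": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LOW,       # assumption
    "spam filtering": RiskTier.MINIMAL,            # assumption
}

def obligations_for(application: str) -> str:
    """Return the illustrative obligation attached to an application area."""
    tier = EXAMPLE_APPLICATIONS.get(application, RiskTier.MINIMAL)
    return f"{application}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for app in EXAMPLE_APPLICATIONS:
        print(obligations_for(app))
```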

Importantly for us at MHR, applications involving employment and worker management are classified as high risk. Under the Act, providers of high-risk AI systems would be required to build appropriate risk management, model transparency and interpretability, and human oversight into their AI systems.

Thankfully, these are all standards we have set for ourselves in our ethical use of AI and have been working with for many years.  

Our take 

I have high hopes that these protections will prove valuable. Some worry that introducing regulations and red tape will burden small businesses and stifle innovation, especially in newer areas like generative AI. However, I believe that regulating these emerging technologies should help prevent abuse and protect consumers.

GDPR has set a global standard for how companies handle data, and many are hoping the AI Act will do the same for AI. The UK government has also been exploring AI regulation, and while it currently appears to be taking a less strict approach (disappointingly, mostly voluntary guidelines rather than enforceable laws), seeing Europe deploy these regulations may push it into stronger action to fully protect consumers of these products.

Further, anyone whose services are used in Europe or by European citizens will have to comply with these rules eventually, so now may be the time to consider their effects.  

You can find more information on this here: https://artificialintelligenceact.eu/ 

By Chris Judd, Data Scientist 
