The EU AI Act: what to expect
Are you ready for the EU AI Act? AI is everywhere these days - and the regulatory landscape is catching up, with the world's first comprehensive AI law set to land in the EU.
The EU AI Act goes live in a matter of weeks, bringing key considerations for life science organizations developing and deploying AI in the EU. Here's everything we know so far.
What is the EU AI Act?
The EU AI Act is a regulatory framework first proposed by the European Union in April 2021 to govern the use of artificial intelligence (AI) technologies. It aims to ensure the responsible and ethical use of AI while promoting innovation and protecting fundamental rights.
Critically, it's designed as a 'horizontal' regulation to link with existing sectoral regulation. And, in the same way the GDPR touched non-EU companies managing EU citizens' data, the EU AI Act will apply to AI providers and developers beyond the EU if their technology is based in, deployed in, or outputs data that's used in the EU.
Like the GDPR, too, the EU AI Act packs a punch. Fines of up to €35m, or 7% of global annual revenue (whichever is higher), await companies that breach its upcoming obligations.
The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. It covers a wide range of AI applications, including those used in the life science sector.
And it goes live in June 2024!
To understand the implications of the EU AI Act for life science companies, we took a deep dive into its key provisions and requirements.
Impact on life science companies
Because the EU AI Act takes a risk-based approach, evaluating AI systems by their potential to impact the health, safety or fundamental rights of EU citizens, it's little surprise that the Act has significant implications for life science companies. Life science applications of AI, after all, naturally fall into high-risk territory.
In fact, the Act identifies two classes of so-called 'high-risk' AI systems:
Annex II
AI systems used as a safety component of regulated products, or as a product itself. Medical devices, machinery and cars fall into this category.
Annex III
Standalone AI systems.
What operational ingredients will developers and deployers of high-risk AI systems require?
Processes
Robust quality management and risk management systems need to be in place, underpinned by data governance and effective privacy and security.
Product
AI products need to be supported by adequate technical documentation, logging and records. Accuracy and cybersecurity need to be at the heart of product development.
Moreover, the EU AI Act imposes transparency obligations, meaning that companies should provide clear and understandable information for users about the AI systems they're interacting with. This could be particularly challenging for SaMD manufacturers, where the complexity of their AI algorithms and models makes it difficult to explain their functioning to non-experts.
Manufacturers should conduct a comprehensive risk assessment of their AI systems, ensuring the privacy and security of input and output data, and implementing the measures necessary to prevent discrimination or bias in AI outcomes.
The EU AI Act also emphasizes the importance of human oversight and accountability in the use of AI systems. Life science companies must ensure that their AI systems are designed to complement human decision-making, and not replace it entirely.
Release
The EU AI Act stipulates that, like any other regulated product, Annex II and III AI systems must undergo a third-party conformity assessment to gauge their safety, accuracy and robustness.
The familiar European framework of registration, CE marking and post-market monitoring and vigilance will also apply.
FURTHER READING: What the FDA's MDDS guidance means for you
When does the EU AI Act go live?
The EU AI Act passed in the European Parliament on 13 March 2024.
It will then be published in the Official Journal and, 20 days later, go live.
Since it currently awaits formal approval by the EU Council, we can expect a launch date some time in June 2024.
Don't panic - that doesn't mean all of the EU AI Act's requirements will immediately come into effect.
Instead, a kind of soft launch will follow.
6 months after launch, certain AI systems will be prohibited.
After a year, obligations for general-purpose AI (GPAI) models, as well as the first obligations for AI governance, will come into effect alongside the AI Act's penalties.
After 2 years - taking us into mid-2026 - all rules of the EU AI Act will come into effect, as well as obligations for those standalone, highest-risk Annex III AI systems we touched on above.
And another year after that, by around June 2027, the obligations for Annex II systems will come into force.
There is, therefore, plenty of time for you to explore the EU AI Act's requirements and prepare for them before they apply to your organization.
A steady, proactive approach is recommended to ensure your AI project can mature and develop without facing the prospect of significant rework and backtracking in future.
What you should do now
The EU MDR's general safety and performance requirements identify the 'state of the art' as a key element to follow in the development and manufacture of medical software.
The whirlwind of AI means that state of the art is rapidly changing - so keeping on top of it is important in that 3-year transition timeframe.
Understanding potentially new and unprecedented risks introduced by AI, and how you'll treat them, is critical.
As is how you'll test and validate your systems, including identifying and minimizing bias, maximizing robustness and accuracy, and generating transparent supporting documentation throughout development.
Some key ISO standards to read up on include:
- ISO/IEC 23894: AI risk management guidance
- ISO/IEC TR 24027: bias in AI systems and AI-aided decision-making
- ISO/IEC TR 24028: overview of AI trustworthiness
- ISO/IEC 24029: assessment of the robustness of neural networks
- ISO/IEC TS 4213: assessment of machine learning classification performance
And, perhaps most importantly of all:
- ISO/IEC 42001: AI management system
AI promises to revolutionize how life science does its critical work. Doing your research and prep work now will ensure your organization is ready to reap the rewards with total compliance and appropriate quality- and risk-based processes.