Summary of the EU AI Act: the First Regulation on Artificial Intelligence
- Legalease
The first piece of legislation governing AI is in force. This milestone is important for at least two reasons. First, EU law is influential and often sets a global benchmark, so other regions may follow this approach. Second, any company using AI and operating within the EU will need to make changes to comply with the EU AI Act.
So, to keep you in the know, this article summarises the seven foundational principles of trustworthy AI.
Human Agency and Oversight: AI should support, not replace, human decision-making.
Technical Robustness and Safety: AI must be resilient to errors and attacks, with a reliable fallback if something goes wrong.
Privacy and Data Governance: AI must respect privacy and ensure that data is used fairly.
Transparency: Users should know when they are interacting with AI, and it should be explainable.
Diversity, Non-discrimination, and Fairness: AI must be free from bias, ensuring fairness and equality for all.
Societal and Environmental Well-being: AI should contribute to societal good, considering long-term impacts on people and the planet.
Accountability: AI systems must be traceable, with clear mechanisms for accountability and redress.
How does this relate to other jurisdictions?
In short, it does not… yet. Whether other jurisdictions will follow suit is uncertain at the moment. One reason cited by jurisdictions that have not adopted this approach (such as the UK) is that the EU AI Act regulates a specific technology as a whole, rather than targeting a specific harm or negative outcome. The argument is that a dedicated AI act is unnecessary because existing legislation already addresses the associated negative outcomes.