The AI Act is a regulation by the European Union that establishes a legal framework for artificial intelligence (AI) within the EU. Its primary objectives are to ensure the safe and ethical development and deployment of AI technologies, to promote innovation, and to protect fundamental rights.
I’m proud that Europe is taking an exemplary role in passing comprehensive AI regulation with the AI Act, whose first provisions take effect in early 2025. The biggest impact is the outright ban on systems considered a threat to people, such as social scoring and real-time facial recognition in public spaces. Less threatening systems are categorised by risk level and must comply with increasingly strict codes of practice.
A downside of the Act is that other geopolitical players might not pass similar regulations, and could gain a competitive advantage by developing potentially threatening AI-enabled systems.
Key features of the AI Act 2025 include:
- Prohibited Practices: Certain AI applications deemed to pose an unacceptable risk, such as social scoring by governments, are banned outright.
- Risk-Based Approach: AI systems are categorized based on the risk they pose. High-risk AI systems face stricter requirements, including rigorous testing and documentation.
- Transparency and Accountability: AI developers and users must provide clear information about the use and functioning of AI systems. High-risk AI systems require human oversight to prevent harmful outcomes.
- Compliance and Enforcement: The Act establishes mechanisms for monitoring compliance, including fines and penalties for non-compliance. National authorities and a new European Artificial Intelligence Board will oversee enforcement.
- Innovation and Support: The Act encourages innovation by providing support for AI development and ensuring that regulatory requirements are proportionate to the level of risk.