The European Union has given the final green light to the AI Act, a historic step in the regulation of artificial intelligence. The legislation establishes comprehensive rules to ensure trust, transparency, and accountability in AI technologies while fostering innovation within Europe.
The European Commission will oversee enforcement of the AI Act, with the authority to impose fines of up to €35 million ($38 million) or 7% of a company’s annual global revenue, whichever is higher. This stringent penalty regime underscores the EU’s commitment to robust AI regulation.
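To make the penalty ceiling concrete, here is a minimal sketch in Python of the “whichever is higher” rule; the revenue figure in the example is hypothetical, not drawn from the Act.

```python
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Upper bound on an AI Act fine: €35 million or 7% of a company's
    annual global revenue, whichever is higher (figures as reported)."""
    FLAT_CAP_EUR = 35_000_000
    REVENUE_SHARE = 0.07
    return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)

# Hypothetical example: a firm with €10 billion in global revenue faces
# a ceiling of €700 million, since 7% of revenue exceeds the €35M floor.
print(f"€{max_fine_eur(10_000_000_000):,.0f}")  # €700,000,000
```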
The AI Act categorizes AI applications by risk level. It bans “unacceptable” applications such as social scoring, predictive policing, and emotion recognition in sensitive environments like workplaces and schools. High-risk AI systems, including autonomous vehicles and medical devices, are subject to rigorous evaluation to protect health, safety, and fundamental rights. The Act also addresses AI applications in finance and education to prevent bias.
U.S. technology firms are watching the AI Act closely, as it is the first detailed regulatory framework of its kind. Companies building generative AI in particular must ensure compliance with the new law, which includes adhering to EU copyright rules, providing transparency about how models are trained, and maintaining cybersecurity standards.
Although the AI Act introduces tough restrictions, they will not apply immediately: there is a 12-month delay before the requirements take effect, and existing generative AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, have a 36-month transition period to achieve full compliance.
Agreement on the AI Act signals a new regulatory reality. The next crucial step is effective implementation and enforcement, ensuring that the legislative framework translates into practical, beneficial outcomes for AI technology and its users.