A key phase of the European Union’s Artificial Intelligence Act came into force on August 2, imposing new rules on providers of general-purpose AI models such as large language models and generative AI tools.
Under the law, companies must now meet strict transparency and governance obligations, maintain technical documentation, and implement safeguards to prevent illegal or harmful use of their AI systems. Violations of the Act can bring fines of up to €35 million or 7% of a company’s global annual revenue, whichever is higher.
The EU’s AI Act, the first comprehensive AI law in the world, began its phased rollout in August 2024; bans on “unacceptable risk” AI systems such as real-time biometric surveillance took effect in February of this year.
The rules introduced this weekend target foundation models that power a wide range of applications. Major tech companies including Google and OpenAI have signed on to a voluntary code of practice to guide compliance, while Meta declined to join, citing legal concerns. Elon Musk’s xAI agreed to support only the code’s safety provisions.
More rules will follow: in August 2026, high-risk AI systems used in areas like health care, policing, and employment will face full compliance obligations, and by 2027, the law will apply across all categories of AI risk.
The European Commission says the Act is designed to balance innovation and safety while setting a global benchmark for AI governance, one that U.S. tech companies doing business in Europe will now have to follow.