EU Enforces Stricter AI Regulation: Google, Meta, and Others Under Pressure

The European Union has reaffirmed its intention to implement comprehensive regulation in the field of artificial intelligence, despite mounting pressure from leading global tech companies. The AI Act, developed by the European Commission and approved in spring 2024, is being introduced gradually and is expected to be fully enforced by mid-2026. Its provisions have triggered a strong response from more than a hundred IT firms, including Alphabet (Google’s parent company), Meta, Mistral AI, and ASML, which have publicly called for a pause in the rollout.
According to European Commission spokesperson Thomas Renier, despite numerous appeals and public statements, there will be no suspension of the process or grace period. “There will be no pause, no delay. We are proceeding as planned,” he stated.
What the EU AI Act Includes
The regulation is based on a risk-tiered classification of AI systems. All AI applications fall into one of four categories: minimal, limited, high, or unacceptable risk. Technologies considered ethically or socially dangerous, such as behavioral manipulation or social scoring systems, are banned outright.
“High-risk” systems include facial recognition and automated decision-making in education, employment, and healthcare. These technologies must undergo rigorous assessment and registration, and must comply with a full set of requirements covering risk management, transparency, and quality assurance.
“Limited risk” systems, such as chatbots and generative AI models, must adhere to basic transparency rules. Developers are required to disclose training data sources, notify users when they are interacting with AI, and report risk mitigation measures.
Why the Law Faces Backlash
The law applies to any company providing AI-based services in the EU, regardless of where it is headquartered. This means global players like Google and Meta must adapt their technologies to comply with European standards, including system audits, quality tracking, and risk documentation. Non-compliance could lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Tech giants have voiced concerns over a lack of detailed implementation guidance, especially concerning general-purpose AI (GPAI) models like GPT and Gemini. The long-awaited Code of Practice—a voluntary compliance guide—is not expected until the end of 2025, just a year before full implementation. Companies say this legal uncertainty forces them to operate “in the dark.”
What’s Next
The EU insists that transparency, consumer protection, and preventing AI abuse outweigh short-term industry concerns. Implementation has already begun: core provisions took effect in August 2024, a full ban on “unacceptable risk” applications is due in 2025, and full oversight of high-risk systems will be enforced by 2026.
For major tech firms, this means urgent adaptation to a new regulatory environment—from redesigning product infrastructure to adjusting business models. Failure to comply could result in exclusion from one of the world’s largest and most tightly regulated digital markets.
