In November 2025 the European Commission proposed a series of amendments to the EU AI Act in response to implementation concerns raised by industry and Member States. Citing regulatory complexity and readiness challenges, the Commission is signaling a willingness to recalibrate certain provisions before full enforcement.
For employers using AI in workforce decision-making — including compensation, pay equity analysis, and transparency reporting — these proposed changes are significant.
The amendments are not yet final and must still proceed through the European Parliament and Council, where they already face some institutional resistance. Even so, they offer insight into how the EU may balance regulatory oversight with economic competitiveness in the years ahead.
Key Proposed Amendments
The Commission’s proposal includes several substantial adjustments:
Deferral of High-Risk Provisions
Enforcement of “high-risk” AI system requirements would be postponed to:
- Dec. 2, 2027, or
- Aug. 2, 2028 (for certain systems covered by EU harmonization legislation).
This is particularly relevant for employers, as AI used in “employment, workers management and access to self-employment” is classified as high risk under the Act.
AI Literacy Responsibility Realignment
Rather than placing the burden solely on providers and deployers (including employers), the proposal would shift responsibility toward the European Commission and Member States to promote AI literacy.
The literacy requirement has been in effect since Feb. 2, 2025, so this represents a meaningful redistribution of compliance responsibility.
Use of Special Category Data for Bias Detection
The amendment would permit processing of sensitive personal data for bias detection and mitigation — but only when strictly necessary and subject to strong safeguards. Employers must first explore alternatives, including synthetic data.
Removal of Certain Registration Obligations
Providers that determine an AI system listed in Annex III is not high-risk would no longer face mandatory registration obligations.
Simplified Requirements for Small and Medium-Sized Enterprises (SMEs) and Small Mid-Caps (SMCs)
Quality-management obligations would be streamlined for smaller enterprises.
Expanded Regulatory Sandboxes
The Commission would facilitate regulatory sandboxes, prioritizing SMEs and adopting EU-wide implementing acts.
Why This Matters for Employers
The proposed deferral of high-risk enforcement is the most consequential development for HR and compensation leaders.
Under the current AI Act framework, AI systems used to make decisions in:
- recruitment,
- performance evaluation,
- compensation modeling,
- promotion decisions, and
- workforce management
are considered high risk.
This classification carries obligations around:
- risk management,
- documentation,
- transparency,
- human oversight,
- bias mitigation, and
- post-market monitoring.
A delay would not eliminate these requirements, but it may give employers additional time to operationalize compliant AI governance frameworks. That time matters: the Commission has also delayed publication of further guidance on the practical implementation of the high-risk provisions, including a comprehensive list of example use cases that are, and are not, high-risk. That guidance was originally due on Feb. 2, 2026.
The proposed clarification around sensitive data processing for bias detection is also notable. For pay equity and compensation analytics, the ability to use protected characteristic data in a structured, safeguarded way is essential to identifying and correcting disparities. The proposed amendment recognizes that eliminating bias sometimes requires carefully controlled access to the very data that reveals it.
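To see why protected characteristic data is needed for this kind of analysis, consider a minimal sketch of an adjusted pay gap regression in Python. This is purely illustrative, not a description of any vendor's methodology or of the Act's prescribed approach; all names and figures below are synthetic assumptions, echoing the amendment's preference for exploring synthetic data first.

```python
import numpy as np

# Synthetic workforce data (illustrative assumptions only)
rng = np.random.default_rng(42)
n = 500
tenure = rng.uniform(0, 20, n)            # years of service
grade = rng.integers(1, 6, n)             # job grade 1-5
group = rng.integers(0, 2, n)             # protected-characteristic indicator

# Simulate log pay with legitimate drivers plus a 5% unexplained gap
log_pay = (10.0 + 0.02 * tenure + 0.10 * grade
           - 0.05 * group + rng.normal(0, 0.05, n))

# Adjusted gap: regress log pay on legitimate factors AND the group
# indicator. Without the indicator, the gap cannot be measured at all.
X = np.column_stack([np.ones(n), tenure, grade, group])
coef, *_ = np.linalg.lstsq(X, log_pay, rcond=None)
adjusted_gap = coef[-1]  # coefficient on the group indicator

print(f"Adjusted log-pay gap: {adjusted_gap:.3f}")
```

The point of the sketch is structural: the group indicator must appear in the model for the unexplained disparity to be estimated and remediated, which is exactly the controlled use of sensitive data the amendment would permit.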
Intersection with the EU Pay Transparency Directive
The AI Act and the EU Pay Transparency Directive (EUPTD) are distinct legal frameworks, but they increasingly intersect in practice.
The EUPTD requires employers to:
- analyze and report gender pay gaps,
- remediate unjustified disparities,
- respond to Right-to-Information (RTI) requests,
- and maintain defensible pay systems.
AI systems are often used to support these activities. When those systems fall into the AI Act’s high-risk category, employers must ensure compliance with both regimes.
Even if high-risk enforcement is deferred, employers operating in the EU will still need to:
- conduct rigorous pay equity analysis,
- document methodologies,
- maintain human oversight of compensation decisions,
- and ensure bias mitigation practices align with EU standards.
In other words, regulatory delay does not eliminate accountability.
Implications for Global Employers
For multinational employers, the proposed amendments reinforce a familiar reality: EU regulatory frameworks continue to set a high standard for AI governance.
Organizations that build systems capable of meeting EU AI Act requirements — even if enforcement is postponed — will likely be well-positioned for future regulatory convergence in other jurisdictions.
The deferral may provide breathing room. It does not reduce the importance of structured, defensible AI governance in employment contexts.
Responsible AI as a Strategic Imperative
As the EU recalibrates aspects of the AI Act, employers should resist the temptation to interpret regulatory deferrals as signals to delay governance efforts.
AI used in pay equity, compensation, and workforce decision-making remains sensitive, regulated, and high-stakes.
At Trusaic, we believe that AI in employment contexts must be:
- grounded in accurate, unbiased data,
- built on transparent, auditable logic,
- designed for bias detection and mitigation,
- and implemented with human-in-the-loop oversight.
Our AI capabilities — including regression-based pay equity analysis, our remediation optimization engine, and AI-powered Pay Decisions — are structured around these principles. AI accelerates insight and optimization, but accountability remains with employers and decision-makers.
As proposed amendments move through Parliament and Council — and further changes are expected — the regulatory landscape may shift. But the core expectation for employers will not: AI influencing employment decisions must be responsible, defensible, and aligned with fundamental rights.
Organizations that embed Responsible AI now will meet regulatory requirements and build more resilient, transparent compensation systems for the future.