On October 30, 2023, the Biden administration issued its highly anticipated Executive Order (EO) on artificial intelligence (AI).

Focused on creating a framework for "responsible AI," the EO expands on the Blueprint for an AI Bill of Rights. Its scope includes safety, privacy, equity and civil rights, healthcare and education, workplace fairness, and innovation and competition.

In this article, we explore what the EO means for workplace fairness and how employers can implement "responsible AI."

AI Executive Order: Department of Labor responsibilities

In supporting workers, the AI Executive Order directs the Department of Labor to develop and publish principles and best practices for employers. These best practices should mitigate AI's potential harms to employee well-being while maximizing its potential benefits, specifically including:

“…job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers.”

Part of that guidance, detailed in an accompanying Fact Sheet, will be aimed at preventing employers from undercompensating workers and unfairly evaluating job applicants. The Department of Labor is also directed to address job displacement, labor standards, workplace equity, health and safety, and data collection. 

Labor-market impacts of AI

In addition, the EO mandates that the Council of Economic Advisers report on AI’s labor-market impacts. Existing research gives a glimpse into the extent of those impacts:

  • Goldman Sachs research suggests that around two-thirds of current jobs could be partially automated by AI. Generative AI, such as ChatGPT, could replace up to 25% of current work. That equates to 300 million full-time jobs globally.
  • People in low-wage jobs are more susceptible to job displacement: low-wage earners are 14 times more likely to lose their jobs to AI.
  • Black and Hispanic workers are overrepresented in the 30 occupations that are most at risk of automation, and underrepresented in those least at risk. 
  • Based on Goldman Sachs’ analysis, 21% more women than men are at risk of job displacement from automation. Eight out of 10 women are “highly exposed” to AI automation, meaning 25-50% of their tasks could be automated by generative AI, such as ChatGPT.

While AI also leads to the creation of jobs, it may be detrimental to workplace fairness. For instance, one study suggests AI may lead to an increase in demand for STEM skills. But Pew Research shows uneven progress in STEM towards increasing gender, racial and ethnic diversity. 

As part of its investigations, the EO also requires the Council of Economic Advisers to identify options to strengthen federal support for workers who face labor disruptions. It could follow the lead of the Equal Employment Opportunity Commission (EEOC).

AI Executive Order and EEOC Title VII

Title VII of the Civil Rights Act prohibits discrimination in the workplace based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin. The EEOC has applied it to AI-driven employment decisions, addressing both workplace fairness and AI’s potential labor-market impacts.

Issued in May 2023, the EEOC’s Title VII guidance requires employers to review their use of AI and aligns with the aims of the AI Executive Order. Its technical assistance document outlines implications for employers regarding potential “disparate impact” (or “adverse impact”) cases and discrimination that could violate employment law.

“Disparate impact” applies to all employment decisions, including those on compensation. It also affects employers using algorithmic decision-making tools, such as pay equity software. 

Workplace fairness: best practices for AI

As we noted above, the Biden Administration requires the Department of Labor to provide best practices for employers to mitigate the potential harms of AI on employee well-being. Employers can act now to ensure workplace fairness. Here’s how:

Conduct regular pay equity analyses: Analyzing pay disparities enables organizations to identify areas where issues exist with equality. If identified, employers should then audit their automated employment decision tools (AEDTs) to assess the potential for a disparate impact. 
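As a minimal illustration of the idea (with hypothetical group names and pay figures), a raw, unadjusted median pay gap can be sketched as below. A real pay equity analysis, such as those performed by dedicated software, would also control for legitimate pay factors like role, tenure, and location:

```python
from statistics import median

def median_pay_gap(pay_by_group: dict[str, list[float]], reference: str) -> dict[str, float]:
    """Raw median pay gap of each group relative to a reference group.

    A positive value means the group's median pay is below the reference
    group's median (e.g. 0.10 = paid 10% less than the reference group).
    """
    ref_median = median(pay_by_group[reference])
    return {
        group: (ref_median - median(pay)) / ref_median
        for group, pay in pay_by_group.items()
        if group != reference
    }

# Hypothetical example data
salaries = {
    "group_a": [95_000, 100_000, 88_000],
    "group_b": [82_000, 90_000, 79_000],
}
gaps = median_pay_gap(salaries, reference="group_a")
# group_b's median (82,000) is about 13.7% below group_a's (95,000)
```

A gap surfaced this way is only a starting point; it signals where a closer, controlled analysis of the underlying decision tools is warranted.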

Evaluate your software vendor: Employers cannot rely on software vendors to make employment-related decisions on their behalf. Title VII makes it clear that “in many cases”, an employer is responsible for its use of algorithmic decision-making tools. That applies even if those tools are designed or administered by a software vendor, “if the employer has given them authority to act on the employer’s behalf.” 

In evaluating your vendor, ask what steps it has taken to assess whether its software may have a disparate impact, and whether it relied on the four-fifths rule. Under that rule, a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest selection rate is generally regarded as evidence of adverse impact.

Can the vendor demonstrate that its software has been audited to eliminate any potential bias?
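The four-fifths rule itself is simple to sketch. The check below (using hypothetical group names and counts) compares each group's selection rate to the highest group's rate and flags any ratio below 0.8:

```python
def four_fifths_check(
    applicants: dict[str, int],
    selected: dict[str, int],
    threshold: float = 0.8,
) -> dict[str, bool]:
    """Flag groups whose selection rate falls below `threshold`
    (four-fifths) of the highest group's selection rate."""
    # Selection rate per group: number selected / number who applied
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    return {g: (rate / highest) < threshold for g, rate in rates.items()}

# Hypothetical example: 48 of 80 group_a applicants selected (60%),
# versus 12 of 40 group_b applicants (30%); 0.30 / 0.60 = 0.5 < 0.8
flags = four_fifths_check(
    applicants={"group_a": 80, "group_b": 40},
    selected={"group_a": 48, "group_b": 12},
)
# flags -> {"group_a": False, "group_b": True}
```

Note that a flagged ratio is not conclusive proof of discrimination: the EEOC treats the four-fifths rule as a rule of thumb, and statistical significance testing is often applied alongside it.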

Assess all employment-related AI tools: When used responsibly, AI in HR can help in analyzing pay disparities, ensuring diversity in pay, and creating equal pay practices. Challenges arise when employers don’t act to eliminate bias. A 2022 IBM study found that nearly three-quarters of organizations are failing to reduce unintentional bias in their AI solutions, and 60% are not developing ethical AI policies.

Pay equity software like Trusaic PayParity can help if organizations want confidence in complying with legislation on the use of AI tools. 

Has the Biden Administration gone far enough?

Some experts, while welcoming the AI Executive Order as a “meaningful step,” feel it does not go far enough to address key concerns. Those concerns include calls for stronger enforcement requirements to evaluate and mitigate AI bias and discriminatory algorithms.

A further issue is the broad time frames in the AI Executive Order’s requirements, which range from 90 to 365 days. Many of the initiatives outlined will also require congressional action before they come into effect.

Prepare for the EU’s AI Act

Concerns over AI are not limited to the US. On June 14, 2023, the European Parliament approved the text of draft legislation for the EU AI Act, the world’s first comprehensive AI law. The act would classify all AEDTs as “high-risk” applications subject to strict regulation, and it is expected to come into force during 2025 or 2026. Given its far-reaching changes, employers are encouraged to act now to implement responsible AI policies, especially in areas relating to employee well-being and workplace fairness.

Adopt best principles and practices on the use of AI. Start with a pay equity audit. Speak to one of our experts.

Download: Pay Equity Definitive Guide