Without effective safeguards, employers risk violating Title VII in hiring, performance management, and pay determination. The rationale behind the document was explained by EEOC Chair Charlotte A. Burrows:
“As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality.”
What does EEOC Title VII mean for employers?
EEOC Title VII prohibits discrimination in the workplace based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity) or national origin. The technical assistance guidance addresses the potential for "disparate impact" (also called "adverse impact") discrimination that could violate employment law.
“Disparate impact” refers to discrimination that occurs “when a policy or practice has a significant negative impact on members of a Title VII-protected group but is not job-related and consistent with business necessity,” according to EEOC guidance. It applies to all employment decisions, including those on compensation, and affects employers using algorithmic decision-making tools, such as pay equity software, to identify and remedy pay disparities across the organization.
“Disparate impact” analysis is the focus of the technical assistance document.
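One rule of thumb the EEOC's technical assistance document discusses for disparate impact analysis is the "four-fifths rule": if the selection rate for one group is less than 80% of the rate for the most-favored group, the difference may be treated as evidence of adverse impact. The sketch below illustrates that calculation; the applicant and selection numbers are hypothetical examples, not data from any real employer or tool.

```python
# Illustrative sketch of the EEOC "four-fifths rule" check for adverse
# impact. All numbers below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rate_a: float, rate_b: float) -> bool:
    """Return True if the lower selection rate is at least 80% of the
    higher one. Ratios below 0.8 may be treated by the EEOC as
    preliminary evidence of adverse impact."""
    lower, higher = sorted((rate_a, rate_b))
    return lower / higher >= 0.8

# Hypothetical example: 48 of 80 applicants selected in one group,
# 12 of 40 in another.
rate_group_a = selection_rate(48, 80)   # 0.60
rate_group_b = selection_rate(12, 40)   # 0.30
ratio = rate_group_b / rate_group_a     # 0.50, below the 0.8 threshold
print(four_fifths_check(rate_group_a, rate_group_b))  # False
```

Note that the four-fifths rule is only a screening heuristic; a ratio below 0.8 does not by itself establish a Title VII violation, and a ratio above it does not rule one out.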
How does AI develop bias?
AI bias arises because humans choose the data that algorithms are trained on and decide how the algorithms' results are applied; biased inputs perpetuate biased models. For instance, in 2018, Amazon scrapped a recruiting tool that demonstrated bias against women. Its computer models were trained to vet applicants by observing patterns in resumes submitted over a 10-year period. Effectively, the system taught itself to prioritize male candidates.
EEOC Title VII guidance is part of a wider trend surrounding AI
On April 25th, the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ) Civil Rights Division, the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC) issued a joint statement confirming their intention to work collaboratively to “monitor the development of automated systems.” That includes AI as well as other software and algorithmic processes. The intention is to prevent any form of bias which results in discrimination against protected classes.
The EEOC made it clear that employers cannot delegate responsibility for discrimination to a third-party software provider, nor rely on a vendor’s assurance that its software complies with EEOC Title VII. If your pay equity software violates workplace laws, you, as an employer, may be held liable.
EEOC guidance states that “in many cases”, an employer is responsible under Title VII for its use of algorithmic decision-making tools even if the tools are designed or administered by a software vendor, “if the employer has given them authority to act on the employer’s behalf.”
Partnering with a pay equity software vendor
Selecting the right pay equity software vendor is more critical than ever for employers. When choosing a partner, the EEOC recommends that organizations:
Ask vendors what steps have been taken to evaluate whether their software might cause an adverse disparate impact
Evaluate employment-related AI tools to ensure compliance with workplace laws, including auditing all AI functions.
Conduct an ongoing self-analysis to determine whether your use of technology could result in discrimination
That applies to pay equity software. If the vendor’s assessment of its software is inaccurate, and results in disparate impact discrimination relating to pay transparency, for example, employers could still be liable.
Partner with a trusted pay equity software provider
Companies that rely on AI and automated systems should prepare for increased scrutiny, and not just in the US.
On May 11th, 2023, the EU adopted a draft negotiating mandate on the “first ever rules for Artificial Intelligence.” If passed, it will become the world’s first Artificial Intelligence Act to impose transparency requirements on AI decision-making. The aim is to promote transparency and reduce risk, and it will also include the right to complain about AI systems. As with the Pay Transparency Directive, the EU seems likely to lead the way in creating a blueprint for legislation governing transparency in the use of AI.
Partnering with a trusted pay equity software provider to eliminate bias is essential for all employers. Trusaic works with organizations to minimize risk, increase compliance, and achieve authentic change:
Identify and remedy potential bias: Trusaic’s PayParity identifies and corrects the root causes of pay disparities using advanced analytics and algorithms that pinpoint biases, faulty systemic processes, and other factors. It provides pay ranges so you can build a robust compensation strategy, remain competitive, and avoid creating future disparities. It also carries out an equity audit across your workforce in a single statistical regression analysis, delivering a clear picture of pay gaps and risk areas across groups of employees at every level. Pay equity software can also help your organization comply with pay transparency legislation affecting job listings.
Stay up to date with pay transparency legislation: Working with a trusted pay equity software provider ensures employers not only comply with EEOC Title VII and the EU’s Pay Transparency Directive, but with regular updates to pay transparency and pay equity laws. Most recently, these include expanded SB 1162 pay data reporting requirements in California.
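The single-regression pay audit mentioned above works, in essence, by regressing pay on a protected-group indicator while controlling for legitimate pay factors; the coefficient on the group indicator estimates the adjusted pay gap. The following is a minimal sketch of that idea using ordinary least squares, not Trusaic's actual methodology; the workforce data, column layout, and dollar figures are all hypothetical.

```python
# Minimal sketch of a regression-based pay equity check.
# All data below is hypothetical; a real audit would control for many
# more factors (role, level, location, performance, etc.).

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. Returns coefficient vector b."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    c = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k                            # back substitution
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Hypothetical rows: [intercept, in_group_b, tenure_years]
X = [
    [1, 0, 2], [1, 0, 5], [1, 0, 8],
    [1, 1, 2], [1, 1, 5], [1, 1, 8],
]
# Hypothetical salaries: group B is paid $4,000 less at every tenure level
y = [52000, 58000, 64000, 48000, 54000, 60000]

intercept, group_gap, tenure_slope = ols(X, y)
print(round(group_gap))  # -4000: adjusted gap attributable to group membership
```

A statistically significant negative coefficient on the group indicator, after controlling for legitimate pay factors, is the kind of signal such an audit would flag as a risk area for remediation.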
With pressure mounting for organizations to prove they foster diverse, equitable, and inclusive workforces, employers must commit to prioritizing workplace equity. Our research report, Creating a Culture of Diversity, Equity, and Inclusion, details how organizations can make their efforts successful. Download it now to learn more.