Department of Labor, Including OFCCP, Continues Work on Guidance and “Promising Practices” Regarding Artificial Intelligence


Seyfarth Synopsis: The Acting Director of OFCCP and the Solicitor of Labor indicated that they are moving full speed ahead on developing guidance regarding employers’ use of artificial intelligence, and that the Department of Labor is working on a “broader value-based document” that contains “principles and best practices” for both employers using AI and developers of the AI tools. OFCCP is working on “promising practices” regarding AI selection tools. Additionally, leaders from the EEOC, NLRB, and the Department of Justice continue to emphasize their commitment to using their existing enforcement authority regarding AI issues.

On March 20, 2024, at an American Bar Association meeting held in Boston, Solicitor of Labor Seema Nanda and the Acting Director of OFCCP, Michele Hodge, indicated that the Department of Labor is moving full speed ahead on issuing multiple guidance documents regarding artificial intelligence, as directed by President Biden’s expansive AI executive order, signed on October 30, 2023. On the same panel, independent agency heads, notably Chair Charlotte Burrows of the EEOC and NLRB General Counsel Jennifer Abruzzo, emphasized their ongoing commitment to use their agencies’ existing enforcement authority to address concerns with employers’ use of AI.

1. The Department of Labor and OFCCP Are Working on AI Guidance and Promising Practices

President Biden’s EO on artificial intelligence initiated a comprehensive government-wide approach to AI regulation, and set in motion actions across multiple executive agencies. One of the Secretary of Labor’s many deliverables under the EO is to develop and publish, by the end of April 2024, “principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” In her remarks, Solicitor Nanda shed light on the Department of Labor’s approach to this guidance. Notably, she said that the Department of Labor will be issuing a “broader value-based document” containing “principles and best practices” for both employers using AI and developers of the AI tools.

Among the “principles and best practices” the Solicitor discussed was the need for employers using AI to engage with their workers, especially if their workers are unionized. She asserted, “If you are developing AI without engagement of workers, you’re going to be missing a key component.” Stakeholder engagement is not a new concept in AI risk management, and the inclusion of this principle should come as no surprise to observers of other federal government efforts on AI: the concept was explored as part of NIST’s AI Risk Management Framework, issued in January 2023.

Other principles that Solicitor Nanda mentioned were the need for human oversight of AI systems and the need for validation and monitoring. “You have to make sure it [the AI] is working,” she said. Solicitor Nanda also discussed the need for greater transparency regarding the AI systems employers use, inviting employers and AI developers to consider whether they are creating an avenue for the public or for job applicants to know that AI is being used to make an employment selection decision that might screen out candidates.

President Biden’s executive order also specifically directs that the Secretary of Labor’s report due at the end of April 2024 cover “job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers.” In her remarks, Solicitor Nanda confirmed that the Department of Labor is also working on this job-displacement report.

Solicitor Nanda also confirmed that the Department of Labor’s Wage and Hour Division is working on guidance that would make it clear that employers who deploy AI systems to monitor workers “must make sure workers are compensated.” While Solicitor Nanda did not provide further details about this guidance, President Biden’s executive order specifically directs that the Secretary of Labor “shall issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked” in order to “support employees whose work is monitored or augmented by AI in being compensated appropriately for all of their work time.”

Both Solicitor Nanda and OFCCP Acting Director Michele Hodge discussed the progress of OFCCP’s work on additional AI guidance. (President Biden’s executive order directs the Department of Labor to “publish guidance for Federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems” within one year – i.e., by the end of October 2024.) Acting Director Hodge specifically confirmed, “We are working on new FAQs and promising practices.” In this context, we expect that Acting Director Hodge was referring to updates to OFCCP’s FAQ on Validation of Employee Selection Procedures from 2019, which touches on AI issues.

Consistent with OFCCP’s existing FAQ, Acting Director Hodge emphasized that an employment selection procedure that includes AI “still is a selection procedure” that federal contractors must audit in order “to ensure [they] are not creating barriers to equal employment.” She reiterated OFCCP’s expectation that federal contractors “can’t just pull something off the shelf and decide to use it” and that OFCCP expected employers to “drill down, under the Uniform Guidelines.” She also discussed OFCCP’s recent changes to its scheduling letter, describing these amendments as coming from OFCCP’s desire “to know at the beginning of a compliance process, are you using AI or algorithms in your screening or hiring process.”[1]

Regarding Acting Director Hodge’s reference to “promising practices” documents, we note that such documents issued by federal agencies do not establish new mandatory legal requirements under federal law. Thus, failing to follow a federal agency’s “promising practices” recommendations will not independently result in an enforcement action, and conversely, demonstrating compliance with them does not independently insulate an employer from liability.[2] Nevertheless, any AI “promising practices” document issued by OFCCP has the potential to reflect the developing consensus on best practices and industry standards for managing AI risk. Employers using AI selection tools, especially federal contractors, would be well-advised to consider carefully what OFCCP says in this area in order to achieve compliant and responsible AI usage.

2. EEOC and NLRB Agency Heads, and the Department of Justice, Emphasized Their Existing Authority

On the same panel, leaders from the EEOC and NLRB, along with a representative of the Department of Justice’s Civil Rights Division, all agreed that using their existing enforcement authorities as employers implement AI practices is an ongoing priority for their agencies.

Johnathan Smith, Deputy Assistant Attorney General and Acting Chief of Staff of the Department of Justice’s Civil Rights Division, briefly described how the Civil Rights Division convened federal government officials to discuss how they could work together on AI issues. He cited challenges government agencies face in hiring people with technical expertise and integrating them into their legal investigation and enforcement processes. He predicted, “AI issues are going to be something that we’re all going to be grappling with for many years to come.”

EEOC Chair Charlotte Burrows concurred, observing, “This is really an area where we are taking a whole of government approach…We are talking to each other and other agencies at the principal levels and up and down at all levels.”

Chair Burrows emphasized that the existing laws the EEOC enforces “really do give a lot of applicable tools to protect against harms” from AI, and that it was important for technology creators to understand that. She mentioned the EEOC’s prior technical assistance documents regarding the applicability of the Americans With Disabilities Act and Title VII to AI tools.

In response to a question regarding whether the EEOC was considering updating the 1978 Uniform Guidelines on Employee Selection Procedures, Chair Burrows suggested that while some “clarification” might be forthcoming, she felt the Uniform Guidelines were “fairly clear.” In her view, the problem was not with the guidelines themselves, but rather with the “way people are thinking” about them, including “an over-simplification” of the 4/5ths test[3] set forth in the Uniform Guidelines. Chair Burrows emphasized, “I think the Uniform Guidelines are pretty clear that this is a rule of thumb, not an active bright-line,” and noted that the EEOC’s statistical cases involving discrimination were “more sophisticated than the 4/5ths test.”
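
To illustrate the arithmetic behind the 4/5ths rule of thumb that Chair Burrows referenced (quoted in full in footnote 3), the following is a minimal Python sketch. The group labels and applicant counts are hypothetical, invented purely for illustration, and the check implements only the rule-of-thumb ratio comparison, not a legal test.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate: the number selected divided by the number of applicants."""
    return selected / applicants


def passes_four_fifths(rate_a: float, rate_b: float) -> bool:
    """Apply the Uniform Guidelines' four-fifths rule of thumb.

    Returns True if the lower selection rate is at least 4/5 (80%) of the
    higher rate. A False result is only a flag for further review -- as
    Chair Burrows noted, this is a rule of thumb, not a bright line.
    """
    lower, higher = sorted([rate_a, rate_b])
    if higher == 0:
        return True  # no one selected from either group; nothing to compare
    return (lower / higher) >= 0.8


# Hypothetical numbers, invented for illustration only:
# 48 of 80 applicants in group A selected (60%); 12 of 40 in group B (30%).
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30
print(passes_four_fifths(rate_a, rate_b))  # 0.30 / 0.60 = 0.50 < 0.80 -> False
```

As the output shows, a ratio below 0.8 merely flags the selection procedure for the kind of closer, more sophisticated statistical review that Chair Burrows described.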

In additional comments, Chair Burrows identified two distinct issues regarding AI.

First, consistent with remarks that Chair Burrows has made in the past, she cited concerns that people with protected characteristics are disproportionately over-represented in the “bad” data sets being used as AI training data, and that those same groups are disproportionately under-represented in the “good” data sets being used to train AI.

Second, Chair Burrows expressed concern about algorithms and AI being used to monitor employees, especially monitoring that might occur without employees knowing about it, or understanding enough about it, to exercise their rights under the Americans With Disabilities Act. She also expressed concern that an employee could be “stuck in a feedback loop” without being able to “find a person to talk to” to request an accommodation, in violation of the ADA’s interactive process requirements.

NLRB General Counsel Jennifer Abruzzo also spoke at length about her position that an employer’s electronic surveillance and algorithmic monitoring of workers could violate Section 7 of the National Labor Relations Act, citing her prior General Counsel memorandum on electronic monitoring, issued in October 2022.

GC Abruzzo specifically expressed concerns regarding electronic monitoring or algorithmic management that prevents workers from taking breaks together, or that forces them to increase the pace of their work. She asserted that these practices were “interfering with [workers’ rights] … to engage together to address areas of mutual concern,” rights that workers are entitled to exercise under the NLRA. She concluded her remarks on AI by emphasizing, “AI can be a great thing in so many ways, but it also can interfere with workers’ rights.”

Implications for Employers

President Biden’s directive that the federal government adopt a “whole of government” approach to regulating AI is in full swing, as evidenced by recent comments from multiple agency heads. Employers should expect the heightened enforcement scrutiny on AI issues from OFCCP, the EEOC, the NLRB, and the Department of Justice to continue. We will continue to monitor these developing issues, especially as the Department of Labor and other agencies work to meet the multiple deadlines for issuing AI-related guidance set forth in President Biden’s executive order on AI.

[1] Contractors who are scheduled for a compliance audit are now required to “identify and provide information and documentation of policies, practices, or systems used to recruit, screen, and hire, including the use of artificial intelligence, algorithms, automated systems or other technology-based selection procedures” at the time of the initial desk audit submission. 

[2] In fact, the EEOC’s “promising practices” document regarding the prevention of harassment observes that while the practices described in the document are not legal requirements, “refraining from taking certain actions recommended here as promising practices may increase an employer’s liability risk in certain circumstances.”

[3] As described in the EEOC’s technical assistance on Title VII and AI, issued in 2023, “The four-fifths rule, referenced in the [Uniform] Guidelines, is a general rule of thumb for determining whether the selection rate for one group is ‘substantially’ different than the selection rate of another group. The rule states that one rate is substantially different than another if their ratio is less than four-fifths (or 80%).”


