DOL Issues Artificial Intelligence Principles
On May 16, 2024, the U.S. Department of Labor (DOL) released a document entitled “Department of Labor’s Artificial Intelligence and Worker Well-being: Principles for Developers and Employers.” The document outlines several artificial intelligence principles (“AI Principles”) to guide developers that create AI and employers that deploy it in designing and implementing these emerging technologies in ways that enhance job quality and protect workers’ rights. The DOL’s AI Principles emphasize ethical development; transparency and meaningful worker engagement in AI system design, use, governance, and oversight; protection of workers’ rights; and use of AI to enhance work. The document was issued in response to President Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (“AI Executive Order”), issued on October 30, 2023, which directed the DOL to develop best practices for employers, agencies, and federal contractors.
The document states that the DOL’s AI Principles apply during the entire lifecycle of AI, from the development, testing, and deployment of AI systems in the workplace to their oversight, use, and auditing. The document further states that the AI Principles are applicable to all sectors and intended to be mutually reinforcing, though not all principles will apply to the same extent in every industry or workplace. Importantly, the DOL states that “[t]he Principles are not intended to be an exhaustive list but instead a guiding framework for businesses. AI developers and employers should review and customize the best practices based on their own context and with input from workers.” This statement underscores that the AI Principles are not prescriptive. The AI Principles include:
- [North Star] Centering Worker Empowerment: The DOL’s document states that workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace. This principle reflects a strong emphasis on empowering workers and their representatives, consistent with President Biden’s pro-union stance. Employers should follow related developments at the National Labor Relations Board (NLRB), which may rely on this principle as a basis for issuing unfair labor practice complaints alleging violations of the National Labor Relations Act (NLRA).
- Ethically Developing AI: The document emphasizes that AI systems should be designed, developed, and trained in a way that protects workers. This echoes guidance from the Blueprint for an AI Bill of Rights, which provides that automated systems should be developed in consultation with experts and should include pre-deployment testing, identification and mitigation of risks, and ongoing monitoring.
- Establishing AI Governance and Human Oversight: The DOL’s AI Principles also stress that organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems used in the workplace.
- Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace. This principle is similar to guidance in the Blueprint for an AI Bill of Rights, which recommends notice, explanation, and, to the extent possible, agency for those affected regarding the collection, use, access, transfer, and deletion of their data in automated systems. Technical guidance issued by the U.S. Equal Employment Opportunity Commission (EEOC) contains a similar provision regarding notice of the use and function of AI tools in employment decisions. The EEOC also recommends providing alternatives unless doing so would cause undue hardship.
- Protecting Labor and Employment Rights: AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.
- Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.
- Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions related to AI. The DOL provides no details on how employers can do so. In fact, the AI Executive Order directed the DOL to issue a report by the end of April 2024 on how the government can support workers displaced by AI, but the DOL has not yet issued that report.
- Ensuring Responsible Use of Worker Data: Workers’ data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly. This is similar to the Blueprint for an AI Bill of Rights guidance that people should, to the maximum extent possible, have agency regarding the collection, use, access, transfer, and deletion of their data in automated systems.
The same day the DOL issued its AI Principles, the White House issued a fact sheet concerning “critical steps to protect workers from risks of artificial intelligence.” In the fact sheet, the Biden administration outlined additional principles targeting the use of AI in the workplace, including not simply informing workers of the use of AI in the workplace, but also ensuring workers “have genuine input in the design, development, testing, training, use, and oversight” of such AI. The principles also include informing applicants of AI systems used by employers in hiring, ensuring AI systems do “not violate or undermine workers’ right to organize,” and ensuring that AI systems “improve job quality” and “support or upskill workers.” The Biden administration further directed that employees’ data “should be limited” and used only “to support legitimate business aims.” Like the DOL’s AI Principles, these principles do not specify what conduct would run afoul of them.
AI Principle Key Takeaways
First, the Biden administration and the DOL’s AI Principles make clear that the federal government is taking a comprehensive approach to AI in the workplace, from hiring through leaves and accommodation, day-to-day performance tools, wage and hour policies, and worker organization under the NLRA. While the DOL has no jurisdiction to enforce the NLRA, we anticipate this “whole of government” approach will bring additional scrutiny from other applicable agencies, such as the EEOC and NLRB. Indeed, the EEOC has already issued its technical assistance document on “assessing impact in software, algorithms, and artificial intelligence” for issues that could arise under Title VII of the Civil Rights Act of 1964. Moreover, NLRB General Counsel Jennifer Abruzzo has stated her concern that “employers could use these technologies to interfere with the exercise of Section 7 rights under the National Labor Relations Act by significantly impairing or negating employees’ ability to engage in protected activity.”
Second, the AI Principles from the DOL are merely one of the latest in a string of guidance, statements, and other resources issued by federal agencies targeting AI. The DOL’s AI Principles underscore that much of this guidance is advisory, aimed at protecting against some of the potential harms of AI, but is neither federally mandated nor binding. Indeed, the DOL emphasizes that its AI Principles are not “intended to be an exhaustive list but instead a guiding framework for businesses.”
Third, the AI Principles state that the DOL developed the document “with input from workers, unions, researchers, academics, employers, and developers, among others, and through public listening sessions,” but the document does not describe what input employers or developers provided regarding AI in the workplace in connection with the AI Executive Order.
Fourth, while the document is newly issued, many of the DOL’s AI Principles recycle key principles from earlier federal guidance documents and are in some cases considerably weaker. For example, as compared with the Blueprint for an AI Bill of Rights that the White House issued in October 2022, the document’s consistent use of the word “should” reiterates that these best practices are not prescriptive.
Fifth, the DOL’s AI Principles track recent federal legislative developments related to AI. Most notably, on May 15, 2024, a bipartisan group of U.S. senators released their legislative plan for AI, entitled “Driving U.S. Innovation in Artificial Intelligence.” The plan calls for annual spending of $32 billion by 2026 for AI research and development, creation of a federal data privacy law, and efforts to prevent deepfakes in elections. The plan does not offer more specific proposals, however, instead calling for Congress and federal agencies to regulate AI because, as Senator Chuck Schumer stated, “It’s very hard to do regulations because A.I. is changing too quickly.” And while the Senate’s plan defers to agency regulation on the theory that AI is changing too quickly for comprehensive legislation, such regulation would involve a notice-and-comment period, delaying implementation of any substantive regulatory requirements. The European Union (EU), on the other hand, has already enacted its AI Act, which sets forth specific requirements for U.S. companies using AI systems in the EU market, including prohibitions on practices the AI Act deems an unacceptable risk, such as systems that manipulate an individual’s behavior, systems that infer sensitive characteristics such as religious beliefs or sexual orientation, and systems used in the workplace for emotion recognition. Employers with operations in the EU should pay particular attention to the EU’s AI Act and consult with counsel if they are currently using AI systems.
Conclusion
The DOL’s AI Principles show how the Department is attempting to regulate AI in the workplace without new legislation from Congress. More AI guidance is expected. The press release accompanying the DOL’s AI Principles notes that the DOL “will soon provide employers and developers with best practices to consider as they implement the AI principles.” Ultimately, the rapidly evolving regulatory landscape requires that employers and their counsel pay close attention to current and developing legal authority concerning AI.