AI

Atkinson, Andelson, Loya, Ruud & Romo


You may have heard the trendy new buzzwords circulating in education: “ChatGPT” and “generative artificial intelligence” (“AI”)[1].  Much as the internet did when it first became mainstream, AI is here to stay and will become increasingly integrated into our society.  In most, if not all, classrooms, students and teachers alike have had some exposure to AI.  Students are using AI to write essays and other written assignments.  Teachers are using AI to create lesson plans, build personalized learning plans for students, develop adaptive tutoring, provide homework assistance, and explain complex concepts in simple terms within seconds. 

While AI offers many benefits to local educational agencies (“LEAs”), it can also create significant legal issues for them.  As discussed further below, those issues include the following: (1) algorithmic bias and discrimination; (2) accessibility for disabled employees; (3) privacy in student records; (4) content moderation of explicit material for minors; and (5) bargaining implications of job replacement.

1. Algorithmic Bias and Discrimination

AI is based on algorithms.  An algorithm is a set of rules or instructions that a computer follows to solve a problem and achieve an intended goal.  (Akgun, S. & Greenhow, C., Artificial Intelligence in Education: Addressing Ethical Challenges in K-12 Settings, AI Ethics (2022).)  AI can be useful, or discriminatory, depending on the data used to train it.  If an LEA uses AI as a recruitment tool, there is a risk that the AI will discriminate against protected classes of individuals.  For example, in 2015 Amazon discovered that the algorithm it used to screen job applicants was biased against women.  (https://www.reuters.com/article/idUSKCN1MK0AG/.)  The bias arose because the algorithm was trained on resumes submitted over the preceding ten years, and because most of those applicants were men, it learned to favor men over women.  LEAs should therefore be wary of using AI for recruitment purposes, as illustrated by the sketch below.
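To illustrate the mechanism, the following is a minimal sketch in Python showing how a model trained on skewed historical hiring data can reproduce that skew.  The records and the keyword-frequency scoring here are hypothetical inventions for illustration only; they are not how Amazon's system, or any actual vendor's system, worked.

```python
# Hypothetical sketch: a "model" trained on skewed historical hiring
# outcomes reproduces the skew present in its training data.
from collections import Counter

# Invented historical records: (resume keyword, past human hiring decision).
# Most past applicants were men, so "hired" co-occurs mostly with keywords
# drawn from men's resumes.
historical = [
    ("captain_chess_club", "hired"),
    ("captain_chess_club", "hired"),
    ("mens_soccer_team", "hired"),
    ("womens_chess_club", "rejected"),
    ("womens_soccer_team", "rejected"),
]

# "Training" is just counting: how often each keyword appeared, and how
# often it appeared on a resume that was ultimately hired.
totals = Counter(keyword for keyword, _ in historical)
hires = Counter(keyword for keyword, outcome in historical if outcome == "hired")

def score(keyword: str) -> float:
    """Estimated hire rate for resumes containing this keyword."""
    seen = totals[keyword]
    return hires[keyword] / seen if seen else 0.5  # 0.5 = no data, neutral

# The model now penalizes keywords associated with women -- not because
# anyone wrote a discriminatory rule, but because the training data
# contained few hired women.
print(score("captain_chess_club"))  # 1.0 -> favored
print(score("womens_chess_club"))   # 0.0 -> penalized
```

The point of the sketch is that no one programs a discriminatory rule; the discrimination is latent in the historical data, which is why the training data and outcomes of any AI recruitment tool should be audited before the tool is relied upon.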

2. Accessibility

Some AI platforms require users to physically type prompts on a computer and read the responses the platform generates.  An employee with a disability who cannot physically type prompts or read on-screen responses may be severely limited in using such AI, which can create accessibility issues under various state and federal laws.  Several laws protect employees against disability discrimination, including Title I of the Americans with Disabilities Act, the California Fair Employment and Housing Act, and Section 504 of the Rehabilitation Act of 1973, to name a few.  To the extent LEAs adopt AI in the workplace, they will have a legal obligation to reasonably accommodate employees with disabilities who are not physically able to use and read a computer, and a failure to do so can result in liability for disability discrimination claims.

3. Privacy in Student Records

Some AI platforms are publicly available, consumer-facing services (e.g., ChatGPT).  (ChatGPT is sometimes loosely described as “open source,” but it is not; “open source” refers to software whose source code is freely available for anyone to use, modify, and/or distribute.  The relevant point here is that these platforms are open to the public and may retain and reuse what users type into them.)  Disclosing private information in such a platform is somewhat analogous to disclosing information in a public forum like social media.  If private information is entered into a prompt, the platform may use it to train the underlying model, and other users may later see that information reflected in the platform's responses.  In addition, on a platform like ChatGPT (which is owned and operated by OpenAI), OpenAI employees may be able to access and review prompts inputted by users and the responses ChatGPT generates.

In a school setting, some teachers, administrators, or administrative staff may use AI to process private information about students.  We caution LEAs against using publicly available AI platforms to process, draft, develop, or summarize records that contain personally identifiable information about students (e.g., special education records), since state and federal law, including the Family Educational Rights and Privacy Act (“FERPA”), require LEAs to protect the privacy of student records.  Given how quickly AI technology is developing, we expect platforms will eventually provide stronger privacy protections for the information users input; currently, many publicly available AI platforms do not.  When utilizing AI in any context involving student information, we recommend a detailed review of the AI software's privacy policy and an analysis of what information is stored and to whom it is disclosed.

4. Content Moderation for Minors

The Children’s Internet Protection Act (“CIPA”) prohibits LEAs from receiving federal assistance for internet access unless they block or filter access to visual depictions that are obscene, pornographic, or harmful to minors (defined as individuals under 17 years of age), and prevent minors from accessing such content.  (47 U.S.C. § 254.)  Because generative AI is capable of producing obscene, pornographic, or otherwise harmful content, it also gives minors the ability to create such content.  Earlier this year, several students at Beverly Hills Unified School District were expelled for sharing fake AI-generated nude photos of some of their classmates.  (https://www.foxla.com/news/beverly-hills-unified-expels-five-students-after-sharing-ai-generated-nude-photos.) 

To the extent that LEAs allow students to use generative AI tools at school, they need to be mindful of their obligations under CIPA to block or filter prohibited content.

5. Bargaining Implications

As the State of California continues to experience severe budget deficits, which affect the funding LEAs receive, some LEAs may face increased pressure to cut costs.  In light of that pressure, some may consider using generative AI to replace bargaining unit work.  For example, AI may be able to perform some basic tasks, such as payroll and accounting functions, in an automated fashion.  If an LEA decides to use AI to replace certain jobs, that decision is considered contracting out bargaining unit work, which triggers a legal obligation to bargain the decision with the affected exclusive representatives.  As such, LEAs should be mindful of the bargaining implications of using AI to perform work customarily done by employees.  

This Alert cannot summarize the myriad legal issues that AI is creating for LEAs, as many of these issues are novel, complex, and challenging to navigate.  As AI technology develops at an exponential pace, so will the legal issues that come with it.  Should you have any questions concerning the topics discussed in this Alert, please do not hesitate to contact the authors or your usual counsel at AALRR for guidance.  


[1] Generative AI is artificial intelligence technology that can produce various forms of content, such as text, imagery, graphics, audio, and video, within seconds. 

This AALRR publication is intended for informational purposes only and should not be relied upon in reaching a conclusion in a particular area of law. Applicability of the legal principles discussed may differ substantially in individual situations. Receipt of this or any other AALRR publication does not create an attorney-client relationship. The Firm is not responsible for inadvertent errors that may occur in the publishing process.

© 2024 Atkinson, Andelson, Loya, Ruud & Romo


