Generative AI is incompatible with the goals of education
Generative AI is here to stay. The initial fanfare that accompanied the launch of ChatGPT — a mix of caution, optimism and apocalyptic fears — has settled down as AI continues to make inroads in art, industry and education. Students are perhaps the demographic most acquainted with generative AI, its strengths and its limitations. A temptation for the short-on-time and sleep-deprived, ChatGPT draws a few common defenses from its avid users when questions of academic ethics arise: something along the lines of “I just use it for inspiration,” “it’s just a tool, like spellcheck or a calculator” or “it’s just a smarter Google.”
But these convenient rationales leave much to be desired. Generative AI may not be the technological villain it’s made out to be, but it is more than a tool in any familiar sense of the word. It stands to change the way we live and learn — for the worse if we let it. The very features that have captured the public imagination, its human qualities and its knack for higher-order thinking, are what make generative AI incompatible with the fundamental purpose of education.
At all levels of schooling, today’s students have access to a wealth of electronic tools. Calculators perform the basic operations once done by hand. Spellcheck and search engines have automated much of the manual work of proofreading and research. Proponents of generative AI might consider ChatGPT to be another such tool — a logical extension of automation in the classroom. AI, however, differs from existing tools in meaningful ways.
Our existing academic tools allow students to access the known. Calculators, spellcheck and search engines all serve essentially as reference guides. They bring information within reach, whether it be the sum of two numbers, the correct spelling of the word “Wednesday” or the current scientific consensus on man-made climate change. Most of us can agree that the value of learning lies not solely in the memorization of discrete facts, but in the use of those facts towards more complex and novel ends — in other words, in higher-level thinking. Good students analyze their world, linking separate points of knowledge into an integrated whole. They think, not only because they are told to do so, but because it brings fulfillment, a feeling of usefulness and the prospect of an improved human condition. These tools facilitate the retrieval of knowledge. It’s up to students to interpret, analyze and apply it in the ways they see fit. At least that’s how it was before the rise of generative AI.
ChatGPT, at least in a superficial sense, can perform many of the higher-level thinking tasks that were once the domain of human beings alone. With minimal student direction, it can produce, from its endless wealth of human inputs, a polished summary of “Jane Eyre” and craft an essay on the novel’s major themes in a pleasant, disembodied voice. Naturally, these capabilities carry the potential for misuse.
Amanda Walters, a former K-12 teacher and a graduate teaching assistant in the Educational Psychology department at Virginia Tech, commented on the ethical implications of generative AI.
“It complicates the topic of plagiarism. It creates a new grey area, so I think it can be used in an unethical way,” Walters said. “Generative AI can definitely be used as a crutch for writing… it can hinder research skills if students are not willing to look deeply and analyze what they get as a result and could hinder the production of coherent paragraph writing.”
Search engines can do some of these same things. If reading the book in full is off the table, a few Google searches could also yield a brief summary, a list of themes and a few sample essays. However, even a student bent on doing the least would still have to integrate all these pieces into a coherent whole. Even a blatant plagiarist would have to select a source to copy.
What about students who use generative AI for inspiration rather than full-on content generation? After all, we might ask a friend for a good title for a project or a good ending to a piece of fiction. In that scenario, however, both the student and the friend are human beings, endowed with unique tastes, preferences and experiences that no machine, no matter how complex, can faithfully measure or account for. Each of them shares in the satisfaction of creating, and each gets a chance to hone their own sense of good taste. To young learners with little confidence in their own voice and preferences, generative AI might seem like a flawless tool, endlessly more capable and consistent in expression than they could ever be. But remove the impulse to ask AI, and students will find real, living role models to inspire them. They will learn to trust their aesthetic instincts and acknowledge their own creative potential.
Moreover, the line between borrowing and stealing is notoriously thin. With little oversight and little incentive to create of their own accord, some students will find the temptation too great to resist. And for all its capabilities, AI is not perfect. The wealth of data on which ChatGPT is trained is not screened for bias or accuracy, and when asked to cite sources for its responses, it often fabricates them outright.
Despite the prevalence of AI and mounting ethical concerns, many public schools lack a clear course of action. A survey from Education Week showed that, as of February 2024, 79% of educators said their school districts had issued no guidance on generative AI. For many students, especially young learners who’ve yet to discover that the value of education is far greater than a passing grade, generative AI will be an eternal temptation. It is an efficient substitute worker whose authority and speed condition students to favor the words of a machine over their own burgeoning voices. Of all the limits an educator can impose on their students, a ban on generative AI in the classroom will do the greatest good.