
Generative AI in Education: Another Mindless Mistake?


Picture the scene: A new technology has been introduced that is unlike anything we’ve seen before. It creates a new means of sharing information that is both interesting and entertaining, and it promises to generate new forms of knowledge on a regular basis. Indeed, this new creation appears so transformative that it leads one of the world’s most prominent entrepreneurs to predict that the method of transmitting knowledge to students will be radically altered in just a few years.

I’m referring, of course, to 1913 and the introduction of motion-picture technology—movies—which led Thomas Edison to predict that books would be obsolete in schools within a decade.

What, did you have something else in mind?

Here we go again. The introduction of generative artificial intelligence, in particular “large language model” (LLM) chatbots such as ChatGPT, has led many to predict that this new technology will transform education as we know it. Bill Gates, for example, predicted in April 2023 that within 18 months—by October of this year—generative AI “will be as good a tutor as any human could be.” Not to be outdone, Sal Khan, the founder of Khan Academy, thinks AI will be “probably the biggest transformation that education has ever seen.” His organization is marketing the education-focused chatbot Khanmigo to schools right now.

Why do we keep making this mistake? Why do we seem doomed to repeat the endless cycle of ed-tech enthusiasts vastly overpromising and radically underdelivering on results? There are many reasons, but I’ll focus here on just one: we keep misunderstanding the role that technology can play in education because we’ve failed to properly understand the science of how we humans think and learn.

Here’s one example. Cognitive scientists use the term “theory of mind” to describe our capacity to ascribe mental states to ourselves and to others. In education, teachers must have a rough understanding of what’s happening in the minds of each of their students so that they can make inferences about what misconceptions a particular student may have, or what existing knowledge they are drawing upon when trying to learn a new concept. Importantly, emerging science suggests that we develop this ability through explicit cultural practices—our conversations and cooperative interactions with other humans.

You know, like what happens every day in school.

Neither ChatGPT nor Khanmigo nor any other existing LLM-based technology is presently capable of developing anything close to a robust theory of mind regarding what’s happening in the minds of its human users. That’s just not how the technology works. Instead, these models are essentially next-word prediction engines, meaning that after they’ve been prompted with human-generated text, they run a complicated set of statistical algorithms to predict what text to generate as output. This often feels like human conversation, as if there were another mind operating behind the machine, but feelings can be deceiving.
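
To make that concrete, here is a toy sketch in Python of the next-word prediction idea. The word probabilities below are invented for illustration, and a real LLM computes its predictions with an enormous neural network trained on billions of documents, but the basic loop (predict a likely next word, append it, repeat) has the same shape.

    # Toy next-word predictor (illustrative only; not any real chatbot's code).
    import random

    # Hypothetical probabilities: given the last word, how likely is each next word?
    NEXT_WORD_PROBS = {
        "the": {"pen": 0.5, "crayon": 0.3, "student": 0.2},
        "pen": {"costs": 0.7, "and": 0.3},
        "costs": {"two": 0.6, "more": 0.4},
    }

    def generate(prompt_word, length=4):
        """Extend a prompt one word at a time by sampling from the probabilities."""
        words = [prompt_word]
        for _ in range(length):
            options = NEXT_WORD_PROBS.get(words[-1])
            if options is None:  # no statistics for this word, so stop
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g., "the pen costs two"

Nothing in that loop inspects, much less models, what the user believes or misunderstands; it only tracks which words tend to follow which.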

To demonstrate this deficiency, let’s do some algebra together (sorry). Consider this simple problem: If a pen and crayon together cost $2.50, and the crayon costs $2 less than the pen, how much do the pen and crayon each cost? I’ll spare you the mental effort: if P = cost of the pen, then the cost of the crayon is (P–2), which means that P + (P–2) = $2.50.
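
And if you’d rather not take my word for the rest, here’s a quick check of the arithmetic in Python (my own sketch, separate from any chatbot exchange):

    # Solve P + (P - 2) = 2.50, mirroring the algebra above.
    total = 2.50       # the pen and crayon together cost $2.50
    difference = 2.00  # the crayon costs $2 less than the pen
    # P + (P - 2) = 2.50  ->  2P - 2 = 2.50  ->  2P = 4.50  ->  P = 2.25
    pen = (total + difference) / 2
    crayon = pen - difference
    print(f"pen = ${pen:.2f}, crayon = ${crayon:.2f}")  # pen = $2.25, crayon = $0.25

So the pen costs $2.25 and the crayon costs 25 cents. Keep that answer in mind.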

What happens when we ask Khanmigo to help us simplify the equation from here? The specifics of the chatbot response will vary, but here’s one example of how it can go (with my prompts on the right, Khanmigo’s responses on the left):



[Screenshot: a sample exchange with Khanmigo]
