Generative AI

Students use generative AI to write essays and solve problems: Should we be worried?


Generative AI has entered the mainstream in recent years, prompting reactions ranging from cautious fascination to dystopian fear. These ever-improving programs are being used for a variety of creative and academic purposes.

While AI programs that generate digital art, like DALL-E, have raised questions of ethics and plagiarism, one of the most well-known AI models currently in use is OpenAI's ChatGPT.

According to the Pew Research Center, among the two-thirds of teenagers who are aware of ChatGPT, 19% have used it for schoolwork, but 57% think using the software to write an essay is unacceptable.

Those attitudes shift significantly for mathematics and research: 39% and 69% of teens, respectively, think it's acceptable to use the software for those purposes.

ChatGPT is a large language model chatbot that OpenAI claims can “help with writing, learning, brainstorming and more.” (Madelyn Schneider)

Vaughan James is a science communication specialist at the University of Florida’s Biodiversity Institute with experience teaching students. He views this technology as a double-edged sword.

“I do think [ChatGPT is] useful for rote writing tasks,” James said. “A lot of writing is formulaic. These styles lend themselves well to AI intervention, at least in terms of structure.”

“[But] content is a whole other story,” he said. 

To James, while these AI tools can aid in some tedious academic tasks, checking AI-generated content for accuracy is still incredibly important.

“While I think that it’s fine for people to use the tools at their disposal to do the work they need to do, that only really applies if the tools work correctly and consistently,” he said. “I’m not at all convinced that ChatGPT does either.”

These concerns are not unfounded: ChatGPT is known to generate false information, which further complicates efforts to regulate it.

“Language models don’t provide facts; they provide language,” James said. “The consequences are people believing they’re getting facts and just trusting what they read—unproductive at best and dangerous at worst.”  

Josh Gellers, a University of North Florida professor and member of its Generative AI Working Group, believes that using AI in academia can have benefits and risks.

“It’s a massive leap forward in technology,” Gellers said.

However, he said this new software comes with significant drawbacks. ChatGPT is an example of a large language model, a program trained on vast amounts of already-published internet text.

This method of aggregating and repurposing data has fueled the ethics and plagiarism debates that dominate the current conversation around AI.

“[These AIs] take a lot of data, and they use parameters [to] try and establish connections,” Gellers said. “It predicts what it thinks you’re looking for.” 

Because of how these models gather information, Gellers argues, their output cannot be considered the work of the person using the program.

“It’s not the work of the student,” Gellers said.

The areas where AI may be the most controversial are creative fields like writing, art, and music production. 

For 25-year-old Noah Studeman, a Jacksonville University graduate who majored in theater, AI models like ChatGPT can be helpful tools in the creative process. But they can also be a crutch.

“In terms of using [ChatGPT] for academic and creative work, it’s good to help,” Studeman said. “I think it can be helpful, but you have to make sure that you don’t just use the AI model word-for-word. Otherwise, you’re not putting in any actual effort yourself.”

Despite the controversy, Studeman thinks AI can be helpful in the creative industry if used carefully and in moderation. 

“I think AI is great for giving you ideas and maybe helping with world-building,” he said. “[But] it’s easier to not use it at all and do [the work] yourself.”

To Studeman, the value of human nuance is far more important in creative endeavors than AI programs that have “no capability of understanding emotion.” 

There are currently no formal rules regulating the use of generative AI at UNF, but there are resources for professors, such as online guides to “ChatGPT-proof” assignments and tools to help detect AI.

___

For more information or news tips, or if you see an error in this story or have any compliments or concerns, contact [email protected].


