The Daily Tar Heel
Printing news. Raising hell. Since 1893.
Wednesday, Feb. 28, 2024


If you ask ChatGPT, the popular generative artificial intelligence chatbot, how students should use it properly, it says it “serves as a learning aid and not a shortcut to the learning process.” If you ask a room full of people, though, they probably won't agree on how ChatGPT should fit into education and the classroom.

Currently, the UNC Honor Code does not mention proper conduct around the use of large language models and generative AI. But the University is making efforts toward more clarity, with the UNC Generative AI Committee commissioned to provide guidance and resources.

In the meantime, AI-related policies have varied across departments and course syllabi. Where a policy is outlined, some instructors ask students to disclose their use of AI in submitted work; others have banned its use entirely. This lack of a universal policy may leave students confused about what is and isn't appropriate when using AI technology.

A BestColleges survey of 1,000 current undergraduate and graduate students shows that one in five college students has used AI tools like ChatGPT, even though 51% of respondents believe using these tools constitutes cheating or plagiarism.

This has led many educators to examine how students can maintain academic integrity and productive learning. 

“I'm concerned that students are going to use it for shortcuts that are going to leave them with fewer skills and lower confidence in their own ability to write code or do problem sets,” William Goldsmith, a historian and UNC professor of public policy, said.

AI can be a productive tool for students if used properly; it can be a great resource for checking code or looking for grammar mistakes that the human eye may miss. However, models like ChatGPT are not completely reliable and don’t always provide the most factual information.

We are still early in the process of crafting AI policies. The most challenging part is that what works for one department may not work for another, which may explain the variation we have seen in approaches to its usage.

The ambiguity about generative AI can be compared to the emergence of Wikipedia and calculators. 

“Calculators called into question how much students really needed to memorize when it came to the relationship between numbers,” Goldsmith said. 

 “When the calculator was invented, we had a much longer timeline between when it was invented and when it became ubiquitous. But ChatGPT was a product launched in November 2022, and it was the quickest product to go to 100 million users we’ve ever seen. It has basically gone to something that is all over this campus in the course of a semester.” 

We simply have not had enough time to process this technological change. Goldsmith said that faculty, like students, are confused, and frankly tired of having to constantly adapt their pedagogy to technological change since the COVID-19 pandemic.

Already three weeks into the new semester, students still have not gained clarity on when and how to use AI — despite the explosion of ChatGPT.  

“Only one professor has brought it up,” Cassie Andradottir, a junior at UNC, said. She said her professor is not against students using AI as long as they are transparent about how they use it in their work. 

But the “how” is usually left ambiguous. 

“I think some people like the confusion. They can take advantage of it and have a little more leeway for how much they can use AI for,” Andradottir said. 

We are still a long way from the creation of AI rules, and no one knows what they will look like. Unlike universal policies against cheating, academic violations involving AI will be much more contextual, and the policies to evaluate such behavior will most likely be tailored to individual academic disciplines.

For now, creating AI policies that apply to everyone is an unrealistic vision. But this also opens up space for healthy dialogue about the ethics and integrity of AI. Students, instructors and university leadership should take this opportunity to think critically and reflect on navigating honor code policy amid the emergence of new technology.
