After listening to his peers rave about the generative AI tool ChatGPT, Cobbs decided to toy around with the chatbot while writing an essay on the history of capitalism. Because the tool is best known for generating long-form written content in response to user prompts, Cobbs expected it to produce a nuanced and thoughtful response to his specific research directions. Instead, his screen displayed a generic, poorly written paper he’d never dare to claim as his own.

“The quality of writing was appalling. The phrasing was awkward and it lacked complexity,” Cobbs says. “I just logically can’t imagine a student using writing that was generated through ChatGPT for a paper or anything when the content is just plain bad.”

Not everyone shares Cobbs’ disdain. Ever since OpenAI launched the chatbot in November, educators have been struggling with how to handle a new wave of student work produced with the help of artificial intelligence. While some public school systems, like New York City’s, have banned the use of ChatGPT on school devices and networks to curb cheating, universities have been reluctant to follow suit. In higher education, the introduction of generative AI has raised thorny questions about the definition of plagiarism and academic integrity on campuses where new digital research tools come into play all the time.

Make no mistake: ChatGPT is not the first technology to raise concerns about the improper use of the internet in academia. When Wikipedia launched in 2001, universities nationwide scrambled to rethink their research philosophies and their understanding of honest academic work, expanding policy boundaries to keep pace with technological innovation. Now the stakes are more complex, as schools must decide how to treat work produced by a bot, not merely how to attribute work found online.
The world of higher education is playing a familiar game of catch-up, adjusting its rules, expectations, and perceptions as other professions do the same. The only difference now is that the internet can think for itself.

According to ChatGPT, plagiarism is the act of using someone else’s work or ideas without giving proper credit to the original author. But when the work is generated by something rather than someone, that definition is tricky to apply. As Emily Hipchen, a board member of Brown University’s Academic Code Committee, puts it, students’ use of generative AI raises a critical point of contention. “If [plagiarism] is stealing from a person,” she says, “then I don’t know that we have a person who is being stolen from.”

Hipchen is not alone in her speculation. Alice Dailey, chair of the Academic Integrity Program at Villanova University, is also grappling with whether an algorithm, particularly a text-generating one, can be treated as a person. Although Dailey acknowledges that this technological growth raises new concerns in academia, she doesn’t find the territory entirely unexplored. “I think we’ve been in a version of this territory for a while already,” Dailey says. “Students who commit plagiarism often borrow material from a ‘somewhere’—a website, for example, that doesn’t have clear authorial attribution. I suspect the definition of plagiarism will expand to include things that produce.”

Eventually, Dailey believes, a student who uses text from ChatGPT will be seen as no different from one who copies and pastes chunks of text from Wikipedia without attribution.

Students’ views on ChatGPT are another issue entirely. There are those, like Cobbs, who can’t imagine putting their name on anything bot-generated, but there are others who see it as just another tool, like spellcheck or even a calculator.
For Brown University sophomore Jacob Gelman, ChatGPT exists merely as a convenient research assistant and nothing more. “Calling the use of ChatGPT to pull reliable sources from the internet ‘cheating’ is absurd. It’s like saying using the internet to conduct research is unethical,” Gelman says. “To me, ChatGPT is the research equivalent of [typing assistant] Grammarly. I use it out of practicality and that’s really all.” Cobbs expressed a similar sentiment, comparing the AI bot to “an online encyclopedia.”

But while students like Gelman use the bot to speed up research, others take advantage of its high-capacity prompt input to generate completed works for submission. It might seem obvious what qualifies as cheating here, but schools across the country offer contrasting takes. According to Carlee Warfield, chair of Bryn Mawr College’s Student Honor Board, the school considers any use of these AI platforms plagiarism; the tool’s popularization simply demands greater focus on evaluating the intent behind students’ violations. Warfield explains that students who turn in essays entirely produced by AI are categorically different from those who borrow from online tools without knowing the standards of citation.

Because the ChatGPT phenomenon is still so new, students’ confusion about the ethics is understandable. And it’s unclear what policies will remain in place once the dust settles—at any school. Amid fundamental change in both the academic and technological spheres, universities are being forced to reconsider their definitions of academic integrity to reflect the circumstances of society. The only problem is, society never stands still.

“Villanova’s current academic integrity code will be updated to include language that prohibits the use of these tools to generate text that then students represent as text they generated independently,” Dailey explained. “But I think it’s an evolving thing.
And what it can do and what we will then need in order to keep an eye on will also be kind of a moving target.” Ultimately, Dailey says, schools may need rules that reflect a range of variables. “My guess is that there will be the development of some broad blanket policies that essentially say, unless you have permission from a professor to use AI tools, using them will be considered a violation of the academic integrity code,” Dailey says. “That then gives faculty broad latitude to use it in their teaching or in their assignments, as long as they are stipulating explicitly that they are allowing it.”

As for ChatGPT, the program agrees. “Advances in fields such as artificial intelligence are expected to drive significant innovation in the coming years,” it says, when asked how schools can combat academic dishonesty. “Schools should constantly review and update their academic honor codes as technology evolves to ensure they are addressing the current ways in which technology is being used in academic settings.”

But a bot would say that.