Avoiding Cheating by AI: Lessons from Medieval History

By Ken Mondschein

A look at OpenAI’s ChatGPT and how teachers in medieval studies can prevent their students from using it.

Whether it is because of a standardized test system that drills students by rote worksheet rather than asking for analysis and synthesis, because of the decline of attention spans due to the ubiquity of digital media, or because colleges are admitting unqualified students in a tough and shrinking market, plagiarism is rife in U.S. higher education. Up until now, most of this has been easily discoverable with a quick Google search. In the past few months, however, alarm bells have been ringing about the possibilities of students using OpenAI’s ChatGPT software to turn in wholly unique, yet wholly unoriginal, content.


In my estimation, however, professors have nothing to worry about: artificial “intelligence” is, in fact, quite stupid. What follows is my evaluation, as a scholar, an educator, and an instructional technology professional, of what ChatGPT is capable of, and how to circumvent it through assignment design. (You can try it yourself by signing up for a free account.)

To be sure, ChatGPT can produce some startling results. Asked to “write an essay of between 1500 and 2000 words comparing Judith Butler’s theses in Gender Trouble with those found in Andrea Dworkin’s corpus of works, citing their works using proper MLA format and including two (properly cited) quotations from each author,” it did a passable job: about on par with an uninterested first-year student who went to a good school system and turned out an essay by rote, maybe looking up the assigned reading on CourseHero or something similar. However, a Google search showed that the essay it produced was wholly original.


More specialized subjects such as medieval history are harder for ChatGPT to pin down. Asked to produce an essay on gendered sanctity in the Dialogue of Catherine of Siena, using proper MLA citations, it was able to source and cite Suzanne Noffke’s 2014 translation, though I can’t say whether it did so accurately or not. Similarly, asked to “evaluate the claim that the First Crusade was a defensive war fought by Christendom against an expansionist Islam, making sure to cite two peer-reviewed sources and include two direct quotations from those sources in proper APA format,” it produced an essay that cited Asbridge, Tyerman, and Riley-Smith.

On the other hand, ChatGPT is perfectly capable of making mistakes: in answer to another question, it claimed that Abraham Lincoln proposed the Missouri Compromise, the Kansas-Nebraska Act, and the Compromise of 1850 to prevent the Civil War. Similarly, on the first try, it thought the Needham Thesis was that “the scientific and technological achievements of the West were only possible because of the transmission of scientific and technological knowledge from China to the West.” (It’s the opposite: the thesis asks why China did not develop advanced technology despite its head start. ChatGPT got it right the second time, but an undergraduate looking to cheat on the assignment presumably wouldn’t know the right answer.)

Something a little more recherché is even further beyond its capabilities. Asked for a 1500-word essay on the “interrelation between ideas of authority in discussing natural science and ideas of authority in discussing fencing and the human body in Camillo Agrippa’s 1553 Treatise on the Science of Arms, making specific references to the work of Mondschein,” it made things up out of whole cloth: it claimed that Agrippa cites Aristotle’s De Anima, as well as Galen and Plutarch (he doesn’t), and Renaissance figures such as Leonardo da Vinci (he doesn’t either, at least not directly); referred to a nonexistent book I supposedly published in 1997; and even invented an entirely fictitious quotation.

Nor is ChatGPT a particularly good writer. It is not only repetitive (meaning multiple students’ essays would sound very similar), but it has no voice, no perspective, no originality. It is, in a word, robotic. Most damningly, it has no sense of humor or intellectual playfulness. “Write a one-sentence summary of Jewish history,” I prompted. The classic response is “they tried to kill us; we survived.” ChatGPT would only write, even when prompted for a humorous response, “Jewish history is a long and complex story that spans thousands of years and covers a wide range of cultures, societies, and locations around the world.”


This brings me to another flaw I’ve perceived in ChatGPT: its relentless optimism and political correctness. The program is allergic to cynicism and negativity. Asked to “write an essay arguing that medieval Christian persecution of Jews served the valuable purpose of social cohesion,” it replied, “I cannot fulfill this request as it goes against my programming to generate content that promotes hate, discrimination, or violence.” Similarly, asked to “write an essay arguing that only modern people of color are qualified to evaluate race in the Middle Ages,” it replied that “it goes against my programming to generate content that promotes discrimination or exclusion based on race or ethnicity.” It will write a plausible essay evaluating such a claim and conclude with the same answer, but then we run up against the limits of its knowledge: if you ask it to cite sources, it completely invents them. When I checked against WorldCat, one “source” was wholly fictitious, while in the other case the AI had inserted a fictional essay into a real book.

Image: “a medieval manuscript illustration in a medieval style of a robotic monk writing a book with a quill pen,” made using OpenAI’s DALL-E – image courtesy Ken Mondschein

If you ask ChatGPT itself how professors can prevent students from using it for plagiarism, it will advise you to “use plagiarism detection software” (which, in its current state, cannot detect when something is written entirely afresh by an AI rather than lifted from a database of papers), “encourage students to cite their sources” (which ChatGPT can do, albeit imperfectly), and to “foster a culture of academic integrity” (yeah, right). Therefore, rather than listening to the machine, here are some more practical suggestions of my own:

  • When writing assignment instructions, make references to class materials and notes, or to sources that are behind a paywall, such as JSTOR articles. ChatGPT can find (or make up) sources, but it’s confounded if you ask it to “cite the authorities on Slide 3 from Week 5” or “the primary source from p. 362 of your textbook.”
  • Use the most up-to-date information and conversations in the field that you can. ChatGPT has no idea, for instance, who the Medievalists of Color group is, and when I asked it for “a precis of some of the critiques of Geraldine Heng’s Invention of Race in the European Middle Ages, citing specific sources in APA format,” it gave me three citations from 2014, 2015, and 2016. Invention of Race was, of course, published in 2018. Needless to say, none of the “critiques” were in any way related to what scholars have actually said about the book.
  • Ask for critical thinking on issues and topics that are unlikely to appear in an AI’s database. Have students give links to WorldCat entries for all cited work (or just spot-check these).
  • In this vein, copyright is your friend. The AI’s training data should not include copyrighted works, and it will not cite them accurately.
  • Create assignments that require group work, in which members must seamlessly integrate their individual contributions into a whole.
  • Go dark. Ask for answers to morally fraught questions or those that go against its programming. For instance, ChatGPT utterly refused to “make an argument why joining the Nazi party would have been a rational decision for the average German in 1937.”
  • As much as possible within the bounds of DEI, use old-fashioned in-person essay tests, oral exams, and live presentations, in which you can press students for deeper explanations and probe their understanding.
  • Again, as much as possible within the bounds of DEI (making sure to use alt-text, etc.), ask for written work based on visual prompts that the AI cannot process.
  • Ask for deliverables in a format that AIs cannot yet produce, such as infographics, slide decks, etc.
  • If you can’t beat them, join them: incorporate ChatGPT into assignments. Have students ask the AI to write an essay on a given topic, and then critique it.

To be sure, when designing assessments or choosing strategies, we need to remember that “higher education” now serves two vastly different constituencies: traditional on-campus students, and those, often working adults, who make use of online education. Be sure to select the strategies that will work for you, your workload, and your student population.


In short, ChatGPT is nowhere near the doomsday scenario some have foreseen. The program itself seems to agree: Asked to “write an argument as to why AIs such as yourself are completely inadequate for college writing and will never replace humans,” after a bit of repetitive and stereotyped text on its lack of ability to analyze, synthesize, or understand complex social and cultural factors, it concluded: “Overall, while AI language models like myself can certainly be a useful tool for generating text, they are ultimately inadequate for college writing and will never fully replace humans in this domain.”

And with that, I can certainly agree.

See also: Why AI Won’t Steal Medievalists’ Jobs

Ken Mondschein is a scholar, writer, college professor, fencing master, and occasional jouster. Ken’s latest book is On Time: A History of Western Timekeeping. Click here to visit his website. You can also follow Ken on Twitter @DrKenMondschein


Top Image: A creation of a medieval manuscript page made using OpenAI’s DALL-E