Reverse detail from Kakelbont MS 1, a fifteenth-century French Psalter. This image is in the public domain. Daniel Paul O'Donnell


Policy on the use of Generative Artificial Intelligence (AI) such as ChatGPT

Posted: Sep 05, 2023 11:09;
Last Modified: Jan 02, 2024 17:01


What are chatbots?

“Chatbots” such as OpenAI’s ChatGPT represent a new kind of aid for research and writing. They combine the power of a search engine with the ability to produce human-sounding text, based on “large language models.”

They are particularly powerful at synthesising search results, mimicking specific styles of writing, and editing, each of which has a place in the research workflow. They are also, at the moment, extremely prone to producing serious errors (known as “hallucinations”) when asked to do complex tasks: inventing events, people, books, and quotations while sounding extremely authoritative.

Can they be used for research?

I use such bots (e.g. Bing and ChatGPT) in my own research and writing, and I assume that many students will find them useful as well. At the same time, I am responsible for the content of my research and writing, meaning I must ensure that no hallucinations enter my final drafts through errors on the part of any tool or resource. The same is true for students.

What is an appropriate use of chatbots?

Exactly what use of AI is appropriate and what use is inappropriate in an undergraduate assignment is still a topic for discussion in the larger world. At one extreme, for example, I imagine few people would argue that an essay produced by ChatGPT in response to the prompt “Write me an essay on Beowulf” would be any less cheating than buying an essay from an essay site. But at the other, we do not generally forbid students from using writing tools such as spelling and grammar checkers (both of which involve a different kind of AI application and, in the case of grammar checkers, can result in AI detectors returning a positive score). So what if you ask AI to reorganise a paragraph for you, or to reduce the word count of something that is too long? Or what if you ask a chatbot to report what is known about a particular subject (beware: this is a place where hallucinations are frequent)?

What are your rules?

Given the current uncertainty as to best practice in the use of generative AI in student work, my policy at the moment is to rely on the requirement that all work in my classes is submitted in good faith as the product of the student’s own intellectual efforts — i.e. the same rules I have been using with regard to all my assignments for several years. This continues to mean that you remain responsible for everything you submit, regardless of the tools you used to produce it.

If you are in any doubt as to whether a specific use is appropriate, please feel free to ask.

Can you tell if I use a chatbot?

Without special instructions, text produced by chatbots is currently not too difficult to recognise in student writing, for a variety of reasons having to do with how the bots are trained and how the technology works. AI detectors also do a reasonably good job of assessing the likelihood that a given piece of writing was generated by AI.

The university subscribes to Turnitin, which has an AI detector. I also use GPTZero and other tools for assessing whether or not work has been written by AI. Student work that appears to use AI beyond what seems appropriate may be penalised or reported to the dean (subject to all the protections and processes found in the Academic Calendar).

Can my writing be flagged incorrectly as AI-generated?

For most people (especially writers working in their first language), the likelihood that a piece of writing you produced without any computational assistance will be falsely flagged as AI-generated is currently relatively low. This is because humans don’t compose like large language model text generators (or rather, LLM generators don’t compose like humans), and human-generated text is really quite different from LLM-generated text that hasn’t been specially manipulated. It is very possible that small sections of your text will be flagged as potentially AI-generated (particularly at the beginnings and endings of texts and paragraphs, where we all write most formulaically). But unless something strange is going on, you should come nowhere near the threshold at which a detector decides the text was AI-generated.

The likelihood that your writing will be flagged as AI-generated goes up, however, if you are a second-language speaker (this has a slight effect) or if you use computational tools such as grammar checkers in your work. This is because these tools are themselves increasingly built on the same AI technology and, as a result, show the same signs. If you use a grammar checker, the flagging should still be relatively sporadic (i.e. corresponding to some of the places where you accepted the checker’s suggestions) — unless, of course, your grammar was so bad that everything was fixed. If you use other tools (e.g. editors or summarisers), the likelihood goes up further, as these rely more heavily on AI to generate your text.

The best way to protect yourself, if you are worried about this, is to keep a version history of your writing. This is still a new area, however, so you may have to be creative.
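If you work in plain text (or can export to it), an ordinary version-control tool such as git can provide exactly this kind of trail. This is only a minimal sketch, assuming git is installed; the file name, identity, and commit messages are hypothetical:

```shell
# Create a folder for the essay and start tracking it with git
mkdir essay-drafts && cd essay-drafts
git init -q
git config user.name "Student Name"            # hypothetical identity, local to this repo
git config user.email "student@example.com"

# Save a snapshot after each writing session
echo "First draft of my essay..." > essay.txt  # stand-in for your real document
git add essay.txt
git commit -q -m "First full draft, before any editing tools"

echo "Revised draft of my essay..." > essay.txt
git add essay.txt
git commit -q -m "Revised introduction by hand"

git log --oneline                              # timestamped trail of your own revisions
```

If you write in a word processor instead, built-in features such as Google Docs’ version history or Word’s tracked changes can serve the same purpose.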




