Reverse detail from Kakelbont MS 1, a fifteenth-century French Psalter. This image is in the public domain. Daniel Paul O'Donnell


Policy on the use of Generative Artificial Intelligence (AI) such as ChatGPT

Posted: Sep 05, 2023 11:09;
Last Modified: Sep 07, 2025 21:09

---


What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is a term we use for computer programs that try to do tasks that in the past only human beings could do reliably: writing, editing, finding patterns, or making decisions.

Students usually encounter AI in two ways. The first is through writing tools such as grammar checkers built into word processors and apps like Grammarly. The second is through chatbots such as ChatGPT, Copilot, or Gemini that can create text, code, images, or music in response to prompts.

These are both relatively new. The first widely available chatbot, ChatGPT, was released in November 2022. But the impact of these tools has been immense. Both kinds are now widely used in business, academia, and fields like law and healthcare to support research, writing, editing, coding, data analysis, brainstorming, and more. The day is rapidly coming (if it isn’t here already) when a knowledge of how to use AI appropriately and ethically will be expected of every university graduate.

AI tools are also controversial, however. In some fields, their use is required or widely expected and already integrated into daily practice. In others, they are viewed with suspicion, and people are still deciding how — or whether — they should be used. The problem is especially acute in education, where, with the right prompts, chatbots can produce work that could pass for something written by an undergraduate. They also, we should never forget, carry an immense environmental cost: a search on ChatGPT can involve tens of times more energy than the same search on a non-AI-enabled search platform.

This document sets out my policies on the use of AI in assignments. Under the new Student Code of Conduct policy, all students submitting coursework to me are required to be familiar with and follow them. They are provisional and may change, however. We are still feeling our way forward in English studies when it comes to AI, and there is not yet a clear consensus. If you are unsure whether you are breaking the rules I lay out here, please talk to me. I am also willing to grant exceptions to students who want to experiment with AI-based tools, provided they speak with me in advance.

The basic rules

All assignments should be submitted in good faith

The first rule of all coursework in my classes is that it should be submitted in good faith. That means the student submitting the work:

  1. takes responsibility for its completeness and accuracy; and
  2. certifies that there is nothing misleading about its presentation, content, or origins.

I assign work because I believe it will help you progress in your education. When I grade work, I am assessing how well you have done that work and providing advice on how you can improve. For this to succeed, the work must represent a good-faith effort to complete the assignment (meaning it is by you and is completed to the best of your ability under the circumstances). If it isn’t a good-faith effort, you are wasting my time and your tuition, regardless of whether or not I find out.

All education and assessment depend on good-faith effort. AI doesn’t change this basic fact.

Do not represent work produced by AI as your own

The main concern with AI in education is that it allows students to submit work they did not write as their own. While AI makes this easier, it did not create the problem: plagiarism has always existed.

What has changed is that chatbots can now produce work that looks good enough to pass — though often with errors or even invented material — cheaply and easily. Before chatbots, getting somebody else to write your essays or do your coding meant finding a human being who was willing to cheat with you, which carried the risk of embarrassment and exposure. Now you can ask a robot to do it without letting anybody else know.

But just as you are not allowed to hand in an essay written entirely by your older brother under your name, you are not allowed to hand in work written entirely by AI and present it as your own. Apart from being dishonest, this is risky: AI often fabricates quotations, facts, and bibliography, making such work easy to detect (ChatGPT, for example, recently told me that the Leafs won the Stanley Cup in 2023).

So if your essay was produced by a prompt such as “Write me an essay on [topic x]” (or anything similar), you should not submit it. It isn’t your work.

If you use AI, acknowledge it and document how it was used

AI bots — like siblings, friends, parents, and professors — can also be used for tasks that have always been allowed, such as brainstorming, editing, or proofreading. If you use AI in these ways, you must acknowledge it.

This follows standard academic practice. Scholars routinely include acknowledgements in their publications to thank people who gave feedback, offered advice, or provided editorial help. Increasingly, journals also require authors to declare whether and how they used AI. Sometimes this is through a general note (“AI was used in researching and editing this article”); sometimes, when the use was extensive or sensitive, authors must provide prompts or transcripts.

Students should do the same. A brief note at the end of your essay or a footnote at the end of your first sentence is usually enough (“ChatGPT was used to assist in brainstorming and proofreading. The final product is my responsibility”). If your use was more specific — e.g. generating quotations or analysing text — you should document it more carefully. Doing so both protects you from suspicion and models good scholarly practice.

As an example: I used AI in drafting this policy — for brainstorming, research, editing, and readability checks. The opening sentence was drafted with Copilot and revised by me and ChatGPT. Here’s a link to the conversation I had in ChatGPT as I was developing this policy. Copilot doesn’t have a similar feature, but can produce PDFs of the conversation on demand.

This is more than is required at the end of most of your essays, but it shows how AI can be used and documented if necessary.

Above all, be honest about what you have done

While our sense of what counts as appropriate use of AI in disciplines such as English is still developing, there will be space for confusion and misunderstanding. The best practice, therefore, is to be honest and upfront about what you are doing. If you think a use of AI (or any other resource) may be contrary to the spirit and intention of the work you have been assigned, ask in advance (if possible) and inform on submission (if not). If you have followed instructions in good faith and documented your practices openly, it is difficult to see how you could get in serious trouble with me or any other instructor.

Are there any kinds of AI use that are likely to be o.k.?

As mentioned above, humans have always asked other humans to help them with their research and writing — just as they have always asked others to author things for them. The first has always been normal and acceptable; the second has always been plagiarism in an academic context.

The same is true of AI. If you use it to write for you, then you are plagiarising. If you use it to assist you in your research and writing, then you are probably — depending on the nature of the exercise — using it reasonably (remember that instructors may ask you not to use AI at all for very good pedagogical reasons, such as learning specific skills or content; in those cases you should follow their instructions).

A good rule of thumb for distinguishing between these two uses is whether your use is iterative and engaged. That is to say, whether you are engaging in a back-and-forth with the bot or simply asking it to produce something for you. If you review the conversation I had with ChatGPT as I was developing this policy, for example, you’ll see that my use was iterative: I asked it specific questions, assigned it specific tasks, and commonly asked for changes or gave it a new version of some text. The content you are reading here has also been further revised: because I don’t trust AI to be mistake-free, because I take responsibility for my work, and because I think AI-generated texts always sound a little off, I never publish text given to me by a bot without reading and revising it word-for-word.

If you are allowed to use AI in a class, this type of use is likely to be more acceptable than simply asking AI to write something for you.

Documenting your use of AI

Students who use AI in their research and writing in my classes should be prepared to show me their process if I ask. There are three simple ways to do this:

  1. Keep track of AI conversations. Bookmark or save your chats. If asked, you should be able to provide a printout or link. If your use was particularly extensive or likely to cause confusion, you should provide a detailed description proactively: “I asked ChatGPT to find examples of female aviators in novels written during the 1930s” (perhaps including the prompt(s) used).
  2. Save drafts of your work. It is generally good practice to keep earlier versions of work you submit so that you can show where changes came from. Online tools like Google Docs or Word Online do this automatically. In Word or LibreOffice, just rename your file each time you do a major edit: Essay1_FirstDraft.docx → Essay1_SecondDraft.docx → Essay1_AiRevisions.docx.
  3. Acknowledge AI in your paper. Add a short note at the end of your paper or as a first footnote explaining how AI was used (see examples above).

I may also ask you to describe your work to me orally or use other methods to assess the nature of your contribution to an assignment.

Can you tell that I used AI if I don’t tell you?

When AI chatbots first came into popular use, their work was relatively easy to spot: there were commercial AI detectors that could identify certain traits in writing that were characteristic of such bots (such as the vocabulary they used or the mix of sentence types), and there was a certain kind of tone that experienced teachers could recognise fairly quickly in student writing.

Nowadays, this is more difficult to do. While there is still a kind of strangeness to a lot of text generated by AI bots (something that is actually causing trouble now that so much of the data they use for training is itself AI-generated), it is often much more difficult to identify a specific trait as conclusive evidence that AI was used in the preparation of a document. AI detectors are also now much less reliable, and some of the features that suggest AI in a document are also characteristic of other types of language use, such as writing by second-language learners, or by first-language writers who are neurodivergent or lack skill or confidence. Moreover, there is some evidence that AI style is starting to affect human writing.

So the short answer is that I may not be able to say, for sure, whether a text you submit to me in my class is AI-generated (though it sometimes may be possible). Given the degree to which the problems associated with AI-generated texts can have other origins, moreover, it isn’t clear to me that it is even necessary to speculate whether a text is AI-generated or not as a condition of grading.

But while I may not be able to tell whether a problem is caused by AI or something else, part of my job is to identify and help correct problems in student writing. In some cases, as part of this process, I may speculate that something has been generated by AI as a means of identifying where the issue arose and how it could be corrected. Since the use of AI in my class is not prohibited — provided it is acknowledged, documented, and not misrepresented according to the above rules — the suggestion that something may have been AI-generated is by itself not an accusation of misconduct: what I am really doing is pointing out where there are problems in your work, however they came about.

Can my writing be flagged incorrectly as AI-generated?

For most people (especially writers working in their first language), the likelihood that a piece of writing done without any computational assistance will be falsely flagged as AI-generated is currently relatively low. This is because humans do not compose like large language model text generators (or rather, LLM generators do not compose like humans) and human-generated text can be quite different from unedited AI text. It is possible that small sections of your text will be flagged as potentially AI-generated (particularly at the beginnings and endings of texts and paragraphs where we all write most formulaically). But unless something unusual is going on, you should come nowhere near the threshold at which a detector (or instructor) thinks the text was AI-generated.

The likelihood that your writing will be flagged as AI-generated goes up, however, if you are a second-language writer (this has a slight effect) or if you use computational tools such as grammar checkers in your work. This is because these tools are themselves increasingly AI-driven and can leave the same signals. If you use a grammar checker, the flagging should still be relatively sporadic, unless of course your grammar was so bad that everything was fixed. If you use other tools (e.g. editors or summarisers), the likelihood goes up further, as these rely more heavily on AI.

The best way of protecting yourself if you are worried about this is to acknowledge and document your use. That way, if there is a problem, you and your instructor will be able to diagnose and address it.
