02 Feb '23
Academic integrity guidelines re: ChatGPT and generative AI tools
Here in Australia Semester 1 is approaching and ChatGPT is, undoubtedly, a thing. From a practical perspective it’s important to have some sort of guidelines about ChatGPT and other generative AI tools in the classroom (everyone’s doing it). If you don’t provide any advice ahead of time you’ll end up making it up as you go along (because it will come up), and policy on the run is policy underdone.
In terms of the opportunities to incorporate, explore and critique new tools in the classroom, I’m actually kind of excited. I’ve written elsewhere about how you might be able to do this. The tl;dr is that you should look at your class activities (including assessment items) and try actually doing them with ChatGPT, just to see how it goes. OpenAI have also just released some guidance on considerations for educators.
At this stage these are just my own personal thoughts as a teaching academic; I’m not wearing my Associate Director (Education) hat here, and this is not an official (or unofficial) ANU School of Cybernetics policy. If you’re also putting together some guidelines for your own classroom, then questions/comments/suggestions are welcome—do get in touch.
Here are my current thoughts on what some good “use of ChatGPT in the classroom” guidelines might look like. Not everything is precisely defined, but it gives you an idea of how I want to run my classes, balancing the opportunities and challenges these tools present for student learning.
- Unless otherwise specified, you are allowed to use ChatGPT¹ in this class, including in work submitted for assessment.
- Wherever ChatGPT is used, it must be cited according to the OpenAI citation instructions.²
- You are responsible for everything you submit. “It’s not my fault—the AI-generated text introduced non-sequiturs/errors/plagiarised text/offensive language” will never get you off the hook; if it’s in your submission you’re responsible for it, just as you would be if you’d written it without ChatGPT.
- You are expected to be able to explain (to your tutor, lecturer or course convenor) any assessment submission to demonstrate your understanding of the concepts being assessed.
- Any violations of the above will be considered a potential breach of academic integrity under clause 9 of section 12(2) of the ANU Academic Integrity Rule: “improperly recycles work or otherwise improperly submits or publishes work that is not original” (note: I’m unsure which clause is best to use here—it could be clause 8, or one of the others).
- No “is-this-written-by-an-AI?” detection tools (e.g. this) will be used as part of the marking process.
One open question (not necessarily part of the student-facing guidelines, but relevant for anyone running a course) is what guidance should be given to the markers (e.g. tutors/TAs) on what to do when marking ChatGPT-generated content. Should submissions created with the help of ChatGPT be marked lower than “equivalent standard” (whatever that means) submissions that aren’t?
Anyway, these are just some draft thoughts—I’ll keep this post updated as my thinking changes.
Appendix: ANU Academic Integrity Rule
From the ANU Academic Integrity Rule 2021, Section 12(2), here’s the list of what constitutes a breach of the academic integrity principle.
(2) For this instrument, a student breaches the academic integrity principle if, in scholarly practice, the student:
- cheats; or
- impersonates another person; or
- engages in plagiarism; or
- colludes with another person; or
- improperly shares material with another person; or
- engages in contract cheating or improperly engages another person to prepare, or assist in preparing, work for the student; or
- submits or publishes anything that fails to correctly or appropriately acknowledge the work of another person or otherwise improperly appropriates the intellectual property or contribution of another person; or
- otherwise passes off the work of another person as the student’s own work; or
- improperly recycles work or otherwise improperly submits or publishes work that is not original; or
- takes a prohibited item into an examination or other assessment venue or otherwise breaches the University’s directions (however described) in relation to an examination or other assessment; or
- fabricates or falsifies any document, data or other information, or anything else, including, for example, by intentionally omitting data to obtain a desired result, or by falsely representing observations as genuinely held; or
- otherwise intentionally or recklessly engages in conduct:
  - that impedes the progress of research; or
  - that risks corrupting research records or compromising the integrity of research practices; or
  - that uses research data from another person without appropriate acknowledgement; or
  - that breaches a research protocol approved by a research ethics committee or a statutory licence condition applying to research; or
- otherwise engages in conduct with the intention of gaining, or assisting another person to gain, an unethical, dishonest, unfair or unjustified advantage; or
- otherwise engages in conduct, or assists another person to engage in conduct, that is unethical, dishonest or unfair; or
- engages in any other conduct declared to be academic misconduct by the orders.
My commentary on the above (and IANAL) is that none of those points really captures the specific case of “ChatGPT wrote this essay, not the student”, in particular because so many of the definitions reference the work “of another person”. I’m sure this language will be updated in the future in light of the widespread availability of generative AI tools.
1. Wherever ChatGPT is named in these guidelines it should be read as “ChatGPT and other generative AI tools”, where those tools are defined according to (ERROR: definition not found). Any guidelines which restrict themselves to specific AI tools by name are doomed to become out of date real fast.
2. These guidelines deliberately don’t try to address the (important) issue of AI tools and the way they appropriate the skilled labour of the millions of individuals who created, edited and labelled the data on which they were trained.