Probably the best way to guard against inappropriate use of AI-generated text is to redesign your assignments, both the prompts themselves and the related processes. These options vary in their usefulness across contexts, but consider these ideas as starting points:
The first step in guarding against AI-generated responses is to avoid prompts with specific, factual answers. ChatGPT, however, still does a surprisingly good job with higher-order questions (analysis, synthesis, etc.), and it can be quite creative (see the viral "PB&J in a VCR" example). So aim for assignments that call for more complex cognitive skills, and then layer on some of the other techniques below.
Tie writing prompts to unique or fictional cases or scenarios in your class, particularly if those cases build over time and draw on in-class activities or group work. Relying on in-class activities as the basis for assignments denies the AI the necessary information, and feeding it all of that context would be time-consuming for students. If you use this approach, have an alternate assignment ready for students who cannot attend class for medical or other legitimate reasons.
Giving an assignment as one big chunk can create pressures that sometimes drive students to cheat, while breaking it into smaller pieces can reduce both those pressures and reliance on AI. Consider breaking larger assignments into multiple stages, giving feedback and grades at each stage, and perhaps incorporating peer feedback. This helps in several ways: 1) it reduces the pressure to cheat that comes from procrastination and feeling lost on a big, high-stakes assignment; 2) it gives you some sense of each student's writing style along the way, especially if in-class writing is added to the mix; and 3) it leads to better learning and writing in general.
This can happen anywhere in the writing process: early idea development, syntheses of in-class activities that will feed into the project, or reflections on the work and process. Aside from being a valuable way to teach writing in your discipline, in-class writing provides a baseline of each student's style that can help identify submissions that are not the student's original work. Those of us who have dealt with plagiarism know that such students rarely know the submitted work well, nor can they describe their writing process. Still, treat in-class writing as a learning opportunity, not just a policing tool.
Sure, students can ask ChatGPT to do this—there are already examples of AI generating passable "personal" college admission essays—but adding a personal element like this might reduce the likelihood that students will turn to AI, especially if you have also built personal relationships with them. In general, anonymity makes cheating easier, both functionally and psychologically.
When it first launched, ChatGPT drew on a database that only went through September 2021. As of April 2023, OpenAI was experimenting with developer plug-ins that allow ChatGPT to access some live web content, and that access will likely only grow over time. Other applications, such as Bard and Bing, are built into web search tools, so they certainly will have access to current online content. Consider keeping track of how much access these tools have to recent materials related to your assignments, particularly discipline-specific information, pre-print scholarship, or items behind paywalls. A student could feasibly feed more recent information into the AI, but relying on very recent sources may remain a valuable approach, especially when layered with other strategies.
Consider incorporating videos, guest speakers, or other sources that would not be available online. We've seen assignments that ChatGPT seems to address well until they include phrases like, "Using evidence from the video we viewed in class..." or "Based upon the case described during the guest lecture on February 21st ...."
For now, ChatGPT accepts only text input, so asking students to respond to a unique image or diagram leaves the AI unable to answer the question. ChatGPT has solved some fairly challenging application/diagnostic questions in biochemistry, for example, but it cannot currently view and interpret an image of a chemical reaction or cell. Likewise, asking for student responses in the form of a diagram or image takes AI text generators out of the mix. One caveat: the emerging GPT-4 model can accept some image inputs, but it cannot yet analyze charts or diagrams.
Consider other ways that students could demonstrate their knowledge and mastery of learning outcomes, including “performative tasks.” This is a useful practice in general, aligning well with precepts of Universal Design for Learning, but it also avoids AI text generators altogether, or at least relegates them to a supporting role. Could students develop a video, podcast, drawing, or vlog (video blog) to demonstrate their knowledge? Remember that ChatGPT can be very creative in text, so asking for alternate outputs is the key here.
Curious how well AI handles your assignments? Try running them through the tool yourself to see both how well it does and what markers of its work you notice. If it produces solid answers, you may want to keep revising the assignment.