Low-tech ideas for checking AI-written student work


There is a mountain of stories, analyses and reports about the potential for students to use AI, like ChatGPT, to respond to assessments. Some high-tech tools, like ZeroGPT, are already available to check for AI involvement. But there are also some low-tech ideas for checking AI-written student work and limiting its use. Some are good teaching practices. Others require cutting and pasting into a search engine but can help. None of these approaches is a ‘silver bullet’, but they might prompt you to ask some more questions about an assessment.

Experiment with AI

If you haven’t already played with some AI engines, give them a go. You’ll discover that they have a particular pattern in how they write. It might not be apparent the first time you see it, but inputting different questions produces a similar repetitive structure and style.

If you’re considering setting a short answer or essay assessment, put the questions into an AI generator. Just as general experimenting gives you a feel for what these tools produce, testing your own questions gives you an example of what they might create if one of your students uses them. You’ll also have a rough example of what to look for when grading.

You might also like to ask your students to experiment with AI. In a seminar activity, I asked students to put an essay question into AI to see what would happen. I also got the students to critique the response against the marking guide. The activity had two objectives. First, it got them familiar with the assessment grading and expectations. Second, it got them to engage critically with the quality of the work AI produces and how their own work could be much better.

Get to know your students

This is just good teaching practice, but getting to know your students is a low-tech idea for checking AI-written student work. It lets you get a feel for how they think, speak and write. Anecdotally, we know that students change their ‘voice’ in some types of assessment because of assumptions about academic writing. But it doesn’t change their expression dramatically. Finding that a piece of work doesn’t ‘sound’ like the student might prompt you to ask more questions.

Similarly, trust your experience. You have probably read hundreds of pieces of student writing from the same unit or year level. Some are authentically brilliant and exceed expectations for that cohort. But you probably also have a ‘gut feeling’ for when something is an outlier in its structure, style or expression.

Take the opportunity to agree on expectations with students. I admit this might sound a little naive but, as I have discussed in another post, taking the time to discuss their expectations of the unit, of you, and of each other can build trust in the classroom.

Set different types of assessment

Arguably, using AI to respond to assessment is the same as any other type of ‘cheating’. In another post, I wrote about how to minimise academic misconduct. The same kinds of considerations would apply here as well.

Make sure that, where you can, assessments are varied, authentic and equally weighted.

Give opportunities to practise 

My favourite source for teaching ideas at the moment is Small Teaching: Everyday Lessons from the Science of Learning. It has a series of ideas about how to introduce small, assessment-linked tasks into your tertiary or college teaching. The benefit of these small tasks is that they enhance students’ ability to remember content. They also allow students to practise assessment tasks in zero- or low-stakes settings.

I also appreciate how this kind of practice contributes to student confidence. We know a little more about assessment conditions that encourage cheating. But there isn’t much empirical research about why students cheat. Anecdotally, a lack of confidence might drive students to look for tools to help them complete an assessment.

If we can create opportunities for students to practise the types of assessment that they could use AI for, it may discourage its use in favour of a more confident, authentic submission.

Be curious about the references

One thing that we know AI can’t currently do well is produce references for its work. The best-known example is a lawyer in the United States getting caught with non-existent case citations in their submissions. Alarmingly, when ChatGPT was challenged about the references, it attempted to defend them.

Checking a random reference by cutting and pasting it into a search engine is a low-tech idea for checking AI-written student work. Even if the check doesn’t expose an AI-written piece of work, it might still prompt some questions about the accuracy of the research.
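If a reference list happens to include DOIs, the cut-and-paste check can be made slightly less tedious. A minimal sketch of the idea, assuming references in the common DOI format (the function name, DOI pattern and sample citation below are illustrative, not taken from any real submission):

```python
import re

# Pattern for a DOI (Digital Object Identifier) in its common 10.xxxx/... form.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def doi_check_urls(reference: str) -> list[str]:
    """Extract any DOIs from a reference string and return doi.org
    resolver URLs to paste into a browser. A DOI that fails to
    resolve is a red flag for a fabricated citation."""
    dois = DOI_PATTERN.findall(reference)
    # Strip trailing punctuation that often follows a DOI in a reference list.
    dois = [d.rstrip('.,;') for d in dois]
    return [f"https://doi.org/{d}" for d in dois]

# Example: a reference line copied from a submission (hypothetical citation).
ref = "Smith, J. (2021) 'Assessment design', J. Legal Ed., doi: 10.1234/jle.2021.567."
print(doi_check_urls(ref))
# → ['https://doi.org/10.1234/jle.2021.567']
```

This only builds the links; you still open them yourself, which keeps the check low-tech and keeps a human making the judgement about whether something looks wrong.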

Good teaching will get you a long way

You may have realised that some of these low-tech ideas for checking AI-written student work are good teaching practices. Learning about the technology, building relationships with students, and creating engaging assessments are not new ideas. But they might just discourage some students from engaging in misconduct.

 

About Andrew Henderson

I am a lecturer, lawyer and researcher in Canberra with an interest in the 'hidden' or informal curriculum of law school. I am passionate about developing engaging and authentic educational experiences for law students.
