
Sherry Fukuzawa, University of Toronto Mississauga, Canada
Artificial intelligence (AI) has become the elephant in the room in every classroom, and it is an even greater concern in online courses. Unlike with in-person assessment methods, it is difficult, if not impossible, to control student reliance on AI to support (if not write) online assignments and tests. I think the debate about whether there is a place for AI in post-secondary learning is moot: we must acknowledge that students are using it. Opinion seems polarized between two options: creating assessments that incorporate AI, or finding ways to detect AI use and treat it as an academic offense. I argue that the second option is currently not achievable. My university has not endorsed any particular AI-detection software, and therefore we cannot adequately declare AI use an academic offense.

I have learned that grappling with the first option is equally fraught. I recently worked with a newly hired AI educational developer at my institution to create an online assignment in a second-year online undergraduate course in biological anthropology with over 300 students. We created an assignment requiring students to critique AI-generated essays on course material. Students were given a range of AI-generated topics derived from the course lectures. They were asked to choose one topic and use an AI software program (e.g., CoPilot) to generate a 4-5 page essay with scholarly references from the last five years. They were then required to critique the AI-generated essay, specifically referencing course resources (lectures, readings, podcasts, and TED Talks). Using the insights from this critique, they were asked to revise their AI prompt to generate a second AI essay and then critique the revised essay. Finally, students reflected on the limitations of AI in relation to their topic, as well as on the use of AI in post-secondary education in general.
The challenges of this assignment were greater than I originally anticipated.
- We needed enough topics to discourage cheating amongst students, while not producing so many diverse AI-generated essays that our limited number of teaching assistants could not grade the assignment consistently.
- We needed to require students to reference a variety of course resources, specifically individual course lectures, to discourage them from using AI to critique the AI-generated essays. (Yes, that is actually possible!?!)
- Most importantly, we needed to guide students in what constitutes an appropriate scholarly critique. To address this issue, we had students critique a series of podcasts throughout the term, specifically referencing the corresponding week’s lecture. We also posted an example assignment to encourage students to think critically about the AI-generated essays. For example, encouraging students to search for and read the supposedly “scholarly” references in the AI-generated essays led them to realize that the references were often misrepresented or did not exist at all.
The learning objective was to develop students’ critical thinking skills in relation to AI-generated material. However, I am not convinced that we achieved this goal. From the assignment’s inception I kept thinking, “Am I inadvertently teaching students how to generate AI essays to cheat in other courses?” This question continues to weigh on me, and I am still pondering the success of the assignment. I have attended several “Using AI in the classroom” workshops, and I remain conflicted about how to incorporate AI effectively and efficiently in a large online anthropology course.