I’ve been using AI to add images to my blog since June of 2022, when I discovered AI-generated art via DALL•E. I don’t credit it; I just use it, and I find it much easier to generate an image than to find royalty-free alternatives. I haven’t yet used AI as a writing or editing tool on my blog. While I’m sure it would make my writing better, I write to write, and I usually do so early in the morning with limited time.
I already have to limit the time I spend creating an image; if I also had to use AI to edit and revise my work, I’d probably only have 15-20 minutes to write… and I write to write, not to have an AI write or edit for me. That said, I’m not disparaging anyone who uses AI to edit. I think it’s useful and will sometimes use it on emails; I simply don’t want that to be how I spend my (limited) writing time.
I really like the way Chris Kennedy both uses AI and also credits it on his blog. For example, in his recent post, ‘Could AI Reduce Student Technology Use?’ Chris ends with a disclosure: “For this post, I used several AI tools (Chat GPT, Claude, Magic School) as feedback helpers to refine my thinking and assist in the editing process.”
On a related side note, I commented on that post:
The magic sauce lies in this part of your post:
“AI won’t automatically shift the focus to human connection—we have to intentionally design learning environments that prioritize it. This involves rethinking instruction, supporting teachers, and ensuring that we use AI as a tool to enhance, not replace, the human elements of education.”
A simple example: I think about the time my teachers spend making students think about formatting their PowerPoint slides, about colour palettes, themes, aesthetics, and of course messaging… and I wonder what students lose in presentation preparation when AI just pumps out a slide, or even a whole presentation, for them?
“Enhance but not replace,” this is the key, and yet this post really strikes a chord with me because the focus is not just the learning but the human connection, and I think if that is the focus it doesn’t matter if the use of technology is more, less, or the same, what matters is that the activities we do enrich how we engage with each other in the learning.
Take the time to read Chris’ post. He is really thinking deeply about how to use AI effectively in classrooms.
However, I’m thinking about the reality that it is a lot harder today to know when a student is using AI to avoid thinking and working. Actually, it’s not just about work avoidance; it’s also about chasing marks. Admission to university has gotten significantly more competitive, and students care a lot about getting an extra 2-5% in their courses because that difference could mean getting into their university of choice or not. So the incentives are high… and AI use is getting a lot harder to detect.
Yes, there are AI detectors we can use, but I could write a complex sentence in three different ways, put each into an AI detector, and one version could come back ‘Not AI’, another could show a 50% chance it was written by AI, and the third might show an 80% chance of AI… all written by me. Twenty years ago, I’d read a complex sentence written in my Grade 8 English class and think, ‘That’s not this kid’s work’. So I’d put the sentence in quotes in the Google search bar and out would pop the source. When AI is generating the text, detection is nowhere near as simple.
Case in point: ‘The Backlash Against AI Accusations’ and, shared in that post, ‘She lost her scholarship over an AI allegation — and it impacted her mental health’. And while I can remember the craze about making assignments ‘Google proof’ by asking questions that couldn’t easily be answered with a Google search, it is getting significantly harder to create an ‘AI proof’ assessment… and I’d argue it is getting harder by the day with AI advances.
Essentially, it comes down to a simple set of questions students face: Do you want to learn this? Do you want to formulate your own ideas and improve your thinking? Or do you just want AI to do it for you? The challenge is that if a kid doesn’t care, or cares more about their mark than their learning, it’s going to be hard to prove they used AI even if you believe they did.
Are there ways to catch students? Yes. But for every example I can think of, I can also think of ways to avoid detection. Here is one example: Microsoft Word documents have version tracking. As a teacher, I can look at the versions and see large swaths of cut-and-pasted writing to ‘prove’ the student is cheating. However, a student could say, “I wrote that part on my phone and sent it to myself to add to the essay”. Or a savvy student could use AI but type the work in rather than pasting it. All this to say that if a kid really wants to use AI, in many cases they can get away with it.
So what’s the best way to battle this? I’m not sure. What I do know is that a policing-and-detecting approach is a losing battle. Here are my ‘simple to say’ but ‘not so simple to execute’ ideas:
- The final product matters less than the process. Have ideation, drafts, and discussions count towards the final grade.
- Foster collaboration: have components of the work depend on other students’ input. Examples include interviews, or reflections on work presented in class, where context matters.
- Inject appropriate use of AI into an assignment, so that students learn to use it appropriately and effectively.
Will this prevent inappropriate AI use? No, but it will make the effort of using AI almost as hard as just doing the work. In the end, if a kid wants to use AI, it will be harder and harder to detect, so the best strategy is to create assignments that are engaging and fun to do, and that also meet the required learning objectives… Again, easier said than done.