The education industry is freaking out about AI. Some institutions have imposed a blanket ban on ChatGPT; completing an assignment using AI is considered plagiarism.
To enforce these rules, institutions turn to AI detectors, which use statistical patterns to identify machine-produced text. These tools are so unreliable that OpenAI, the creator of ChatGPT, withdrew its own detection product because of its inaccuracy.
Students who learn how to use AI will have an advantage over those who choose not to.
American academic Ethan Zuckerman estimates that a third of his students have disclosed their use of AI. Their work was not measurably better or worse than that of students who did not disclose AI use, which isn't all that surprising.
A new study by a team of business school professors working with Boston Consulting Group found that AI had a levelling effect on many tasks: bottom performers significantly improved their output, while top performers gained only a little.
I encourage students to use AI – but then to shape, polish and rewrite the content, connect it to their own knowledge, and sprinkle it with their own insights and examples. AI is superb for experimentation – using ChatGPT as a brainstorming partner, for example. In my experience, AI engines give competent but uninspiring answers.
The surprising and exciting ideas in essay homework are more likely to have come from the students themselves.
Students often express concerns about becoming too dependent on technology. We have all learnt to use and rely on sat-nav; many of us can no longer navigate unfamiliar places the way our parents did, from landmarks and memory. It's possible that students will end up similarly dependent on AI. Yet we have no desire to return to navigating with paper maps.
Computer programming students using generative AI as a 'co-pilot' say it helps automate the tedious and repetitive parts of writing code. These students are vastly more productive than those writing code entirely by hand, because they have good coding skills to start with and can detect when their AI co-pilot makes errors.
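To make that point concrete, here is a minimal, hypothetical sketch – the function and the scenario are my own, not from the article – of the kind of plausible-looking suggestion a co-pilot might produce, alongside the subtle fix an experienced programmer would make on review.

```python
# A hypothetical co-pilot-style suggestion for a moving average.
# It looks plausible, but it divides by the full window size even at
# the start of the list, where fewer than `window` values exist.
def moving_average_suggested(values, window):
    return [sum(values[max(0, i - window + 1):i + 1]) / window
            for i in range(len(values))]

# The corrected version divides by the number of values actually
# summed - the kind of quiet bug only a skilled reviewer catches.
def moving_average_fixed(values, window):
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

print(moving_average_suggested([2, 4, 6], 3))  # [0.666..., 2.0, 4.0] (wrong at the start)
print(moving_average_fixed([2, 4, 6], 3))      # [2.0, 3.0, 4.0]
```

The flawed version never crashes and usually looks right, which is exactly why a beginner is unlikely to question it.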
The problem is that AI co-pilots can pass most of the assignments in an introductory course. That means we might not be able to train competent programmers in the first place: beginners will be able to skip the frustrating, challenging work of learning to code that is a necessary part of developing expertise.
We should be less worried about students gaming the system than about losing their trust. Students are most concerned about being falsely accused of using AI, particularly when teachers rely on pattern and plagiarism detectors. Some even report receiving failing grades for AI use on assignments that they insist they completed manually.
It can help to write in a tool like Google Docs, whose version history keeps snapshots of a document as it is written, showing that it was not cut and pasted from an AI.
So, as hundreds of task forces on generative AI are set up, we should be making the case not to discourage its use, but to approach teaching from a position of collaborative discovery, trust and kindness rather than distrust.
In the next few years, every profession will need to figure out what it means to use AI effectively and ethically. Those whose job it is to prepare students for work in this new world, one we are figuring out together in real time, should not be penalising them when it is our methods of academic evaluation that need to change.

(Source: Ethan Zuckerman, Prospect magazine, 2023)
James Neophytou
