AI prompting is the new critical thinking
Rather than outsourcing thinking to AI, students must be taught to interrogate ideas, test assumptions and refine their own reasoning.
The problem with AI in education may not be the technology itself – it may be how students use it, treating it not as a tool but as a solution. And because educators tend to see it as a threat, they are not guiding students along a learning path that would let them use GenAI as an enabler rather than a cheating aid.
That is a genuine concern, because without guided instruction on ethical, critical and creative use, students are left to experiment on their own – often reinforcing misuse and widening the gap between how AI could support learning and how it is actually used in practice.
By outsourcing their work to GenAI, students risk missing the critical thinking building blocks they will need to solve practical problems independently later in life.
AI faces the same suspicion that calculators did when they entered classrooms in the 1960s and 1970s. Electronic calculators removed the need to apply functions like sin and cos by hand, but they did not remove the need to understand when and why to use them.
A change of tool does not mean an end to thinking. Tools exist to extend human capabilities; the step change with AI lies in the kind of capability being extended.
An enabler
GenAI solutions like ChatGPT are designed to help people process information faster – running calculations, supporting thinking rather than substituting for it, and expanding what one can explore, test and create.
This is where things get complicated. What does it mean to “support thinking”? If a student hands in a paper composed entirely of GenAI output, can they credibly argue the tool supported their thinking? Or should educators consider whether GenAI crunching the data to enable a student’s own conclusion is more defensible than ChatGPT simply providing the conclusion?
There needs to be a distinction between students using AI to outsource thinking (getting it to do their homework) and using it to interrogate ideas, test assumptions and refine their own reasoning. The former erodes understanding; the latter makes AI a tool for deeper learning.
Changing teaching methods
In computer science, my area of specialisation, the use of GenAI to generate code in place of students writing it themselves is a significant concern.
A 2025 study by Kaléu Delphino at Georgia Tech found that one in four students in this field anonymously admit to using ChatGPT for plagiarism. Crucially, though, Delphino frames this as a threat to computer science “in its current form”.
When we look beyond GenAI as a threat and start seeing it as a tool, we can adjust how we teach in a way that will enable our students to work alongside it. It becomes an enhancement and we move beyond subjects “in their current form”.
- Dr Alfred Hove Mazorodze, programme coordinator, Belgium Campus iTversity


