AI detection tools are powerful when instructors know how to use them

To the editor:
I sympathize with the overall thrust of Steven Mintz's argument in Inside Higher Ed's "Writing in the Age of AI Skepticism" (April 2, 2025). AI detection programs are unreliable. To the extent that teachers rely on AI detection, they contribute to the erosion of trust between instructors and students, which is not a good thing. And because AI "detectors" flag features like predictable phrasing or "fluency," they implicitly invert our values: we come to prize less structured or less coherent writing because it reads as more authentic.
However, Mintz's article can mislead. He repeatedly reports that, when he tested detection software on his own and others' human-written work, it returned scores framed as "the percentage produced by AI." For example, he writes that a piece from January 2019 was judged "27.5 percent likely to contain AI-generated text." Although the tool he used for this exercise (ZeroGPT) does claim to report how much of the writing is AI-generated, many other AI detectors (such as GPTZero) instead report the probability that the writing as a whole was written by AI. Both kinds of figures are imperfect, but they convey different things.
Mintz's larger argument remains useful. But instructors who object to this technology, on empirical or principled grounds, will make their case more persuasively if they show a firm grasp of the nuances of the various tools.