
Content that AI cannot read: Ambiguity and silence (opinion)

A year ago, I saw AI as a shortcut around careful thinking. Now I use it to teach.

Like many educators, I initially viewed AI as a threat, an easy way around rigorous analysis. But banning AI became a losing battle. This semester, I took another approach: I brought it into the classroom, not as a crutch, but as the subject of study. The results surprised me.

This spring, my students not only used AI, many for the first time, but also reflected on it. AI is more than a tool; it is a mirror that exposes bias, reveals gaps in knowledge and reshapes students’ interpretive instincts. Just as a river carves its route through stone not by force but by persistence, this intentional engagement with AI has begun to change how students approach analysis, nuance and complexity.

When critically engaged, AI is not information for students to consume passively but a tool for sharpening analytical skills. It does not simply produce answers; it provokes new questions. It exposes bias, forces students to reconsider assumptions and ultimately strengthens their ability to think deeply.

Yet universities often focus on controlling AI rather than understanding it. AI policies in higher education default to detection and enforcement, treating the technology as a problem to be contained. But this framing misses the point. The question in 2025 is not whether to use AI, but how to use it in a way that deepens rather than dilutes learning.

AI as a tool for deeper engagement

This semester, I asked students to use AI in a seminar on the testimony of Holocaust survivors. At first glance, using AI to analyze these deeply human narratives seems paradoxical, almost sacrilegious. Survivor testimony resists coherence. It is composed of silences, contradictions and emotional truths that defy classification. How can AI, trained on probability and pattern, engage with stories of trauma, loss and the fragility of memory?

Yet that is precisely why I made AI a core component of the course: not as a shortcut to understanding but as a challenge to it. Every week, my students used AI to transcribe, summarize and identify patterns in testimony. But instead of treating AI’s answers as authoritative, they interrogated them. They saw how AI smooths over contingency, how it misreads hesitation as omission, how it resists the fragmentation that defines survivor narratives. In observing this resistance, something unexpected happened: students gained a deeper understanding of what it means to listen, to interpret and to bear witness.

AI’s polished output hides a deeper problem: it is not neutral. Its responses are shaped by biases embedded in its training data and by a relentless pursuit of coherence, often at the expense of accuracy. The algorithm smooths away the contradictions in testimonies, not because they are unimportant, but because it is designed to prioritize seamless clarity over ambiguity. But testimony lives in ambiguity. Memory thrives in contradiction. Left unchecked, AI’s tendency to sand down rough edges erases what makes survivor narratives so powerful: their rawness, their hesitations, their refusal to conform to a tidy, digestible version of history.

For educators, the question is not only how to use AI, but how to resist its temptations. How do we ensure that students scrutinize AI rather than accept its output at face value? How do we teach them to use AI as a lens instead of a crutch? The answer lies in making AI itself the object of inquiry: inviting students to examine its failures and challenge its confident misreadings. AI cannot replace critical thinking; it demands it.

AI as productive friction

If AI distorts, misreads and flattens, why use it at all? The simple answer is to reject it: ban it from the classroom and treat it as a contaminant rather than a tool. But that would be a mistake. AI is here to stay, and higher education faces a choice: leave students to grapple with its limitations on their own, or make those limitations part of the education itself.

I see AI’s flaws not as grounds for exclusion but as an opportunity. In my classroom, AI-generated answers are not definitive answers but objects of critique: provisional, contingent and open to challenge. By engaging critically with AI, students learn not only from it but against it. They see how AI struggles with ambiguity, how its summaries reduce, how its confidence often outstrips its accuracy. In doing so, they hone the skills AI cannot replicate: the ability to doubt, to interpret and to challenge received knowledge.

This approach echoes Marc Watkins’ observation that “learning requires friction.” AI can be a source of productive friction in the classroom. Education is not about seamlessness; it is about struggle, revision and resistance.

Teaching history, especially the history of genocide and mass violence, often feels like standing on a threshold: one foot planted in the past, the other stepping toward an uncertain future. In this space, AI cannot replace the interpretive act. But it can force us to ask what it means to carry memory.

Used thoughtfully, AI does not erode intellectual inquiry; it deepens it. Engaged wisely, it can sharpen, rather than replace, the skills that make us human.

Jan Burzlaff is a postdoctoral associate in Cornell University’s Jewish Studies Program.
