The Case Against Mandatory AI Disclosure Statements (Opinion)

I once required my students to submit an AI disclosure statement when using generative AI in their assignments. I won’t do this again.
In our current moment of AI saturation, I have leaned toward ChatGPT, not away from it, and was an early adopter of AI in my college composition class. That early adoption hinged on transparency and openness: students had to disclose to me when and how they used artificial intelligence. I still believe strongly in those values, but I no longer believe that required disclosure statements will help us achieve them.
Look, I get it. Abandoning the AI disclosure statement runs counter to many current best practices for responsible AI use in higher education. But in the spring of 2024, I began to question the wisdom of the disclosure statement when I noticed a problem: students in my composition course were submitting assignments that had apparently been created with the help of artificial intelligence, yet they failed to provide the required disclosure statements. I felt confused and frustrated. I thought to myself: “I allow them to use AI; I encourage them to experiment; all I ask is that they tell me when they are using AI. So why the silence?” Chatting with colleagues in my department who hold similar attitudes toward AI and similar disclosure requirements, I discovered they were running into the same issue. Even when we tell students that using artificial intelligence is okay, they are still reluctant to admit it.
Admit. That’s the problem.
Mandatory disclosure statements now feel a lot like a confession, an admission of guilt. Given the culture of skepticism and shame that currently pervades AI discussions in higher education, I can’t blame students for being reluctant to disclose their use. Even in the classrooms of professors who allow and encourage AI, students cannot escape the broader message that its use is something illicit, to be kept secret.
AI disclosure statements have become a strange kind of performative confession: students apologizing to professors, honest students marked with a scarlet “AI,” while less cautious students go undetected (or are suspected but never found guilty).
Mandatory AI disclosure statements may be well intentioned, but they have backfired. Rather than promoting transparency and honesty, they further stigmatize the search for ethical, responsible, and creative uses of AI and push our teaching toward surveillance and suspicion. It would be more productive, I suggest, to take some degree of AI use for granted, adjust our assessment and evaluation methods in response, and work to normalize the use of AI tools in our own work.
Research shows that AI disclosures pose risks both inside and outside the classroom. A study published in May reported that, across a variety of circumstances, any form of disclosure (voluntary or mandatory) led to decreased trust in people using AI. (This was true even when study participants had prior knowledge of individuals’ AI use, meaning, the authors wrote, “the observed effects were primarily attributable to the act of disclosure and not solely to the fact of AI use.”)
Another recent article points to the tension between the values of honesty and fairness when it comes to mandatory AI disclosure: if there is an underlying or perceived lack of trust and respect, people will not feel safe disclosing their use of AI.
Some opponents of AI will point to these findings as evidence that students should avoid AI altogether. But that doesn’t strike me as realistic. Anti-AI bias will only drive student use of AI further underground and lead to fewer opportunities for honest dialogue. It also hinders the development of the AI literacy that employers are starting to expect and require.
Forcing students to disclose their AI use does not foster authentic reflection or honest conversation; at best, it is virtue signaling on our part. Coercion only breeds silence and secrecy.
Mandatory AI disclosures also do nothing to curb the worst features of poorly written AI papers: a vague, mechanical tone; excessive filler language; and, their most egregious hallmark, fabricated sources and quotes.
What I am advocating is not that students confess their AI sins to us through mandatory disclosure statements, but rather a change in perspective and a change in assignments. We need to move from treating students’ AI assistance as a special exception requiring reactive surveillance to accepting and normalizing AI use as a pervasive feature of our students’ education.
This shift does not mean we should allow and accept any and all uses of AI by students. We should not resign ourselves to reading AI-generated nonsense submitted by students trying to avoid learning. When faced with a poorly written AI paper that sounds nothing like the student who submitted it, the focus should be not on whether the student used AI but on why the paper is poorly written and why it fails to meet the requirements of the assignment. And it goes without saying that false sources and quotes, whether produced by a human or an AI, are fabrications and should not be tolerated.
We must design assignments and assessments that curb the unskilled uses of AI that circumvent learning. We must teach students basic AI literacy and ethics. We must build and foster learning environments that value transparency and honesty. But true transparency and honesty require safety and trust in order to thrive.
We can start building such learning environments by working with students to normalize the use of AI. Some ideas that come to mind include:
- Tell students when and how you have used artificial intelligence in your own work, including both your successes and your failures.
- Provide students with clear explanations of how they can use AI effectively at different points in the class and why they may not want to use AI at other points. (Danny Liu’s menu model is a good example of this strategy.)
- Add assignments, such as AI experimentation paired with reflective journaling, that give students a low-stakes opportunity to try out AI and reflect on the experience.
- Provide students with an opportunity to show the class at least one cool, weird, or useful thing they did with AI (and maybe even encourage them to share their AI fails).
The point of these examples is that they invite students into the chaotic, exciting, and scary moment we all find ourselves in. They shift the focus from forced confession to a welcoming invitation to join in and share accumulated wisdom, experience, and expertise as we all adapt to the age of artificial intelligence.



