Isn’t college supposed to be about learning?

In a long, impassioned, heartfelt, well-reasoned and well-evidenced piece published in Current Affairs, San Francisco State University professor Ronald Purser declared, “Artificial intelligence is destroying universities and learning itself.”
The catchy title is a bit misleading, because as Purser makes clear in the article, it’s not “artificial intelligence” per se that is destroying these things. The root of the problem lies with humans, mainly university leaders, who see what technology companies have to offer and fail to realize they are dealing with vampires preparing to drain the life force of their institutions, vampires they have not only invited across the threshold but declared their new soul mates.
Dartmouth recently announced a deal with Anthropic/Amazon Web Services, with president Sian Beilock declaring, “this is more than just a partnership.” The promises are familiar: using AI to “enhance rather than replace student learning,” as if that is something we already know how to do, and something best explored across all aspects of the university simultaneously rather than through careful experimentation. I think I understand some of the motivations for this type of deal, chiefly a desire to seize some sense of agency in uncertain times, but the idea that even an institution like Dartmouth, with its long history in AI development, is truly “collaborating” with these entities seems to me to be wishful thinking.
Purser’s article details much of what I’ve heard while speaking and consulting at various institutions on these issues. There’s a lot of well-earned angst out there, especially in places where the administration’s bets look like a Texas hold ’em player going all-in on a hopeless hand. There is no consultation, no collaboration, no vision beyond vague promises of future abundance. A recent AAUP survey of 500 of its members revealed that one of faculty’s biggest concerns is being completely marginalized as administrations make these deals.
This invited guest casts considerable doubt on the core purpose of the university. As Purser puts it, “Students use AI to write essays, professors use AI to grade, and degrees become meaningless while tech companies make fortunes. Welcome to the death of higher education.”
While Purser’s account is accurate to a point, I would also argue that it is incomplete. As I wrote a few months ago, there are also signs of significant progress in addressing current challenges. The kind of administrative and institutional carelessness Purser documents was not widespread, and even where it was present, faculty and students were finding ways to do meaningful work. Many have successfully addressed what I have long considered a core problem: the “transactional model” of schooling that actively prevents students from taking the risks necessary for learning and personal development.
One of the most common observations I made while doing this work was that many, if not most, students had no real enthusiasm for an AI-mediated future in which their own ideas and experiences were secondary to the output of an LLM. For them, the fact that the model’s output “works” in a school setting is exactly where the problem lies.
I was inspired by Matt Dinan’s account of how he structured his course around fundamental teaching values: making clear to students the importance of doing their own work, the importance of their own thinking, and a genuine belief that the risk-taking learning requires is worth doing and will be fully supported.
What we see is that success comes from giving teachers the freedom to solve problems under conditions that allow those problems to be solved. Note that this does not require rejecting artificial intelligence. For those more interested in AI, there’s plenty of room to explore its integration, but it does mean doing more than signaling to teachers and students, “You’re going to use AI, and you’re going to love it.”
Much of what Purser describes is not just the imposition of AI, but the imposition of AI on a system frayed by decades of austerity, leaving it vulnerable to an ideology that promises greater efficiency and lower costs while still allowing institutions to collect tuition revenue. This thinking reduces the “value proposition” of higher education to its credentialing function.
I know the common image of colleges and universities is that they are slow to change, but I’m actually surprised by how quickly many institutions are betting on an AI future, especially when we don’t know what future we’re betting on.
Applying technology’s “move fast and break things” ethos to education has gained some traction thanks to a sentiment that says, “This thing is already broken, so what do we have to lose?”
We could lose a lot—and lose it forever.
I’m still open to the idea that generative AI and whatever comes with it can have a positive impact on higher education, but I’m increasingly convinced that when it comes to learning experiences, we know very little about how to make that happen. As Justin Reich recently put it in The Chronicle of Higher Education, “Stop pretending you know how to teach with artificial intelligence.”
As we experiment with this new technology, we shouldn’t abandon the things we do know how to teach (such as writing). Nor should we shy away from the structural barriers that Ronald Purser outlines in his article, hoping that an AI savior is around the corner. That is not what students want, it is not what students need, and it is not the way to ensure the continued value proposition of higher education.