Research, Courses and Grading: New Data Reveal How Professors Use AI

Kasun is one of a growing number of higher education instructors who use generative AI models in their work.
A national survey of more than 1,800 higher education staff, conducted earlier this year by the consulting firm Tyton Partners, found that about 40% of administrators and 30% of instructors use generative AI daily or weekly, up from 2% and 4%, respectively, in spring 2023.
New research from Anthropic, the company behind the AI chatbot Claude, shows that professors around the world are using AI to develop courses, conduct research, write grant proposals, manage budgets, grade student work and design their own interactive learning tools, among other uses.
“When we looked at the data late last year, we saw that of all the ways people use Claude, education made up two of the top four use cases,” said Drew Bent, education lead at Anthropic and one of the researchers behind the study.
That includes use by both students and professors. Bent said those findings inspired an earlier report on how college students use the AI chatbot, as well as this research on how professors use Claude.
Teaching with AI
Anthropic’s report is based on approximately 74,000 conversations that users with higher education email addresses had with Claude during an 11-day period in late May and early June of this year. The company used automated tools to analyze the conversations.
The majority, 57% of the conversations analyzed, related to curriculum development, such as designing lesson plans and assignments. One of the more surprising findings, Bent said, was that professors used Claude to build interactive simulations for students, such as web-based games.
“It helps write code so that you can have these interactive simulations, and as an educator, you can share them with the students in your class to help them understand a concept,” Bent said.
The second most common way professors used Claude was for academic research, which accounted for 13% of the conversations. Educators also used the AI chatbot for administrative tasks, including budget planning, drafting letters of recommendation and creating meeting agendas.
The analysis shows that professors tend to automate the more tedious, routine work, including financial and administrative tasks.
“But for other areas like teaching and curriculum design, it’s more of a collaborative process, where educators and the AI assistant are going back and forth and collaborating together,” Bent said.
The data comes with caveats: Anthropic released its findings, but not the full data behind them, including how many professors were represented in the analysis.
The study also captured only a snapshot in time, and the 11-day window fell at the tail end of the school year. If they had analyzed 11 days in October, Bent said, the results might look different.
Working with AI to grade students
About 7% of the conversations Anthropic analyzed concerned the grading of student work.
“When educators use AI for grading, they often automate a lot of it, handing over important parts of the grading to the AI,” Bent said.
The company conducted the study in partnership with Northeastern University, surveying 22 faculty members to understand how and why they use Claude. In their survey responses, the faculty members rated grading student work as the task for which the chatbot was least suited.
It is not clear whether any of Claude’s assessments actually counted toward the grades and feedback students received.
Nevertheless, Marc Watkins, a lecturer and researcher at the University of Mississippi who studies the impact of artificial intelligence on higher education, fears that Anthropic’s findings point to a troubling trend.
“This potential nightmare we may be running into is students using AI to write papers and teachers using AI to grade the same papers. If that’s the case, what is the purpose of education?”
Watkins said he was also troubled by uses of AI that devalue the relationship between professors and students.
“If you’re just using it to automate some part of your life, whether that’s writing emails, letters of recommendation, grading or providing feedback to students, then I really object to that.”
Professors and teachers need guidance
Kasun, the Georgia professor, also doesn’t think professors should use AI for grading.
She hopes universities will offer more support and guidance on how best to use the new technology.
“We’re out here, alone in the forest, fending for ourselves,” Kasun said.
Bent, of Anthropic, said companies like his should work together with higher education institutions. “For us as a tech company to tell educators what to do or what not to do is not the right approach,” he cautioned.
But educators and people working in AI, like Bent, agree that the decisions being made now about how to incorporate AI into college and university courses will affect students for years to come.