On the Sensibility of Cognitive Offloading (Opinion)

I've been worried about my vacuuming skills. I have always loved vacuuming, especially with our upright vacuum cleaner, whose clear bin lets me see all the dust and debris it collects as I run it over the carpet in the upstairs hallway. Lately, though, I've been offloading the downstairs vacuuming to the robot vacuum my wife and I bought not long ago. With three kids and three dogs in the house, our family room sees a lot of traffic, and letting the robot clean it saves me a lot of time. What am I losing by relying on a robot vacuum to clean the house?
Not much, of course, and I'm not actually worried about losing my vacuuming skills. Vacuuming the family room doesn't mean that much to me, and I'm happy to have the robot handle it. Doing so frees up my time for other tasks, ideally watching the birds outside the kitchen window, but more often cooking, a chore I don't have a robot to help me with. It seems perfectly sensible to offload a task I don't care much about when there's a machine ready and waiting to do the job for me.
This is, in part, my response to a new, high-profile study from an MIT Media Lab team led by Nataliya Kosmyna. Their preprint, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task," details their experiments. The team asked 54 adult participants to write short essays in response to SAT prompts over multiple sessions. One third of the participants could use ChatGPT to help with their essay writing; one third could use any website reachable through Google's search engine, but not ChatGPT or other large language models; and one third had no external aids at all (the "brain-only" group). The researchers not only scored the quality of the participants' essays but also used EEG to record the participants' brain activity during the writing tasks.
The MIT team found that "brain connectivity systematically scaled down with the amount of external support." The brain-only group "exhibited the strongest, widest-ranging [neural] networks," while AI assistance in the experiment "elicited the weakest overall coupling." Moreover, the ChatGPT users grew less engaged in the writing process over the multiple sessions, and by the end of the experiment they were often simply copying and pasting from the AI chatbot.
The study inspired some dramatic headlines: "ChatGPT may be eroding critical thinking skills," "Study: Using AI may be damaging your brain" and "Your ChatGPT dependence might really be bad for your brain." Savvy news readers will note the hedging in those headlines ("may," "may," "might") rather than fixate on the scarier words, and the study's authors have worked hard to keep journalists and commentators from exaggerating the results. From the study's FAQ: "Is it safe to say that LLMs are, in essence, making us 'dumber'? No!" As is so often the case in conversations about AI and learning, we need to slow our scrolling and get past the hyperbole to understand what this new study actually says, and what it doesn't.
I should note here that I am not a neuroscientist. I can't weigh in on the soundness of the EEG analysis, although others with expertise in that area have done so, expressing concerns about the authors' interpretation of their EEG data. I do, however, know a thing or two about teaching and learning in higher education, having spent my career at university teaching centers helping faculty and other instructors across the disciplines explore and adopt evidence-based teaching practices. It's the teaching-and-learning context of the MIT study that caught my attention.
Consider the task the participants in this study were given. The participants, all students or staff at Boston-area universities, received three SAT essay prompts and were asked to select one. They then had 20 minutes to write an essay in response to the prompt of their choice, all while wearing a kind of EEG helmet. Each participant attended three such sessions over a period of several months. Should we be surprised that the participants with access to ChatGPT increasingly offloaded their writing to the AI chatbot? That, along the way, they engaged less and less in the writing itself?
I think the takeaway from this study is that if you give adults a thoroughly inauthentic task and access to something like ChatGPT, they will let the robot do the work and save their energy for other things. That's a reasonable, maybe even cognitively efficient, thing to do, like having my robot vacuum tidy up the family room while I wash the dishes or watch for birds in the backyard.
Of course, writing an SAT essay is a cognitively complex task, and for some high school students it may even be an important skill. But this study demonstrates what higher ed has been seeing since ChatGPT launched in 2022: When we ask students to do things that are neither interesting to them nor relevant to their personal or professional lives, they will look for shortcuts.
John Warner, Inside Higher Ed contributor and author of More Than Words: How to Think About Writing in the Age of AI (Basic Books), wrote about this dynamic in one of the first essays on ChatGPT, back in December 2022. Responding to worries that ChatGPT would bring about the end of high school English, he asked what it says about the work we assign in school that we assume students will avoid it any way they can.
Coming back to the new MIT study, it's remarkable that we are more than two years into the ChatGPT era and still trying to assess the impact of generative AI on learning by studying how people handle boring essay assignments. Why not explore how students use AI in more authentic learning tasks, like law students drafting contracts and client memos, education students designing multimodal projects or communication students tackling persuasive writing? We know that more authentic tasks foster deeper engagement and learning, so why not turn students loose on those tasks and then see what effect AI use might have?
There's another, more subtle question about generative AI visible in this study. The authors acknowledge, "We did not divide the essay writing task into subtasks, such as creativity, writing, etc." Writing is a more cognitively complex process than vacuuming my family room, but critiques of AI use in writing often focus on the offloading of the entire writing process to a chatbot. That seems to be what the participants in this study did, and perhaps naturally so, given the uninteresting task they were handed.
When a task is interesting and relevant, however, we're less likely to hand the whole thing over to ChatGPT. Savvy AI users might enlist AI help with just part of a task, such as generating examples, imagining different audiences or tightening up prose. AI can't do everything a trained human editor can do, but as writing instructor (and human editor) Heidi Nobles argues, AI can be a useful alternative when human editing isn't readily available. It would be a stretch to say my robot vacuum works with me to keep the house clean, but there's good reason to think that someone invested in a complex activity like writing might use generative AI as what Ethan Mollick calls a "co-intelligence."
If we want to better understand the impact of generative AI on learning, which is essential if higher education is to keep its teaching mission relevant, we need to study the best uses of AI alongside the best kinds of learning activities. Thankfully, that research is happening, but we shouldn't expect easy answers. Learning, after all, is more complicated than vacuuming.