AI agents invade learning management systems, and other things we should know

I’ve spent much of the past 18 months writing and talking about how I think we can and should continue to teach writing, even now that we have technology that can produce synthetic text. While my values on this issue are unshakable, there is no denying that the world around me is changing, which requires constant vigilance about the capabilities of this technology.

But like most people, I don’t have unlimited time to devote to these things. One of my recommendations in More Than Words for dealing with these challenges is to “find your guides,” people you can trust who are focused on different aspects of the problem.

Marc Watkins has been one of my guides throughout this period. He is dedicated to staying current on the evolving impacts of the technology and the ways students are using it.

I thought it might be helpful to share with others the questions I’ve been wanting to ask Marc for my own benefit.

Marc Watkins is director of the AI Institute for Teachers and assistant director of academic innovation at the University of Mississippi, where he is a lecturer in writing and rhetoric. He believes that when it comes to training teachers in applied AI, educators should receive equal support whether they choose to work with AI or to add friction to curb AI’s impact on student learning. He writes frequently about artificial intelligence and education on his Substack, Rhetorica.

Q: One of the things I appreciate most about your work at the intersection of education and generative AI is that you actively engage with the technology, through a lens of asking what specific tools may mean for students and classes. I appreciate it because my personal interest in using these things, beyond maintaining a sufficient general familiarity, is limited, and I know we share similar values at the heart of the work of reading and writing. So, my first question, for those of us who aren’t keeping pace with these things: What is the state of play? What specifically do you think teachers should know about the capabilities of this generation of AI tools?

A: Thanks, John! I think we are on the same page when it comes to values and artificial intelligence, in that we both see human agency and human will as key to education and to society moving forward. Part of my life now is giving AI updates to many different groups. I visit with faculty, administrators, researchers, and even quite a few people outside academia. Just keeping up is exhausting, and taking stock is nearly impossible.

We now have agentic AI that uses your computer to complete tasks for you; multimodal AI that uses machine vision and speech to see and interact with you; reasoning models that take a simple prompt and run it repeatedly to approximate what working through a complex response might look like; and browser-based AI that can scan any web page and perform tasks for you. I’m not sure students know what AI can do beyond interfaces like ChatGPT. The best thing any teacher can do is have conversations with students, ask them if they are using AI, and assess how AI is impacting their learning.

Q: I would like to learn more about AI “agents.” You recently published a piece on this issue, as did Anna Mills, and I think it’s important for people to know that these companies are purposefully developing and selling technology that can go into a Canvas course and start doing the “work.” How should we think about this in terms of how we design courses?

A: I think online assessment is mostly compromised at this point and cannot be saved. But there are still opportunities for online learning, and that is something we should fight for. Despite its many drawbacks, online education provides an effective way for people to obtain a college education they might not otherwise be able to access. There are too many issues around equity and access to eliminate online education from higher education entirely, but that doesn’t mean we can’t think fundamentally about what it means to learn in online spaces. For example, you could assign students in an online course a process notebook and have them write by hand with pen and paper, then take a photo or scan it and upload it. The [optical character recognition] features in many foundation models make it possible to transcribe most handwriting into legible text. We can and should look for ways to give our students concrete experiences in virtual spaces.
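To make that process-notebook workflow concrete, here is a minimal sketch of the upload-and-transcribe step, assuming the OpenAI Python SDK and a vision-capable model; the model name, prompt, and file name are illustrative, not a specific tool Watkins endorses.

```python
# Minimal sketch: transcribing a scanned handwritten page with a
# vision-capable model. Assumes the OpenAI Python SDK is installed
# and an API key is set in the environment; the model name and
# prompt are illustrative.
import base64

from openai import OpenAI

client = OpenAI()

def transcribe_handwriting(image_path: str) -> str:
    # Encode the scanned page as base64 so it can be sent inline.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would work here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this handwritten page into plain text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Hypothetical file name for a student's scanned notebook page.
print(transcribe_handwriting("process_notebook_page.jpg"))
```

The point of the sketch is only that the transcription step is now trivial plumbing; the pedagogical value lives in the handwritten process work itself.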

Q: In her newsletter, Anna Mills calls on AI companies to work together to prevent students from deploying these agents to do all of their work for them. I doubt this is possible. I see an industry that seems content to steamroll teachers, institutions and even students. Am I being too cynical? Is there space for working together?

A: There’s definitely room for collaboration and for limiting some of the more egregious use cases, but we also have to be realistic about what’s going on here. AI developers move fast and break things with every deployment or update, and we should be deeply suspicious when they offer to clean up the pieces, lest we forget how things got broken in the first place.

Q: I’m curious whether this technology has evolved to where you expected it to be a year, or 18 months, ago. How quickly do you think these things are developing in terms of capabilities relevant to school and learning? What do you see on the horizon?

A: The problem we are seeing is uncritical adoption, hype and acceleration. AI labs create a new feature or use case and deploy it within days for free or at low cost, and industry suddenly adopts it to bring the latest AI capabilities into enterprise products. This means that a non-AI application we’ve been using for years will suddenly have AI integrated into it, or, if it already has AI features, we’ll see it updated rapidly.

Most of these AI updates have not been tested enough to be trusted without human assistance in the loop. In effect, we are all beta testers. This is creating “workslop,” where companies see employees using AI uncritically to save time and producing work filled with errors that then require time and resources to fix. To complicate matters, there are growing signs that the venture capital funding AI development is one of the main reasons our economy has not fallen into recession. Students and teachers find themselves at ground zero for much of this, as education appears to be one of the industries most affected by AI.

Q: When I work with faculty on campuses, one of the questions I often get is what I think AI “literacy” looks like, and while I have my own ideas, I tend to go back to my core message, which is that I’m more concerned with helping students develop their human abilities than with teaching them how to use AI. But let me ask you: What does AI literacy look like?

A: I think AI literacy actually has very little to do with using AI. I define AI literacy as learning how the technology works and understanding its impact on society. Based on this definition, I believe we can and should integrate aspects of AI literacy into our teaching. The part about using AI responsibly, which I call AI fluency, has its place in certain courses and disciplines, but it needs to go hand in hand with AI literacy; otherwise, you risk uncritically adopting a poorly understood technology rather than demystifying AI and helping students understand its impact on our world.

Q: Whenever I visit a campus, I try to find an opportunity to talk to students about their AI usage, and in most cases I see a lot of critical thinking about it, with students recognizing the many risks of outsourcing all of their work, but also sharing that within the systems they are operating in, it sometimes makes sense to use it. This makes me think that, ultimately, our only response may be to deal with the demand side of the equation. We cannot regulate these things. Tech companies won’t help. Students will have to make the choices that are best for their lives. Of course, this has always been the case as we grow and develop. What do you think we should focus on to address these challenges?

A: My current thinking is that we should teach students, and probably ourselves, how to discern the capabilities of AI tools. When we are dealing with machines that mimic human intelligence, there is no rulebook or precedent for us to refer to. My approach is to be radically honest with students and teachers. What I say is: I can’t police your behavior here, and no one else will. We all have a responsibility to form a social contract, to agree on where this technology belongs in our lives, and to establish clear boundaries in the areas where it does not.
