Sept. 22, 2025

Context rot, AI over-hype and an intriguing, hilarious video


Welcome to another episode of "AI Goes to College," where hosts Dr. Rob Crossler and Craig dive into the ever-evolving landscape of artificial intelligence in higher education. In this episode, they kick things off with a lighthearted look at the viral, AI-generated parody "Redneck Star Trek," using it as a springboard to discuss the rapidly advancing capabilities of AI video creation and what this means for both educators and students.

Rob and Craig explore the implications of AI tools making creative content more accessible, shaking up traditional teaching methods, and opening new doors for engagement. They unpack the excitement—and potential pitfalls—around trend topics like vibe coding, the “agentic layer,” governance concerns, and the phenomenon of “context rot” in AI conversations.

Along the way, the hosts share personal experiences with different AI platforms, discuss challenges in scaling AI within institutional systems, and highlight the need for critical thinking and strong oversight as universities start to embed AI more deeply into daily operations. Whether you’re a faculty member, student, or just AI-curious, this episode offers practical insights, food for thought, and a dose of humor for anyone navigating the intersection of technology and higher ed.

So grab your coffee (or sweet tea) and join Rob and Craig as they “go where no podcasters have gone before” in the world of collegiate AI!

Takeaways:

  • The emergence of AI technologies presents unprecedented opportunities for students to create engaging content, thereby transforming traditional classroom dynamics.
  • With AI-generated content, educators can offer varied media presentations, catering to diverse learning styles and enhancing student engagement.
  • The recent advancements in AI tools have made it feasible for novices to produce high-quality content, which was previously the domain of experts alone.
  • Concerns regarding AI-generated outputs necessitate critical evaluation to avoid potential misinformation and ensure educational integrity.

Companies mentioned in this episode:

  • Neural Derp
  • Grok
  • Google Vids
  • Wondery
  • Claude
  • ChatGPT
  • Fast Company
  • Microsoft Copilot
  • Suno AI

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:42 - Introduction to the New Season

06:51 - The Impact of AI on Education and Content Creation

11:49 - Vibe Coding and Its Implications in Education

25:01 - The Importance of AI Governance in Education

28:56 - Understanding Context Rot in AI Conversations

38:45 - AI Tools in Education: A Discussion

Transcript
Craig

Welcome to another episode of AI Goes to College. And as always, I am joined by my friend, colleague and co-host, Dr. Robert E. Crossler from Washington State University. Rob, how are things in the great Pacific Northwest?

Rob

It's starting to become fall, Craig. Students are on campus and we actually saw some rain.

Craig

Yeah, Mother Nature was just being nasty to us. We had some cool weather, but now it's back up in the mid-90s. So that's life. So speaking of life in the South, how's this for a segue? I sent you a link to one of the funnier AI videos I've seen. It's called Redneck Star Trek and it's by a group called Neural Derp. So, Rob, you looked at it. What's your description of this video?

Rob

Well, before I give my description, Craig, I'm going to say you better put that in the show notes, because people need to enjoy this. Yes, I thought it was hilarious. It took me back. It's been a long time since I've gone down a Star Trek trail, but growing up, that was one of the shows I watched regularly, and it was perfect. The whole "Beam Me Up, Bubba" thing fits the Southern mindset very well, and it had the characters from Star Trek, and it was done very well. I was very impressed with what they were able to create.

Craig

Let me read the description. By the way, it has almost 1.5 million views at this point. It's: welcome to Redneck Star Trek, where sweet tea flows like warp plasma and tribbles clog up the septic. This AI-generated music video is a wild ride through retro-futuristic chaos, deep-fried dreams and Redneck Trek pride. With AI music, AI country and a smokin' redneck remix, this is Beam Me Up Bubba like you've never seen. And it goes on from there. It is set in a trailer park starbase, features a redneck Jedi, a time-traveling Rasta wizard, a '50s wizard, and even an '80s wizard, all grillin' gator and belting out Star Trek music powered by Suno AI. This ain't your grandpa's Star Trek parody. This is Beam Me Up Bubba with attitude, AI swagger and some real backwoods genius. Other than being funny, what did you think?

Rob

Well, for one, my first thought was: how long did it take them to create this, and what was their process like? Because I've seen other videos that have been made with AI, and they have not been nearly this good, not this well done. So it would be fun to watch that process to see, you know, how easy it is, or how hard, as the case may be. Because, you know, it's scary if you can do that through prompts and various different things, for all sorts of reasons. What can we believe and not believe if we get really good at making up realistic videos?

Craig

Yeah, yeah, it was scary how realistic it looked. You knew when you were looking at Khan and Spock and William Shatner. It was really pretty good. And it's absolutely worth the, however long it was, I don't know, three minutes or so to watch it, if you need a laugh. Have you done much with video yet?

Rob

I've played a little bit with video through Grok and through ChatGPT and Gemini, and I'm always disappointed with what I create. So it's definitely not easy to get dialed in.

Craig

Yeah, I think we're at the very early edge of where models like Veo 2 and Google Vids, I don't know if you've played with Google Vids, that's not just an AI platform, it's also a pretty easy-to-use video editing platform. And I think some interesting things are just about to be possible for non-experts. I mean, that's one of the big recurring messages we're seeing in AI: things that only experts were able to do, now pretty novice people can do. Not as well as an expert, but okay. I mean, I don't know if you saw the AI Goes to College newsletter article that I let AI write. I mean, it was, I don't know, you judge it for yourself. I don't think it was as good as what I would have written, but it was not bad. It was decidedly not bad. And I haven't had one person comment on the quality of that article. Now, full disclosure, I did publish the AI-generated article and then at the end I put in a little reflection about what I thought of it. But look, if I'd have put that out there without saying anything, I'm not so sure anybody would have noticed. They weren't paying attention.

Rob

Well, that begs the question, Craig, how many people actually read your newsletter?

Craig

That one's got 350. 400.

Rob

Yeah. Reads. You know, it was well done, and had you not told me, I think there are some things you might have picked up on if you were being a critical reader. But on the surface, it was well done.

Craig

Yeah, I mean, if you read a lot of stuff that I wrote, you could kind of pick up that it wasn't exactly my voice. But if you're just reading something, well, we talk about AI slop; the days of somebody with even a tiny amount of skill and care churning out AI slop, that's over. Wondery is a big podcast studio. They're putting out thousands of podcast episodes a day now. And I haven't listened to them, and I won't, but I'm guessing they've got some people that are fairly talented that can put out something that's not bad. I think we're entering into the era of not bad. This Redneck Star Trek thing is fantastic, but those people, they know what they're doing. They know what they're doing with AI, they know what they're doing with editing video. So I'm sure that's a bunch of clips that are stitched together, but the way that they have continuity, that's really hard to do still. All right, so it's a fun video. It definitely is worth spending a few minutes on if you need a little laugh. And who doesn't? But what does this mean for higher ed?

Rob

The potential is endless, I think. From giving students the power to create really engaging content, we can ask more of our students, having them create things we could never ask for before. We can actually flip classrooms in ways we haven't been able to before. If video creation and these different things become easy, you can probably change the way you teach students and incorporate really, really cool videos. It's engaging with the students, not the boring face of dad up there talking at them. And then spend class time engaging with what the learning provides us.

Craig

Or grandpa, in my case. Yeah. You know, there's a lot of criticism of learning style theory, but I think it's absolutely spot on that some people learn better through reading, some people learn better through audio or video. You know, being able to pretty quickly provide a range of media with essentially the same, or at least the same-ish, material is pretty powerful. I mean, I hate creating tutorial videos, and we have to do it. You know, you teach a class that's got Excel, you've got to do a pivot table video, and then you've got to do this formula video and that formula video. And mine are always utilitarian and awful. You know, they get the job done, but they're just not great. Now it could actually be engaging and entertaining instead of just, here's how you do this thing.

Rob

Yeah, no, I did this experiment with Google's new product. I'm not sure if it's a standalone product or part of what they do, but you can make storybooks. And I gave it a prompt. I wanted it to explain how TCP/IP worked. For those of you who aren't technical, TCP/IP is basically the rules for how Internet communications work. So it's a fairly complicated topic, but it's a fairly simple implementation of it. But I asked for it to write, in an anime theme, how this works for undergraduate students. And it was about 10 pages. And I was amazed. In two minutes it had what I thought was a fun concept. My kids told me it was worthy of being shared with my students. And it just opens up a whole world of possibilities of how to easily present things in a way that's much different than death by a 30-page chapter in a textbook.

Craig

Not that we're dissing textbooks. Yeah, it was pretty good. It's a custom Gem; you can just click on it, it'll ask you some questions about what you want, and it produces a storybook. I did one for our great-nephew Charlie on his first day of kindergarten. You know, you put in a few details, and, you know, it's pretty good. But that's something that was clearly not intended for educational purposes. But yeah, I mean, kids would read that. And I think that's where we get with this video.

Rob

Well, and that's what I'm thinking as we think about this video. I mean, what if you could actually take a chapter in a textbook and feed it to something like this and have students interact with it in a way where every interaction resulted in a two-to-three-minute video that was spot on with what they wanted to learn? So it became almost a dynamic, real-world video presentation of the material in the corpus of text that students should be learning from. That changes things, right? That changes the approach to education. And we can start focusing more on how to apply what you've learned in context, as opposed to so much the digesting and the learning of it and the theory involved.

Craig

Absolutely. It's not really "meet students where they are," but being able to deliver, at scale, the same content in multiple ways.

Rob

What makes this exciting, though, Craig, is you can move to a world of mastery, and different students are going to obtain mastery in a subject at different paces. And if you can define your learning around "this is the level of mastery I want to see from my students," and you can do so in a customized way for each student, dynamically, based on what the material is, it blows up some of the ways we've done education, because you've never been able to do that at scale before. And it's getting closer and closer to the point where we can do that at scale.

Craig

Right, right. Absolutely. All right, so let's wrap this up. We covered a lot of ground there. It's something to keep an eye on. I personally am not advocating that anybody do more than just play around with these things yet, unless you just enjoy doing it. I mean, some people do, if you do dig in deeper. But Veo 2 and Google Vids are the two that I would start checking out. I mean, there are probably better tools, but Google seems to be making these accessible, and they're getting better and better. Rob, you sent me a brief little article from Fast Company about vibe coding maybe not being exactly what a lot of people thought it was. So first tell us what vibe coding is, and then tell us what you thought about the article.

Rob

Yeah, so vibe coding is the idea that we can speak to these large language models, whether it's Claude or ChatGPT or others, and tell it what we want, and then it will go out and write the code for us, and it will execute, and it will work. It's almost amazing, right? You can take people with no technical skills, and they can create programs. So when that first emerged, it raised a lot of questions about the value of computer scientists, the value of programmers, entry-level jobs, a lot of those very, very concerning things that we're seeing in the headlines. Lately I've been seeing more and more of this "yes, but" sort of attitude. What I mean by that is, people in the industry who are playing with this are beginning to realize that vibe coding isn't everything it's made out to be, that it oftentimes breaks existing code. Because one of the things about many, many companies is that their software includes many different pieces that have to talk to each other, and they all have to work seamlessly for the entire big application, which from a user's perspective may look simple, to do what it actually needs to do in order to work. And so with this vibe coding, it may add new features, but in the process it may break 10 others. And then the coder, the person who knows what they're doing, goes in and spends much longer trying to fix the things that it broke than if they would have just written the code themselves to begin with. So while it looks like it could be really awesome and great, at the end of the day, this approach to coding is creating more work, because the programmer has to go understand what it was doing, understand what it was supposed to do, and then figure out how to tweak and change things to get back the functionality that was lost.

Craig

It's an interesting issue. So I think that we're entering into this phase, at least with vibe coding, where vibe coding is really good for some things. I mean, it's pretty cool. You can create a little game, you can do a lot of stuff without knowing how to code at all, but that doesn't scale. So I think we're in a danger zone in a couple of ways. One way in which we're in a danger zone is organizations that just say, okay, we don't need as many programmers anymore, we can just vibe code all of this stuff. They're experiencing a rude awakening. I mean, that was ridiculous to start with. I'm not a coder, but anybody that spent any time at all with these tools could tell that that was not going to go well. So let's put that aside for a second. The other danger zone is when organizations look at vibe coding and think, well, why can't we do vibe accounting? Why can't we do vibe portfolio management or some other form of knowledge work? And there will be tools, and probably are tools, for doing that kind of thing. They're going to think, oh, the same thing could happen. We're already seeing a big decline in entry-level hiring. I think we're going to see a rebound at some point, because this AI stuff is not going to work as well as the hype said it would. Then I think we're going to see another dip. Now, what happens in between those dips? You get into this whipsaw effect where there's no hiring, we cut back on the number of students we put out in the world, then there's this demand, so we oversupply, and then, you know, it's a classic problem in supply chain management that I think we're going to see here as well. So that's the other danger zone, the whole employment aspect. I will say we're already seeing tools that are set up to be enterprise grade, because vibe coding may not work at scale, but using AI to help you code absolutely does.

Rob

But there are a couple of concerns I want to point out with that too, Craig. One is, with the tools that they're putting in place, there are these new threat vectors, these new ways that bad guys, for lack of a better word, are attacking organizations. They learn where that source repository of code is that these systems look at, which is oftentimes GitHub, and they're burying malicious code into those places. So unbeknownst to you, when you're vibe coding, whether it's enterprise grade or otherwise, you're grabbing code to build upon that may not have been vetted perfectly well, because a human's not actually looking at it to say, oh, well, nothing should ever do that, or to catch whatever that nuance is. You're potentially introducing risk into your organization. So if you're a student hearing this, know to be skeptical of these tools, and know that you need to really understand them. And if you're in an organization, don't cut corners, because you very well may regret introducing a potential security incident that you didn't intend.

Craig

Yeah, yeah, absolutely. Although I think it's just an accelerated version of a problem that's been around for a while, because people have been grabbing stuff off of GitHub and other repositories for a long time. All right, so we kind of geeked out on that. What does this mean more broadly for higher ed, for other disciplines? Any ideas on that?

Rob

Well, I actually read this from the perspective of an information systems student, which is the domain that I teach in, and Craig as well. Vibe coding is really good for prototyping. If you have an idea of something you want to do, you can very quickly get a working version of it and sit down with engineers or an end client or whomever, show it to them, and get real-time feedback, without having to go off and have people create something just to be able to talk about, is this what you want it to do, is this what we're trying to accomplish? So from a rapid prototyping perspective, I saw nothing in here that got me terribly concerned. It was really just the scaling, how well it scales and how it breaks things, that became very concerning. So I can see differences in approaches to a systems analysis and design class, and maybe mockups of websites and various different things. You can probably do a whole lot more, without having to be technically capable, in creating things to engage in the conversation of how technology meets business demands.

Craig

So that's still our world. So what if I'm teaching sociology, or I'm teaching psychology or biology or some other ology? Why would we care about this? Why should they care about this?

Rob

Well, in some ways they may be able to take advantage of it, right? So they could actually have students that know nothing about technology creating solutions that match the theory of what they're teaching and do some technologically cool things. My guess is you could also begin to understand how these changes in how we're doing things technologically are affecting the human mind and the human state and various things like that. So I can see a couple of avenues. What do you think, Craig?

Craig

Yeah, I think other faculty in other areas ought to take a look at this and follow the trajectory. I mean, it's a classic: here's this cool thing, hype, hype, hype, hype, hype. Okay, maybe it's not as cool, but then what we're going to get to is going to be okay. There was a lot of stuff it wasn't good for, but there's still a lot of stuff that it is good for. And I think we're going to see that carry out in a lot of other areas, anything that's related to knowledge work in any form, dealing with data, dealing with documents, trying to structure information. Maybe not as dramatically as vibe coding, but we're going to see the same sort of thing play out. So just don't panic. Find ways to leverage it, ride out that initial wave, don't get too carried away, but also don't write it off. It was Amara's law: we overestimate the effects of a new technology in the short term and underestimate them in the long term. Something like that. I'm sure I butchered it, but you get the idea. All right, so I want to move into a related article that you sent me, kind of, and this one's got a long title, and there will be links to these articles in the show notes: "The Agentic Layer: Why the Middle of the Cake Matters in AI-Driven Delivery." They got a little carried away with their metaphor, but I think it's a pretty good article overall. So, Rob, what do you got to say?

Rob

Yeah, so this article warned people basically not to get so caught up in the magic, if you will, or the frosting, as they referred to it, of the AI. That frosting being the really fancy, cool things that it can do, because AI can do a lot of really, really cool things, and we're seeing a lot of companies being started up that are selling what those really cool things are. And the premise of the article is to pay attention to the transparency of how it is accomplishing all of those things that we're buying the product for. And so, as I think about administrators, the people making decisions about what products they're going to bring into the institution for deployment: if you don't know what's happening with your data in the middle, if you don't know what that code is doing, if it's just smoke and mirrors to make this beautiful, awesome thing happen, spend some time trying to figure out what that is, how that works, and how you can see into it. Because there are a lot of potential security vulnerabilities, data not being handled properly, and at the end of the day you, the organization, are going to be on the hook if those bad things happen. Also, I've seen enough stories of people deploying new AI technologies where it does not behave the way it's supposed to. And it is an agent of you, the organization, and all of a sudden it is giving advice or giving answers that are inconsistent with what you would want to give. So I could see, and I haven't seen this story, but I could see software that you purchased to advise students. And let's say that a student is struggling, and your agent advises the student to drop out of school. Is that what a human counselor might advise? Perhaps. Right. With enough information: you need to get things in order, here are some resources, let me help you.
But if the story gets out that a machine had told students, hey, you need to leave, that may actually be perceived really, really badly.And you don't want to put the machine in the position to make that recommendation to students. And so I think there's a lot of risk if you don't understand why certain recommendations are being made without that human component.

Craig

Let's go one step further with your example of telling a kid to drop out of school. So you do want a machine to say that if that's the right thing for the student to do. But you've got to be sure. Generative AI is generative, so it may not do the same thing twice in a row. And so this article comes out of kind of a subfield called DevOps, development operations, which is a really geeky thing. And it's focused on continuous integration and continuous delivery. So you've got these environments where things are very dynamic and moving rapidly, and you want to be able to continuously put different things together and then deliver a third thing. So you're pulling all these pieces together and you're delivering something. So if you're on Amazon, there are a thousand different things coming together, and it's all about presenting you with what you need to see, when you need to see it, and you times however many million customers are using Amazon.com at any given moment. Where we're going with AI, eventually, is that we're going to rely on it to do things for us without us really thinking about it being AI. We've already got systems that will create schedules for students. Here's your plan of study, here's where you are, here's maybe some other stuff about you, here's your schedule for next term. If you don't like it, try to change it. If you're okay with it, just click register; you're guaranteed these seats. There's a lot to like about that. There are some things to not like about it, but there's a lot to like about that. Think about all the effort that goes into advising for just kind of routine scheduling stuff, how much time students spend trying to think about, should I take this class or that class, and trying to figure out their plan of study. And all that can be done by AI. But when you start rolling generative AI into it, where it's doing things that aren't really cut and dried, there's a lot of risk there.
If you advise a kid out of school that really needs to take a break and get their head together or whatever, that's good. But if you start advising kids to stay who ought to be leaving, or to leave who ought to be staying, that's a problem.

Rob

Well, Craig, I think where this is important, and I'm not saying don't use these tools, because there are going to be a lot of really cool tools, a lot of really helpful tools that come out, is to be skeptical and to ask those questions of, you know, how is it coming to its conclusions? What is it doing? And to test it thoroughly and be okay with, you know, the fact that it might be wrong sometimes, but understand where it will be wrong and what the implications of that are. I'm not sure what the statistics are, but every large language model that's out there, every generative AI tool, has some percentage of hallucinations, where it's still making up data or being inaccurate. And so there are some places where even a 1% chance of making a wrong decision is going to be not good. So, you know, pay attention to that, and don't be so quick to jump on the next new thing without really processing out what happens if, and being comfortable with that, before you make those decisions.

Craig

Yeah, right, absolutely. So I want to point out two more things on this article. They had a little section where they bullet-pointed some things that keep AI systems running and don't really get talked about a lot. I want to focus on two of them. One is governance. Rob and I were talking in the pre-show about how we need to do an episode on governance. But AI governance is basically making sure AI is doing what it's supposed to do, in the way that it's supposed to do it. A lot of it's about risk management, about establishing the level of trust and monitoring that should be there, compliance, all kinds of things. It's a complicated topic. In fact, I did a deep research report on AI governance. I've got a new record number of pages. Take a guess, Rob. This will blow your mind.

Rob

I'm not going to guess well, but 275.

Craig

No, not that big. 151 pages.

Rob

Okay.

Craig

Yeah. Which is probably three times as long as it needed to be. But universities, and I'm talking to you administrators more than individual faculty at this point, administrators need to be thinking about AI governance, because we are absolutely going to have, and already are seeing, AI embedded in administrative systems and learning management systems and retention systems and advising systems, all of these kinds of things. If they don't have AI in them now, they will. They absolutely will. And we need to have ways, institutionally, to govern. Schools are not good at that. When the web became a thing, it was a bunch of faculty going off and creating web pages that lacked security and privacy, and I'm as guilty as anybody. And every course had a different look and feel, and it took, I don't know, a decade or so at least, to get to where we are now, with learning management systems that are at least somewhat under control. So that's one thing we need to be thinking about. The other one is knowledge management. And I had not thought about this. You've got people going off, doing their own thing, reinventing the wheel, et cetera, et cetera, et cetera. We've talked about that in terms of assignments, but in terms of workflow and processes and AI being embedded in these things, we're probably going to see departments doing their own little experiments, individual faculty, maybe colleges. That's going to be inefficient, and it's going to be really hard to govern. So I feel like I went on a little rant there.

Rob

No, I think it's important, and I think it's a nice teaser for where we need to go in a conversation, because these are all important things. As we, you know, make our way through the wild, wild west of where AI is, there are going to be landmines, and if we're not intentional about them, we are surely going to step on them, and it's going to cause some pain.

Craig

Are we going to go where no man has ever gone before? Is that a Star Trek thing? I feel like that's a Star Trek thing.

Rob

That is. I think you may need to write a song, Craig.

Craig

So there you go. Well, AI can do that for us. All right. Anything else on that article?

Rob

No, we'll revisit that, I'm sure.

Craig

So you mentioned hallucinations and making things up. There's also a related problem where AI forgets stuff. And there's actually a term for it. It's called context rot. And look, we do this as humans. I don't remember exactly what we were talking about when we started this conversation, and the longer the conversation goes on, the less and less I remember about the early points. And you've all been in meetings where you or somebody else forgot something that got said in the first five minutes of an hour-and-a-half-long meeting, which ought to be outlawed, by the way. But we do this, and AI does this too. There's a thing called a context window, which is essentially the amount of memory that the AI large language model has. It can run out of that, but you kind of don't know when it's starting to run out, when it's starting to forget things. And it's not like it says, whoa, whoa, whoa, we've got to stop, because my brain is full. It just starts getting some things wrong. That's a problem. Rob, what do you think about that?
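[Editor's note: Craig's "brain is full" description can be illustrated with a toy sketch. Nothing below uses a real model API; the fixed-length message list is a stand-in for a token-limited context window, and real models trim by tokens, not by message count.]

```python
from collections import deque

# Toy stand-in for a token-limited context window. Real LLMs measure
# the limit in tokens; a fixed-length deque of messages is just an
# illustration of the same silent-eviction behavior.
MAX_MESSAGES = 4

context = deque(maxlen=MAX_MESSAGES)

for turn in [
    "The course is an intro Excel class.",       # this turn will fall out
    "I need a pivot table tutorial.",
    "Make it beginner friendly.",
    "Keep it under five minutes.",
    "Add a section on formulas.",                # pushes the first turn out
]:
    context.append(turn)

# The earliest message is gone, and nothing warned us it was dropped.
print("The course is an intro Excel class." in context)
```

The point of the sketch is that the oldest turn disappears without any error or notice, which is exactly why the model "just starts getting some things wrong" instead of announcing that its memory is full.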

Rob

I think it's important.Even more reason why the human involved in the process needs to be critical of what it's getting, because in those moments where it's starting to get things wrong, it's still going to be very confident that it knows what it's talking about. So if you're just, you know, on cruise control, you might be in trouble.

Craig

And what makes context rot a really bad problem is that it sneaks up on you.

Rob

Yeah. And I want to go back to our previous topics, even, right? Context rot in vibe coding, or those sorts of things. It could actually start creating programs through your vibe coding that aren't exactly what you intended, because it didn't have the context exactly right anymore. So whether it's document creation or whatever you're doing, it can have you turn left when you meant to turn right and end up in a place where you never intended. But you may not catch it if you're not paying particular enough attention to what's going on.

Craig

Yeah, absolutely. And I know I've said this before, but I'm going to say it again. What makes this problem particularly pernicious is that it will fabricate. The objective function of these AI models is to satisfy the user, which is why they hallucinate. I mean, they are literally programmed in ways that lead them to want to give you an answer; they're rewarded for giving an answer over saying, "I don't know." And just like a kid, if you constantly reward a kid for doing something, they're going to end up doing it. Or a dog. We learned that the hard way: we accidentally taught our dog Maggie to point to the treat basket while trying to tell her to leave it. Too late, we realized she was pointing toward the treats, and until the day she died, she would point toward the treats. Sorry, if you're not a dog person, just ignore that last part. But these models will just give you an answer, so they'll fabricate. Even if they don't remember something from early on, they try to maintain coherence, so they'll fill in gaps when they really shouldn't. And they don't want to constantly ask you, "hey, can you repeat what you said 20 minutes ago?" So context rot can hurt your research accuracy, it can screw up things you're trying to get writing assistance with, and it's a big problem for complex problem solving. For any big task, context rot can be a problem if it's high risk. If you're in long sessions, think about restarting your conversation, and build in some checkpoints. Every 15 or 20 minutes, make up a number, say: "Summarize the most important parts of this conversation." That does two things. It gives you a checkpoint, because you'll forget this stuff too; you've had chat sessions where you forgot what you started out with. And if it's starting to get things wrong, if it's misremembering something, you can catch it and take it over into a new conversation.
You can use that summary, clean it up to be accurate, start a new conversation, and keep going. I think that, along with awareness, are the two big things you can do. If you're brainstorming, if you're asking it, "what's this spider that I just saw in my garden?", context rot is not a problem. But if it's a long session, if it's really detailed, if you need a lot of precision, put those checkpoints in place.
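Craig's checkpoint-and-carry-over workflow can be sketched in a few lines of code. This is purely illustrative: `ask_model` is a hypothetical stand-in for whatever chat API you use, and the four-characters-per-token estimate is a rough rule of thumb, not any model's real tokenizer.

```python
def estimate_tokens(messages):
    # Crude heuristic: roughly 4 characters per token for English text.
    return sum(len(m["content"]) for m in messages) // 4

def maybe_checkpoint(messages, budget=2000, ask_model=None):
    """If the conversation is nearing the token budget, request a summary
    and return a fresh message list seeded with that summary.
    Otherwise return the messages unchanged."""
    if estimate_tokens(messages) < budget:
        return messages  # plenty of room left, keep going
    # Ask for the checkpoint summary Craig describes.
    request = messages + [{
        "role": "user",
        "content": "Summarize the most important points of this conversation.",
    }]
    summary = ask_model(request)
    # Review and clean up the summary yourself before trusting it,
    # then start a new conversation seeded with it.
    return [{
        "role": "system",
        "content": f"Context carried over from a previous session: {summary}",
    }]
```

The key design point is that the human stays in the loop: the summary is something you read and correct before it seeds the next session, which is exactly where misremembered details get caught.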

Rob

The other thing I'd recommend, Craig, is repeating the session and seeing how it treats things differently, because I've found that to happen as well. I'll have a very good session talking about things, then go back the next day and recreate it, and it will come back with a slightly different set of information and context. So realizing that this is a predictive analytics tool, not an absolute-answer tool, can help you see things from some different perspectives, even if you go through the same process of prompting and following through.

Craig

Right, this is a little bit of an art, where you do need to run some of those kinds of experiments just to start to see where it goes wrong. But if it's important, stop the conversation sooner than you think you should and carry it over into a new conversation. That's the best advice I can give on it. Other stuff, little stuff? Eh, who cares?

Rob

But you can take those summaries that are created at the end of that checkpoint and feed them back into the start of a new session to give it an idea of where it's actually at and what's going on.

Craig

Yeah, you absolutely should do that, and over time you'll figure out what you should ask it to do. You can say something simple like "summarize the most important points of this conversation," or you might say, "give me the key definitions we've determined, key insights," whatever. Again, it's a little bit of an art, but with some practice you can get there. You can't eliminate context rot, but you can start to manage it. Speaking of context rot, I feel like we went a little off the rails today. What do you think?

Rob

Maybe just a little, but that's what we get for starting with a redneck Star Trek video.

Craig

That's right. That's right. All right, so let's end with a little bit of a... not really a hot take, and not a pre-prepared bit, but are you using anything new? What are you using the most these days in terms of AI tools?

Rob

The tool I use most is Copilot, because Copilot is what my university has blessed as the tool that has FERPA compliance and all those sorts of things. Outside of that, one of the things I learned from my students is that they really like Grok. So I've been poking at Grok a little bit lately, seeing how it does things and how it's different from other tools.

Craig

Did you ask them why they like Grok?

Rob

Well, the answer they give me...

Craig

Because I suspect the real answer...

Rob

...is not what they say? You're probably right, but the answer they gave me is that they feel like it's not as constrained as the other tools, that it's not being limited in what it will talk about, refer to, and go into conversations about. So they feel like it's a more open and free tool than the others.

Craig

And that's absolutely true, and also very honest of them. Yeah, the guardrails are much broader with Grok. I've played with it and it's not bad, but the guardrails aren't anything I care about; they're not a problem for the way I use AI. I'm not saying it's bad, don't get me wrong, it's just that I don't care. So I have a question for you about Copilot, because I am not a Copilot fan. Do you use Copilot as a chatbot, or within the Office 365 apps, or both?

Rob

Both. So at WSU, most people don't have the integrated version inside the apps; it's truly just a chatbot. When I first started using it, I thought it was inferior to OpenAI and ChatGPT, but lately it has gotten a lot better. They've done some things that make it work in a much, much better way. You can even turn GPT-5 on, and that's working pretty well. So for most of what I do, I've found it quite helpful, at least from a professional perspective.

Craig

So how do you use it within an app?

Rob

Like in Outlook, I can type an email, highlight it, and hit Copilot, and it'll rewrite my email for me. In PowerPoint, I can give it an outlined document, or a journal article I've written that I want to present, and with the click of a button it will turn it into a beautiful PowerPoint slide deck for me.

Craig

Yeah, I'll have to give it another try, because that kind of thing did not work for me at all. But it's been several months.

Rob

Yeah, something happened over the summer where it got a lot better. And these are improvements they don't tell you about; they don't say, hey, there's this release that does things differently. All of a sudden it just starts acting differently, and I'm like, oh, that's new. Or, oh, it used to do this, and now it won't.

Craig

All right, well, I was curious. I started using Claude again. Claude had gotten really rate-limited, to where it just wasn't usable for me, but the last couple of weeks I've been using Claude a lot more. I would encourage listeners to play around with the different tools. One reason I brought this topic up kind of on the fly is that I wanted to ask you about Copilot. I used Copilot, didn't like it, and wrote it off. I think I was right at the time, but now, apparently, it's a lot better. So use different tools, play with them. They're going to ebb and flow in what's good and what's not so good. But don't stress out over it; don't feel like you've got to chase every new development that comes along. I'm going to foreshadow something: there are some new things coming to NotebookLM that they released and then immediately took away because they were not stable. But they'll bring them back, so we'll have to talk about that. If you haven't used NotebookLM, you should. Their video overview feature is just fabulous. All right, any last words? No dog stories?

Rob

No. My dog is sad; we're trying to make him healthy again. But I'm excited to see a new semester and to see how students are using AI and different things. And I would note that if there's something you want Craig and me to talk about, something where you're thinking, why don't they talk about this?, shoot us an email. Let us know some topics that are of interest to you. We'd love to answer those, to speak to those, and not make this so much "what do Rob and Craig think you should know," but to allow us to help you explore and uncover some things that are pressing for you.

Craig

Help us help you. There are a couple of ways you can do that. You can use the contact form at aigoestocollege.com, which, by the way, has all of our back episodes. You can also email rob.crossler@aigoestocollege.com. Did I remember that right?

Rob

Perfect.

Craig

All right. Or email me, craig@aigoestocollege.com, and we'll be glad to consider your topic for a future episode. All right, any last words?

Rob

Nope. I hope it's a great football weekend, whenever you listen to this this fall. Whoever you're rooting for, I hope they win.

Craig

All right, that's it for this episode. Remember, all things AI Goes to College are available at aigoestocollege.com. Rob and I are both happy to make a visit, do a virtual talk, whatever you might like us to do; just email us. All right, that's it. Talk to you all next time.