April 28, 2026

When You Bring AI to the Party Matters More Than Whether You Bring It


When in your thinking process should AI show up? A new study suggests the timing matters more than the access.

In this episode, Craig and Rob work through a recent CHI (Computer Human Interaction) conference paper that found a counterintuitive pattern: participants who had AI access from the start of a 30-minute task wrote weaker reports than those who got AI late or had no access at all. Same tool, same task, opposite result. The hosts connect the finding to Herbert Simon's satisficing concept and ask what it means for how faculty should teach AI use in their classrooms.

The conversation also covers entry-level hiring trends in tech (which look better than the headlines suggest), Microsoft Office Agent's strange refusal to generate slides on a textbook chapter about AI, and why Rob worries the floor in higher education is rising while the ceiling may be coming down.

What you'll hear

A four-tool slide deck experiment. Craig made the same presentation in Microsoft Office Agent, Claude Cowork, ChatGPT, and Gemini. The differences in output quality, refusal behavior, and editability are larger than the marketing suggests.

The CHI satisficing study. Researchers from Chicago and Toronto ran almost 400 participants through a civic decision-making task. With ten minutes, early-AI access helped. With thirty minutes, it hurt. The hosts unpack why and what it means for any knowledge work that requires actual thinking.

Why "good enough" is now a problem. When AI can produce a serviceable draft in seconds, the differentiator shifts to what happens after the first draft. Craig and Rob discuss why the floor is rising for entry-level work and why the ceiling may not be rising with it.

Entry-level hiring data. Recent IEEE Spectrum reporting suggests entry-level tech roles are growing in some categories, contradicting the prevailing narrative. The hosts walk through which roles and what the trend means for university programs preparing students for those jobs.

AI sycophancy in the wild. Rob shares why the tools' tendency to agree with the user's framing is more dangerous in high-stakes situations than in low-stakes ones, and what that means for how we should be using them.

Why timing matters more than access

The dominant question in higher education has been whether students should use AI. The CHI study suggests that's the wrong fight. The better question is when AI should appear in a student's thinking process.

Participants with late AI access in the study produced the same number of arguments as those without AI, but with more balanced pro-and-con reasoning. The tool became a counterweight to their own thinking rather than a substitute for it. That's a different mental model than the one most faculty (and most knowledge workers) default to, and it has practical implications for course design, assignment structure, and how we coach students to work with these tools.

Episode highlights

  • (approx. 5:41) Craig on the four-tool slide experiment: "A lot of times with AI, 50% is better than 100%, because you can get the 50% really quickly."
  • (approx. 17:10) Craig on what entry-level workers need now: "The solution to helping our students get jobs is to show them how to lean into their humanity."
  • (approx. 19:55) Rob on the floor-and-ceiling tension: "The floor is going up, but the fear is that our ceiling is coming down."
  • (approx. 31:15) Craig on the satisficing finding: "It's not literally where you stopped — it's where your engagement stopped."
  • (approx. 37:15) Rob's end-of-semester challenge to faculty: "Pick one thing. One thing that you're going to engage with over the summer."

Links and references

  • Computer Human Interaction (CHI) conference paper on AI access timing and decision quality (researchers from Chicago and Toronto): https://dl.acm.org/doi/pdf/10.1145/3772318.3791796
  • IEEE Spectrum reporting on entry-level technology hiring trends: https://spectrum.ieee.org/ai-effect-entry-level-jobs
  • Herbert Simon's concept of satisficing (1956)
  • Microsoft Office Agent, Claude Cowork (Design feature), ChatGPT, Gemini

For faculty: questions worth sitting with

  • Where in your course design does AI currently show up, and would your students be better off if it appeared later in their process?
  • How would you redesign one assignment so that students engage with the problem cold before the AI shows up?
  • What does excellence look like in your discipline now that "good enough" is trivially achievable? How will you recognize it, and how will you teach students to reach for it?

About the show

AI Goes to College is a podcast for higher education professionals trying to make sense of artificial intelligence in their classrooms, their research, and their institutions. Co-hosted by Craig Van Slyke and Rob Crossler, the show focuses on practical, evidence-based perspectives on AI in higher education without the hype.

Subscribe and listen: https://www.aigoestocollege.com/

Newsletter: https://aigoestocollege.substack.com/

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:42 - Introduction to Generative AI in Higher Education

01:14 - Updates on AI Tools and Their Implications

10:09 - Transitioning to AI Tools in Education

15:31 - The Future of Entry-Level Jobs in the Age of AI

23:08 - Exploring the Impact of AI on Cognition and Learning

34:09 - Reflecting on Teaching Practices

Transcript
Speaker A

Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out just what in the world is going on with generative AI.

Speaker A

I am joined once again by my friend, colleague and co host, Dr. Robert E. Crossler of Washington State University.

Speaker A

Rob, how are things out west?

Speaker B

Craig, the semester is wrapping up and it will be a beautiful summer before we know it.

Speaker A

I am heading to Flagstaff in the morning to give a keynote at the Northern Arizona University Nonprofit Leadership Conference, which will be fun.

Speaker A

All right, well, let's get to it.

Speaker A

I want to start out by talking about a number of updates with AI tools, at least updates in how I've been using them.

Speaker A

And so I'm going to go through a lot.

Speaker A

Love to hear your reactions.

Speaker A

The first one was Microsoft Office Agent, which as nearly as we can tell is doing something with Claude, maybe Cowork on the back end.

Speaker A

It's a little mysterious at the moment.

Speaker A

Yeah.

Speaker B

What I like about that, Craig, is as Microsoft starts bringing some of these other tools in house, right when they started with Copilot, it was very much a wrapper that you got access to ChatGPT and their different models.

Speaker B

And now, whether it's through Researcher, where you can get to Claude's Deep Reasoning, or this frontier model that gives you access to Cowork, it addresses the problem of governance and the question of which tool an organization or a university authorizes people to use.

Speaker B

Microsoft is trying to position themselves so these things become useful and not a series of one off arrangements with a whole bunch of different vendors.

Speaker B

So it hasn't been perfect.

Speaker B

I hear there's some friction that people aren't loving, but it's moving in a good direction as we rapidly see these AI tools change and become available to do different things.

Speaker A

Yeah, and there's a lot of benefit to having that kind of aggregator wrapper, whatever you want to call it, especially when it comes to things like FERPA and HIPAA, the EU's various regulations.

Speaker A

But what Agent is, is a sort of autonomous agent that will just go out and do stuff for you based on a prompt.

Speaker A

And it's limited.

Speaker A

Right now I'm using it on my personal Office 365 subscription.

Speaker A

It's listed as a frontier tool and right now I think I get 15 tasks a month and 50 conversations.

Speaker A

So I'm not even sure what the difference is there; maybe it's in fine-tuning the output.

Speaker A

But what I did is I wanted it to create slides for our textbook, Information Systems for Business: An Experiential Approach (Prospect Press).

Speaker A

It's coming out with Edition 5.1, which is basically adding a chapter on AI and we need to do slides.

Speaker A

So I thought, well, how are these tools going to do with the slides?

Speaker A

And it was a good opportunity to try Agent.

Speaker A

It went from poor to okay. So, the first problem: it created nice slides.

Speaker A

Then it went outside of our chapter.

Speaker A

I gave it our chapter and gave it our chapter's artwork, screenshots, that kind of thing, and then gave it just some instructions and it went outside of the material I gave it, which sometimes is good.

Speaker A

In this case, it was bad.

Speaker A

I intentionally tried a really simple prompt.

Speaker A

So I didn't tell it, only use the chapter.

Speaker A

I wanted to see what it would do and that was okay.

Speaker A

So I went in and I said, now just use the chapter that I gave you.

Speaker A

And this is where it got kind of funny.

Speaker A

I don't know whether Agent got salty on me because I pushed back, but it refused to create slides.

Speaker A

Right at the end.

Speaker A

It would go through the whole thing, get ready to produce the slides.

Speaker A

I could see it going through and producing individual slides.

Speaker A

And then it would say, I can't do this because I can't create harmful material.

Speaker A

And there is nothing, there is zero, that could be seen as harmful in that chapter.

Speaker B

Unless it's a meta response where AI is saying AI itself is harmful.

Speaker A

Yeah.

Speaker A

Maybe it's thought it was doing something it shouldn't do because we had this proprietary information.

Speaker A

I don't know what was going on, but I pushed back a couple of times and finally it created the slides.

Speaker A

And you saw them.

Speaker A

I thought they were fine.

Speaker A

Nothing spectacular about them, but more than serviceable.

Speaker B

Yeah.

Speaker B

And I think, Craig, that's an important thing that I'm seeing with a lot of AI tools that I use, at least in the work that I do, creating documents, creating slides and whatnot is I have yet to find where I get perfect output, where I just say, boom, go.

Speaker B

Even if it's writing them an email or simple tasks of making paragraphs sound better.

Speaker B

I always have to review, I always have to update, but I find it increases the speed to completion.

Speaker B

It's not a replacement for my ability to engage, but it can make me way more productive and get more done in a shorter amount of time.

Speaker A

Yeah.

Speaker A

It's what we've been saying all along, that a lot of times with AI, 50% is better than 100%, because you can get the 50% really quickly, and it can take way more time to get to 100% than it would have to just do it yourself.

Speaker A

So one more interesting thing about the Microsoft Office Agent is even after the explicit instructions to only draw from the chapter, it completely made up a quote that wasn't in the chapter and I don't think it existed.

Speaker A

So it's not bad.

Speaker A

It was not as good as Claude Cowork, but it's not bad.

Speaker A

So I would keep an eye on it.

Speaker A

If you have a personal subscription to Office 365, go in, activate the Frontier Experimental or Labs or whatever they call it in the Microsoft world.

Speaker A

Try it out on something that's low risk, but I'd give it a try.

Speaker A

So speaking of slides, we ran kind of a big test.

Speaker A

As long as we had to do these textbook slides.

Speaker A

I'm trying to remember.

Speaker A

I ran it in Microsoft Office Agent, Claude Cowork, Gemini... and I feel like I'm leaving one out.

Speaker A

Oh, ChatGPT.

Speaker A

And they were very different.

Speaker A

The one that was really good, I thought, was the new Claude design; it put out the best-looking slides.

Speaker A

Claude went nuts, created 66 slides.

Speaker A

I told it specifically, if you're not sure whether you should include a slide, include it; I'd rather have more slides than I need and cut back than go the other direction.

Speaker A

The others were kind of.

Speaker A

I liked the content of Claude the best, but they were all kind of okay.

Speaker A

They all would have saved a bunch of time.

Speaker A

The big thing is, Rob, back to your point, none of these are perfect, but all I did was put the prompt in and go do other things and come back and I had a slide deck that was more than serviceable.

Speaker B

Yeah.

Speaker B

And Craig, I think this is a great point to talk to any students maybe that are listening to this podcast because I've been seeing this a lot with student slide decks that they create really nice looking slide decks.

Speaker B

You can tell that they're better than they were pre AI in how they're being designed.

Speaker B

But there are oftentimes things about the slides that, if they had gone in and taken a critical eye to them, would have made them better.

Speaker B

So just because something's created that looks really nice, do take the time to say, do I really need that many words on that slide or is this font the right size?

Speaker B

Is the audience going to be able to read what's on there?

Speaker B

Because I even saw that in some of the slides we created that there was just some stuff that the slide was basically there, but it needed some of that human touch to make it a better communication tool to do what we needed to do.

Speaker A

Yeah, it's bad at making the text too small and at trying to jam too much text on a slide.

Speaker A

I mean, you can control a little bit of that with prompting.

Speaker A

But even with the Claude design, which I actually used for my keynote that I'm giving on Thursday, there were some things that just didn't make any sense.

Speaker A

I've got one slide that's about the slow drip of messaging.

Speaker A

You know that it builds up over time.

Speaker A

You've got to give it time.

Speaker A

You can all understand the metaphor.

Speaker A

And it gave me some weird bar chart.

Speaker A

And it's okay.

Speaker A

I kind of get what you're going with here, but it really makes no sense.

Speaker A

And if your visualization makes people think too hard in ways you don't want them to think, then it's a bad visualization.

Speaker A

So I had to replace that, which was no big deal.

Speaker A

But what would have taken me probably a couple of hours took me maybe 30, 45 minutes and could have been done more quickly than that.

Speaker A

I really like to fine tune slides, but they would have been okay in probably half an hour's worth of work at the most.

Speaker A

So if you're not using AI for that kind of thing, start using AI for that kind of thing.

Speaker A

We're going to be going into summer prep for fall mode before too long.

Speaker A

And things like Cowork in particular (I'm a huge fan of Cowork) can be a big help with this.

Speaker A

In fact, we're working on the instructor's manual for this new chapter.

Speaker A

And that involves giving the answers to the questions at the end of the chapter and that sort of thing.

Speaker A

So I thought, well, I'm going to try Cowork on this.

Speaker A

And just a quick glance.

Speaker A

I think it's going to take hours of work and cut it down to minutes of work because all we have to do is scan to make sure everything's right.

Speaker A

And since we know the material so well, that's not a big ask.

Speaker A

So AI did the first 50%, 75%, whatever number, and saved a huge amount of time.

Speaker B

Yeah, I do that for a lot of things.

Speaker B

Right.

Speaker B

Creating long, monotonous documents that just are well defined.

Speaker B

AI is a great tool when you've got pretty absolute guidance on where you're going with it.

Speaker B

I'm about ready to give a speech on Thursday.

Speaker B

Five minute little intro to an event we're doing in our college.

Speaker B

And I've been using Copilot to help me craft it and to get it into my own words.

Speaker B

And it saves so much time.

Speaker B

I don't know why it takes so long to write a five minute speech.

Speaker B

It gets stuck in my head of do I want to go this direction or that direction?

Speaker B

And it's just really helpful in figuring those things out.

Speaker B

So use the tools.

Speaker B

I think they're good for any of these sorts of things.

Speaker A

Yep.

Speaker A

Could be harder to do a five minute than a 30 minute talk.

Speaker A

So two more things on Claude.

Speaker A

And I do not get paid by Anthropic.

Speaker A

I don't get a discount on anything.

Speaker A

But I'm spending most of my time in Claude now.

Speaker A

So I mentioned the design.

Speaker A

There's also something that's been around for a while, but I don't think many people are using called skills.

Speaker A

So I've created a number of skills.

Speaker A

Like I've got one for reviewing journal articles, not for doing reviews of other papers, but doing pre reviews of my own work.

Speaker A

I want to be clear about that.

Speaker A

I've got several that mimic my writing style.

Speaker A

I have at least three different writing styles depending upon what I'm working on, but doesn't get it all the way there.

Speaker A

But there aren't em dashes, there aren't those triplets, there aren't...

Speaker A

I'll be honest here because I've been lying up to now.

Speaker A

All that stuff is out and it really.

Speaker A

Unless somebody really, really knew my writing style, it would pass muster.

Speaker A

Nobody would think a thing about it.

Speaker A

Look, get a Claude subscription, the 20, 25 bucks a month, whatever it is, and start playing around with the skills.

Speaker A

Claude will build the skills for you.

Speaker A

It literally has a skill builder skill which is meta and I love that.

Speaker A

But there are all kinds of uses for these things.

Speaker A

I built a copy editor skill that does a really good job of going through and doing a copy edit pass.

Speaker B

Well, the thing about Claude too, Craig, and I think the marketplace is telling us this, is it is becoming the number one choice of people paying for subscriptions and using generative AI tools.

Speaker B

So they are doing something right, right now.

Speaker B

Is that going to be the forever solution?

Speaker B

I don't know.

Speaker B

It seems like every three to six months we're talking about somebody else who's taking the lead of doing something.

Speaker B

But right now it feels like Claude is leading the way.

Speaker A

Yeah, I don't know if you had a chance to look at the little chart I put into our notes.

Speaker A

It comes from the Neuron, which is one of the largest AI newsletters out there.

Speaker A

It's really good.

Speaker A

So I read it every day.

Speaker A

They did a poll of their users.

Speaker A

Now, they've got tons of users, and they had about 2,300 people that responded to this survey. Claude had twice the use of ChatGPT; Claude was just shy of 1,500.

Speaker A

ChatGPT looks like about... oh, 790.

Speaker A

Gemini is a little below 500.

Speaker A

And then everything else is small.

Speaker A

But that's pretty surprising to me.

Speaker A

If you're not using Claude, you need to check it out, pay the money.

Speaker A

Look, we're privileged, we have decent incomes.

Speaker A

But 20 bucks, it's not that much money if it can save you a ton of time.

Speaker A

One more Claude shill moment and then we'll move on.

Speaker A

So Rob, I've been sending you my daily AI briefings that were created by Claude Routines, which they just released.

Speaker A

They're pretty good.

Speaker B

They're very good.

Speaker A

Yeah.

Speaker A

I mean, it's basically: I've got this task, it goes out at 9 o'clock every morning if my computer's awake, does a scan of the Internet, and writes kind of a paragraph about things that we ought to be taking a look at.

Speaker A

And it's impressive, it's really impressive for something that's largely hands off.

Speaker B

And what I love about what we created, and I think this is where the real opportunity is, is in the past we relied on some other aggregator to say these are the five articles that you should read this week.

Speaker B

And it was based on somebody else that maybe you're aligned with in your thinking to put those things in place, but with the ability to do it yourself, you can customize basically what's important to your workflows and to your thinking and what you're dealing with in your day to day based on just very unique things about you.

Speaker B

And it does it automatically and puts those right in your inbox.

Speaker A

Let's move on to entry-level jobs.

Speaker A

Rob, there was a pretty interesting article out of IEEE Spectrum about entry level jobs.

Speaker B

And this is one of those big topics that a lot of people wrestle with right now is what does entry level work look like in the world of AI.

Speaker B

We've seen a lot of decreases in hiring of students coming straight out of very technical programs over the last few years.

Speaker B

And so a lot of angst about what degrees people should be pursuing and what those degree programs look like.

Speaker B

And so IEEE put out a paper where they're seeing that they expect these entry level jobs to increase, but not exactly the same students with the same skill sets that they were hiring in 2023.

Speaker B

So the roles demand higher order thinking where there is more of that human nature, that human critical thinking components baked into what they're doing in using those technical skills.

Speaker B

So it's pushing the boundary of what's possible or expected from students beyond just the ability to code and to do what AI is beginning to do a really good job of.

Speaker A

Yeah, exactly.

Speaker A

And for those of you who don't know what IEEE is, it's the world's largest technical professional organization.

Speaker A

It's the Institute of Electrical and Electronics Engineers.

Speaker A

So it's engineering and Computer Science.

Speaker A

This is a legit organization.

Speaker A

When I read through this, we were trying to make sense out of all of this.

Speaker A

A couple of interesting things here.

Speaker A

One is that employment for programmers fell a lot.

Speaker A

Software developers, which has more of a design piece to it than just writing the code, basically was flat.

Speaker A

And then information security analysts, AI engineers, they have double digit growth in entry level hiring.

Speaker A

So if we look at all of that, one of the things it says to me is that in kind of a weird, not paradox, but something like a paradox is the solution to helping our students get jobs is to show them how to lean into their humanity.

Speaker A

And so it's not that AI tools can't critically think, it's that they critically think in a certain way.

Speaker A

And we'll see that here in a few minutes.

Speaker A

I think it doesn't have knowledge of all of the context that humans have, no matter how big the context window.

Speaker A

And so I think we need to help our students, particularly those who are in more technical fields, learn how to leverage being human in the best way of being human, not the irritating way.

Speaker B

Yeah.

Speaker B

And I think one of the things that AI does very well is it comes up with answers and answers that generally work and that it's confident in, and especially for early stage people in learning, it's the good enough solution.

Speaker B

And I think where the human aspect comes in is knowing the problem set well enough and critically evaluating that well enough that you can push that further.

Speaker B

And whether that's through programming skills or continued prompting conversations, it's deciding that you're going to push the envelope in what's being created in ways where in prior times you might work pretty hard to get to that good enough solution.

Speaker B

Now that's not adding much value to the equation.

Speaker A

That's right.

Speaker A

Anybody can get the good enough solution with a moderate amount of skill.

Speaker B

Well, as educators, it used to be we'd be excited when students gave you that good enough solution because it demonstrated a struggle that they had to go through in order to get to that point to cognitively develop those skills and those abilities to do that thing.

Speaker B

And so it really does put the onus on instructors for how do you approach education in a way that success is by getting creative and going beyond what's created in two minutes by a machine.

Speaker A

Yeah.

Speaker A

And if you can't do that, then they don't really need you.

Speaker A

But I think that's going to be a subtle shift in kind of where the floor is in higher ed.

Speaker A

That floor has got to go up.

Speaker A

And so we've got to help move our students up that stack to things that are more human.

Speaker B

Yeah.

Speaker A

More ambiguous.

Speaker A

Yeah.

Speaker B

I had a conversation the other day with the peer who was saying that the floor is going up, but their fear is that our ceiling is coming down because we haven't found those ways to push our students through the ceiling.

Speaker B

And I wrestle with that thought a little bit.

Speaker B

How can we raise everybody by raising that floor but push many beyond where they currently were?

Speaker B

Are we going to find those creative ways of designing class projects and class engagement that really does say, wow, that college degree is moving people way beyond where we ever thought possible.

Speaker A

I get what you're saying, and I do not disagree.

Speaker A

But that's always been what we should be doing, is setting the conditions that allow students.

Speaker A

It's such a trite thing, but to live up to their potential.

Speaker A

It really is.

Speaker A

I think we need to move a little bit away from what activities do we do to help them push through that ceiling to how do we create the conditions that encourage them and allow them to push through that ceiling?

Speaker A

But I don't know what that is at the moment.

Speaker A

But I think that there's a mindset shift that needs to occur, where we don't try to fight the battle of being very detailed and particular in how we do some of these things, because as AI gets better and better, what those details need to be is going to change.

Speaker A

And if we start thinking about how to set the conditions, we might be better off.

Speaker A

But again, I don't know what that looks like at this point.

Speaker A

I'm not sure anybody else does either.

Speaker B

Well, it sounds like what we're wrestling with as instructors is a world of ambiguity.

Speaker B

The same way we want our students to struggle through ambiguous problem solving, we're all struggling through that ambiguity and trusting that with the right guardrails, the right direction, that the outcome is going to be positive.

Speaker B

And that's hard sometimes because you don't know the absolute ways that this all should go.

Speaker A

Yeah, it's not going to be easy, but I think there's a lot of potential to really make things better in the long run, because a lot of the criticisms of higher ed and our students and the way we prepare them are not new.

Speaker A

They've just been magnified by AI.

Speaker A

So one more quick thing on this and then we'll move on to the next article, which is very interesting.

Speaker A

The ability to use AI and apply AI to practical problems is now almost table stakes, where that's just an expectation.

Speaker A

And I think employers are going to be surprised.

Speaker A

That's not really going to happen for them.

Speaker A

And it's a real big signal for higher ed that we need to come up to speed on helping students be able to meet those table stakes.

Speaker A

And God, that was very ineloquent.

Speaker B

That's what I expect from you, Craig.

Speaker A

That's right.

Speaker A

At least I'm consistent.

Speaker A

So employers are not going to find masses of students graduating that know how to use these tools effectively yet.

Speaker A

We need to start working on that as higher ed.

Speaker B

I think where this gets interesting, though, Craig, is as students are in our classes, as we're trying to build them up, we're starting to see research around how cognition is developed and what impact AI is having on the development of that cognition.

Speaker B

And I think, you know, again, it comes back to the humanity, the humanity, the human, the human nature of students.

Speaker A

Oh, who's ineloquent?

Speaker B

I know I learned from the master.

Speaker B

But it is, you know, to lean into that critical thinking and perseverance when AI can make things easy.

Speaker B

Because big problems that you're going to face in organizations aren't going to have some of the slick and easy solutions that our current approaches pre AI, may have had from textbooks and different things like that.

Speaker B

So how do we scaffold, if you will, in such a way to where the students might be interacting with AI and how they're doing things, but at the same time they're developing that cognition, that confidence that they know how to do things even if we took AI away, that their thinking wouldn't be hampered by that, that we weren't all of a sudden hamstringing their ability to perform, which I think is going to require some experimental approaches to how we do things in the classroom.

Speaker B

I don't think there's a one size fits all recipe for how we ensure that that cognition is developed.

Speaker A

We're going to have to be willing to have some failed experiments, without a doubt.

Speaker A

But the article you sent, Rob, used the analogy of the boiling frog, which I have no idea if it's really true, but that's the one where if you try to throw a frog in boiling water, it'll jump out.

Speaker A

If you start with warm water and crank up the heat, it'll just let itself get cooked.

Speaker A

Which I don't want to perform an experiment that validates that empirically.

Speaker A

But I think there's a real danger there that goes beyond over reliance.

Speaker A

I think it starts to rewire how we think about things.

Speaker A

And that's especially dangerous for the obvious reasons, but not only the obvious reasons.

Speaker A

It's because AI has some really weird ways of "thinking," if I put thinking in quotes.

Speaker A

For example, we had a dog that we thought was dying over the weekend.

Speaker A

And so I started plugging things into Gemini and it turned out the dog was fine.

Speaker A

It was just.

Speaker A

It had some storm medication.

Speaker A

It's getting older and it just hit it a lot harder than it normally did.

Speaker A

And going back and looking at it, Gemini was feeding into what I was saying.

Speaker A

So it wasn't quite confirming; it would push back, but it felt like it was picking up on my assumptions about what was going on.

Speaker A

Because this was classic.

Speaker A

This dog is at the end of life.

Speaker A

She's 13.

Speaker A

Suddenly her hind end wasn't working.

Speaker A

You know, she was just lying down with that slow breathing.

Speaker A

But it turned out she was just stoned.

Speaker A

That's what it amounted to.

Speaker A

But Gemini was really feeding into what I was thinking and my anxiety over it.

Speaker A

So this is going to be tough.

Speaker A

But we've been saying this for a while.

Speaker A

It's time to get after it.

Speaker A

We can't put this off much longer.

Speaker A

All right, so let's move on to our third topic.

Speaker A

This is an article from the Computer Human Interaction conference, which is a huge Association for Computing Machinery conference.

Speaker A

I mean, it's huge, huge, huge, huge.

Speaker A

This just came out I think a week ago today about AI and critical thinking.

Speaker A

But this is not just another AI and critical thinking article.

Speaker A

The authors did something really clever here.

Speaker A

It's three authors from the University of Chicago and the University of Toronto.

Speaker A

So it's no surprise that it's a well-written paper.

Speaker A

They did an experiment with almost 400 participants in a realistic civic decision-making scenario, where they acted like they were city council members deciding whether or not to accept a proposal on water contamination cleanup.

Speaker A

And so they chose that thinking that almost nobody that's in the experiment would know anything about that world.

Speaker A

And then they ran them through it. They gave them some documents that varied in quality and asked them some questions about this proposal.

Speaker A

And what was really interesting is under different time pressure conditions, using AI at different stages in this process led to different outcomes.

Speaker A

So I know, Rob, I sent this to you really late.

Speaker A

I don't know if you had a chance to scan the article, but what did you think?

Speaker B

I thought it was interesting because it isn't the typical "is AI better than humans or are humans better than AI" sort of thing that you see in the press. It put some conditions in place that allowed us to see when AI is helpful and when it's not as helpful.

Speaker B

I really like seeing how they attempted to help set up those boundary conditions for knowing when to use these tools and how they can be the most.

Speaker A

Helpful. Not just what tasks to use them for, but when, in terms of timing during the performance of that task, you should use it.

Speaker A

So they gave participants either 10 minutes or 30 minutes to come up with their evaluation.

Speaker A

When there was only 10 minutes, if they used AI from the start, those groups outperformed the groups that started their AI use later or had no access at all.

Speaker A

But when participants had a half an hour, that pattern completely reversed.

Speaker A

If they didn't have access or if they had late access, they wrote stronger reports than those who had the AI early on or throughout the process.

Speaker A

And there's a lot of inference that goes on when you're reading a paper like this.

Speaker A

But one thing I'm thinking is that if you had AI from the start, you got to a good enough solution.

Speaker A

It's Herbert Simon's satisficing idea: good enough.

Speaker A

It's not literally where you stopped, but it's kind of where your engagement stopped.

Speaker B

I see that a lot, even when I'm using the AI tools.

Speaker B

Right.

Speaker B

You kind of anchor to where this feels like it's the right answer.

Speaker B

And why do I need to put a whole lot more effort into it?

Speaker B

So seeing some level of evidence of that in this study confirmed something that maybe I hadn't put an exact name on, but that I've definitely experienced myself.

Speaker A

Something to think about.

Speaker A

If you've got a really important task, like I'm working on a paper and it's got an interesting story, but we have to be really careful that we stick to that story.

Speaker A

It's one of these where we have a lot of results that kind of are what we expected.

Speaker A

But there's one thing that kind of came out from a post hoc analysis that it's like, oh, this is the paper.

Speaker A

And I don't want to go back and rewrite the front end because that's not really entirely honest.

Speaker A

So I want to kind of show the story as it unfolded.

Speaker A

But the conclusion to that is going to be critical to the paper.

Speaker A

A lot of times when I write a paper, I just want to get the conclusion done.

Speaker A

A lot of times the reviewers probably aren't even going to read it.

Speaker A

But this one, this is a critical piece of the paper.

Speaker A

So I'm trying to work on it first.

Speaker A

Think about how I want that conclusion to go, what I want it to communicate.

Speaker A

Even if I don't sit down and write it, that goes into my prompt or I write it and then tell Claude, you know, where are the holes in this?

Speaker A

What am I missing?

Speaker A

Am I communicating this story?

Speaker A

So I think that's a huge message.

Speaker A

If something's really important, you've got time, think about it.

Speaker A

At least give it a lot of thought first before you turn to AI.

Speaker A

Yeah.

Speaker B

And I think what you're suggesting is that late access to the AI is actually a great approach, especially with something that's important and critical.

Speaker B

And when I translate that sort of thinking into what's going on in the classroom, and I've seen some faculty in my hallways doing this: they'll have students attack a problem and very purposely say, for right now, we are not going to use AI at all.

Speaker B

And then they bring AI in later into the game and are able to demonstrate through some activities the role that AI can play after you spent that time wrestling with a problem set or a solution.

Speaker B

So you're actually bringing in that unique human perspective to solving that problem and not just saying the machine's going to have it.

Speaker B

Because we may actually pick up on something important that might otherwise be ignored, and that might be where the competitive advantage is in solving that problem.

Speaker A

Right, Right.

Speaker A

And you don't get that lock-in. If you see a solution, whether it comes from AI or somebody else, we might tend to lock in on it. Which suggests a really interesting study: replicate this, but instead of AI access, you give them access to another human.

Speaker A

Because I think you would see some of the same patterns.

Speaker A

But I don't know that, of course.

Speaker A

It's just what I'm thinking.

Speaker A

There was another interesting result buried in there.

Speaker A

This was in the 30-minute condition, with sufficient time.

Speaker A

Participants with late AI access produced the same number of arguments as those with no AI access at all.

Speaker A

But they gave a more balanced view of the pro and con reasoning.

Speaker A

So they didn't lock into their own view.

Speaker A

So AI kind of provided counterbalance or another perspective, which is a huge benefit.

Speaker A

So this is absolutely not a don't use AI article.

Speaker A

It's use it at the right point in time.

Speaker A

It's a really good article.

Speaker A

I haven't said it yet, but there'll be links in the show notes to all of this stuff.

Speaker A

So anything else on that article?

Speaker B

I would encourage people to read these things and start putting the filter on that says, what does this mean to me?

Speaker B

And where I interact with AI in educating others.

Speaker B

How does this change my thinking?

Speaker B

I would caution people against picking one article as the end-all, be-all that they anchor on for how they approach these things.

Speaker B

Because I think there are a lot of different directions and a lot of different ways that we can be purposeful and engaging.

Speaker B

And the more that you process and read on that, I think the better prepared you're going to be.

Speaker A

Right.

Speaker A

Right.

Speaker A

This is just one more piece of evidence, in what I thought was a reasonably well-done article.

Speaker A

It's really pretty good.

Speaker A

But it is not, as you said, it is not the end of the story.

Speaker A

It's just one more piece of the puzzle.

Speaker A

All right, Rob, anything else?

Speaker B

I'm going to end today with an encouraging word for people as they approach the end of the semester, assuming you're on a similar timeframe to ours as we're recording this podcast.

Speaker B

And that is as you step into reflecting on what you're doing heading into next fall.

Speaker B

If nothing else, pick one thing, one thing that you're going to engage with over the summer months to make your classroom more equipped to meet students where they're at and to prepare them for what's next, whether that's a job in the workplace or graduate school.

Speaker B

But just pick one thing.

Speaker B

Don't feel overwhelmed, as if it has to be throwing out everything and starting from scratch.

Speaker B

Find one thing and that one thing will make your class better.

Speaker A

And regardless of what that one thing is, one thing that will help you with that is subscribing, liking, following whatever your podcast app uses.

Speaker A

Following AI Goes to College.

Speaker A

Everything at AI Goes to College is at aigoestocollege.com. The Substack newsletter is linked there, the podcast is linked there.

Speaker A

And we'd love to know what you're planning to do and maybe what some of your questions are about what your one thing should be over the summer.

Speaker A

So you can reach us at Rob Crossler, that's C-R-O-S-S-L-E-R, at aigoestocollege.com, or craig@aigoestocollege.com. All right, anything else, Rob?

Speaker B

No, I'm good.

Speaker A

All right, thank you very much.

Speaker A

That's it for this time.

Speaker A

See you next time.

Speaker A

Thank you.