Feb. 16, 2026

Students Are Confused About AI and It's Our Fault (with Dr. Bette Ludwig)


Dr. Bette Ludwig spent 20 years in higher ed working directly with students before leaving to build something different — a Substack (AI Can Do That), a consulting practice, and most recently, the Socratic AI Method, an AI literacy program that teaches students how to think critically alongside AI while keeping their own voice intact.

That last part is the hard part.

Craig opens with the question that drives the whole episode: Socratic dialogue requires you to already know enough to ask good questions. So what happens when a student doesn’t know enough to push back on what AI is telling them? Bette’s answer is both practical and unsettling — younger students literally don’t know what they don’t know, and that gap is where the real danger lives.

The conversation moves into dependency territory when Craig shares a moment from his own morning: Claude froze while he was editing a manuscript, and he felt a flash of genuine panic. Two seconds later, he remembered he could just… write. But he names the uncomfortable truth — his students won’t have that fallback. Bette compares it to the panic we feel when the wifi drops, which is both funny and a little alarming when you sit with it.

From there, the three dig into the policy mess — teachers across the hall from each other running opposite AI rules, students confused about what’s allowed, and educational systems moving at what Bette calls “a glacial pace” while the technology sprints ahead. Craig shares his own college’s approach: you have to have a policy, it has to be clear, but how restrictive or permissive it is remains your call. The non-negotiable? You can’t leave students in the dark.

The episode’s most surprising thread might be Bette’s observations about how students actually use AI. It’s not just homework. They’re using it for companionship, personal problems, cooking questions, building apps — ways that don’t even register as “AI use” to most faculty. Her closing point lands hard: students have never used technology the way adults assume they should, and they’re going to do the same thing with AI.

Key Takeaways


1. The Socratic method has an AI prerequisite problem. You need existing knowledge to know what questions to ask, which means younger students are especially vulnerable to accepting AI output uncritically. Bette and Craig agree that junior/senior year of high school is roughly where the cognitive capacity for meaningful pushback begins.

2. AI dependency is already happening to experienced users. Craig describes a two-second panic when Claude froze mid-editorial. He recovered by remembering he could just write the way he always has. His concern: students who grew up with AI won’t have that muscle memory to fall back on.

3. The “helpful by default” design is a subtle problem. Craig raises the point that AI systems are programmed to be agreeable, which means they can lock students into a single mode of thinking without anyone noticing. The hallucinations get all the attention, but the quiet steering might be worse.

4. Policy chaos is the norm, not the exception. Teachers in the same hallway can have opposite AI rules. Bette recommends clarity above all: whatever your policy is, make it explicit. In K–12, she argues for uniform policies. In higher ed, where faculty governance complicates things, Craig’s approach works — require a policy, let faculty own the specifics.

5. Grace matters more than enforcement right now. Both Craig and Bette push back on the “AI cop” mentality. Students sometimes cross lines they didn’t know existed, just like past generations plagiarized without understanding citation rules. Teaching moments beat punitive responses, especially when the rules themselves are still being written.

6. Students use AI in ways faculty don’t expect. Companionship, personal problems, everyday questions, building apps. Bette’s observation: students are as likely to use AI for roommate conflicts as for essay writing. Faculty who don’t use AI themselves can’t begin to understand these patterns.

7. Education isn’t moving fast enough. New York got an AI bachelor’s program launched in fall 2025, which Bette calls “Mach speed for higher ed.” Most institutions are still in the resistance-or-denial phase. The shared worry: AI across the curriculum could become another empty checkbox, like ethics across the curriculum before it.

Links

Dr. Ludwig's website: https://www.betteludwig.com/

AI Can Do That Substack: https://betteconnects.substack.com/

AI Goes to College: https://www.aigoestocollege.com/

Craig's AI Goes to College Substack: https://aigoestocollege.substack.com/

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:00 - Untitled

00:41 - Untitled

00:58 - Introduction to Dr. Bette Ludwig

02:42 - The Challenges of AI in Education

14:22 - The Future of Education in the Age of AI

22:07 - Navigating Educational Policies in the Age of AI

31:45 - The Impact of AI on Student Learning

Transcript
Speaker A

Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out what in the world is going on with generative AI.

Speaker A

And as always, I am joined by my friend, colleague, and co-host, Dr. Robert E. Crossler of Washington State University.

Speaker A

And today we are joined by a special guest, Dr. Bette Ludwig.

Speaker A

She's been in higher ed, or was in higher ed, for over 20 years, serving in a variety of student-facing positions.

Speaker A

In 2022, she left higher ed to strike out on her own as a writer and consultant focused on helping students and parents make smarter academic and career choices, which is something we all need.

Speaker A

Since leaving higher ed behind, she's written over 450 articles.

Speaker A

Rob, have you written over 450 articles?

Speaker A

I haven't.

Speaker B

I've thought up 450 articles.

Speaker B

I haven't written them.

Speaker A

That's right.

Speaker A

And these articles are on education, leadership, strategic thinking, and of course, AI.

Speaker A

Bette and I connected through her fantastic Substack, AI Can Do That, which helps educators, parents, and otherwise busy people stay informed about the rapid developments in AI and what they mean for education.

Speaker A

I was really intrigued by her latest effort, the Socratic AI Method, which is a comprehensive AI literacy program that helps students learn how to think critically with AI while maintaining their own voice.

Speaker A

Bette likes to combine her academic training with hands-on expertise to bridge the gap between rapidly emerging technology and the needs of students and families.

Speaker A

She holds a Ph.D. in educational leadership from Western Michigan University, an M.S. in counseling from Westchester University, and a B.A. in psychology from Michigan State.

Speaker A

And we'll put a link to Bette's homepage, betteludwig.com, in the show notes.

Speaker A

That's where you want to go for all things Bette.

Speaker A

Bette, welcome.

Speaker A

Did I get all of that right?

Speaker A

It was a lot.

Speaker C

You got all of that right.

Speaker A

I'm kind of out of breath.

Speaker A

Kind of out of breath.

Speaker A

Let's start off with something that Rob and I have been wrestling with.

Speaker A

And that's the difference between offloading thinking to AI, outsourcing it, and what we like to call genuine co-thinking.

Speaker A

We call it co-produced cognition.

Speaker A

Your Socratic AI Method seemed to land squarely on that co-thinking end of things.

Speaker A

But here's what I struggle with.

Speaker A

So Socratic dialogue requires a certain level of existing knowledge to even know what questions to ask.

Speaker A

How does this work with a student who doesn't know enough about what to push back on when they're engaging with AI?

Speaker C

That's a good question.

Speaker C

And you're absolutely right.

Speaker C

And that's where it gets tougher for students.

Speaker C

And I think that's where teaching them comes in: how to use this, how to push back, and how not to get too dependent on using it for your writing. It's okay to think through things with it.

Speaker C

I use it a lot for that brainstorming.

Speaker C

That's one of the things that I really push with the students is that you cannot become completely reliant on this and depend on it for everything.

Speaker C

If you want to brainstorm with it, if you want to use it to help you study for tests, if you want to use it to critique your writing.

Speaker C

But you can't let it do everything for you because it does get easy to fall into that.

Speaker C

One of the things that I really focus on is explaining to them it hallucinates.

Speaker C

You can't trust the output like it's factual.

Speaker C

You have to go back and check everything that it gives you because it will give you things very confidently that are not accurate.

Speaker C

So one of the things that we have to really instill in them is that you have to double check it.

Speaker C

If you're just having a back and forth conversation about things and reflecting on your thoughts, that's a little different.

Speaker C

But if you're getting things that it's saying this actually happened, you can't just accept that.

Speaker C

Like, I wrote an article on the Pope recently speaking out about AI and I threw it in Gemini.

Speaker C

And Gemini says Pope Leo doesn't exist.

Speaker C

I'm like, pretty sure he does.

Speaker A

Yeah.

Speaker A

People in Chicago would object to that.

Speaker C

Yeah.

Speaker C

Even ChatGPT said it's not Pope Leo, it's Pope Francis now.

Speaker C

That was the previous Pope.

Speaker C

They both knew that it was 2026 though.

Speaker C

So it's those types of things that you have to push back on it.

Speaker C

And we're going to really, really have to teach students how to do that as we start integrating AI in.

Speaker B

But that's a great point that you made.

Speaker B

And I'm thinking back on the last guest we interviewed.

Speaker B

We were talking about the age at which you should start engaging students with AI and beginning to establish this, and think about what that is.

Speaker B

And he kind of landed on about juniors in high school, where they have the cognitive abilities to be able to engage in these conversations.

Speaker B

What are you seeing as far as what that point is?

Speaker B

Because I know that middle schoolers are even using this.

Speaker B

At what point do young people have the ability to engage in this sort of a Socratic dialogue and questioning what they're getting from the machine?

Speaker C

Yeah, I think your last guest was pretty spot on with that.

Speaker C

I think really junior, senior year is when you really start seeing that.

Speaker C

But I do think that we have to start teaching them about it earlier than that, at least in an introductory type way.

Speaker C

But yeah, they're not going to be able to push back on some of this because you think back to when you were in high school or junior high, you don't know what you don't even know.

Speaker C

And so we have an advantage, right, because we have life experience and we can look at it and say, no, that's not right, you're wrong.

Speaker C

I was watching a YouTube video yesterday with Daniel Pink and he had put all these prompts in and was getting all of this very personal stuff back because he had asked these prompts and he kept pushing back and saying, no, that's not right.

Speaker C

Oh, that's not right.

Speaker C

But he knows himself enough to be able to say that.

Speaker C

Right.

Speaker C

Somebody younger doesn't.

Speaker C

Which goes back to something I've been saying too, that I really think we need to think about restructuring education, focusing on teaching skills like empathy, self-awareness, and emotional regulation even earlier, and focusing on being aware of what they're thinking and what they're feeling.

Speaker C

Because if you're not aware of those things, how can you push back on anything?

Speaker A

Yeah, I worry about students being reluctant to push back on AI.

Speaker A

They often won't challenge their instructors.

Speaker A

They won't challenge anybody they see as an authority figure.

Speaker A

And if they view AI as an authority figure, they may not push back enough.

Speaker A

And the hallucinations are a problem, without a doubt.

Speaker A

But I think there's a more subtle problem in that AI can lock into a certain mode of thinking and there's just one.

Speaker A

And it's going to think that way and it's going to try to guide your thinking that way.

Speaker A

And that's just not the way the world is.

Speaker A

And it's not the way to develop your critical thinking skills.

Speaker A

So I worry about that as well.

Speaker C

Yeah, I think that's a good worry to have.

Speaker C

You're right.

Speaker C

Because it does get into that kind of mode of thinking and you have to really push back hard on it.

Speaker C

And there is some of that.

Speaker C

That's programming.

Speaker C

Right.

Speaker C

They can program this not to be that way.

Speaker C

These companies are deciding how they're programming these things.

Speaker A

Anthropic released the Claude Constitution, which is, I don't know, 75 or 80 pages long.

Speaker A

And I've only started going through it, but one of its kind of prime directives is to be helpful.

Speaker A

And I think that leads to some problems.

Speaker A

You can override that through prompting.

Speaker A

But you've got to know that that's even a problem before you try to solve that problem.

Speaker A

But it can be such a fantastic co thinker.

Speaker A

I was working on an editorial for a special volume of a book series on AI and HR management, and it was helping me brainstorm research questions.

Speaker A

But we went back and forth, so it came up with some.

Speaker A

And I said, you know, these are really generic.

Speaker A

They're not really tied to what's in the volume, and they're kind of obvious, and let's dig in.

Speaker A

And we went back and forth and back and forth and back and forth and ended up with something that I probably would not have come up with on my own, but that AI certainly would not have come up with on its own.

Speaker A

And I think that's probably a little advanced for a lot of students.

Speaker A

Yes, but that ought to be our goal.

Speaker A

How do you produce something that's better?

Speaker A

So.

Speaker A

Yeah, but you know how we get there.

Speaker B

So, Craig, I think this makes me think of something that Bette wrote about thinking with AI, about outsourcing your memory to AI and how that initial relief turns into something that can actually be more like a cage.

Speaker B

So that metaphor really sticks with me, because I think it's something that happens with our students, that they'll get something from AI where AI does the thinking for them in that first thing, and it feels like a freedom to them, where they're like, aha, I've got it.

Speaker B

This is brilliant.

Speaker B

Because that initial response from the AI is, I want to answer this question, I want to please you.

Speaker B

And ultimately, it traps their thinking to where they don't move from where that initial anchor is.

Speaker B

Do you see this pattern with students, or am I imagining things?

Speaker C

No, there's definitely that pattern there because they want to get stuff done.

Speaker C

Let's just let me get this assignment done, or let me.

Speaker C

Whatever's going on, let me get this accomplished.

Speaker C

That goes back to the pushing back, right?

Speaker C

Not pushing back on it because not knowing what they don't know.

Speaker C

So, yes.

Speaker C

And also becoming dependent on it, which is easy to do. And when it's right there and you can do something in 10 minutes versus spending an hour wrestling with it, it is tough to push back on it.

Speaker A

First of all, I feel compelled to throw in Janis Joplin and Kris Kristofferson.

Speaker A

Freedom is just another word for nothing left to lose.

Speaker A

And that's got some real cognitive consequences when we think about AI and outsourcing everything to AI.

Speaker A

But I want to relate an experience that I had this morning so I was working on this editorial, and Claude froze, having technical difficulties, and I had this moment of panic.

Speaker A

I did.

Speaker A

I had this really weird moment of panic because I'm so close.

Speaker A

I'm on the next to the last subsection in this whole editorial.

Speaker A

And it's like, oh, my God, what do I do now?

Speaker A

And it's like, oh, well, you could just do what you've always done.

Speaker A

And the way I was working with Claude is I would write something, and then I actually have a keyboard shortcut for "how is this?"

Speaker A

I'd paste in what I had, and it would say, well, this transition is a little abrupt, and this could tie back to something in one of the chapters in the volume.

Speaker A

It's like, okay, I could have just not done that, and it would have been fine.

Speaker A

But it was this weird, visceral panic for about two seconds.

Speaker A

And that kind of worried me.

Speaker A

I've been at this a really long time.

Speaker A

And so I quickly pivoted to, oh, well, I'll just write the way I've always written.

Speaker A

But students aren't going to have that capability.

Speaker A

They're going to be frozen, I'm afraid.

Speaker C

Yes.

Speaker C

I think that ultimately we're going to have to completely reteach how we teach a lot of things, like.

Speaker C

Or restructure the whole foundation of it.

Speaker C

And I could envision a future where that instead of creating it from scratch by yourself, you're in class actually working on developmentally editing a lot of things with AI rather than coming up with it by yourself.

Speaker A

I always get very concerned when we're talking with somebody and there's a long pause and a sigh.

Speaker A

It's like, either that was a really great, great question, or Bette's trying to figure out how to not say, you know, you're just an idiot.

Speaker B

Or both.

Speaker A

Or both.

Speaker A

It could be both.

Speaker A

It could be both.

Speaker A

So, yeah, it's the kind of thing that we're going to have to struggle with because you don't want to cut it off.

Speaker A

Because, look, this editorial is going to be better because I had Claude working with me.

Speaker A

It just is.

Speaker A

It's going to be better.

Speaker A

So you don't want to not use that.

Speaker A

But at the same time, it's like, wow, you should not have even felt that little moment of panic, no matter how quickly you recovered from it.

Speaker A

So it's a weird time.

Speaker C

It is a weird time.

Speaker C

But is that much different than the panic that you feel when your wifi goes out or when you can't find your cell phone or you've got no cell phone coverage.

Speaker C

You know what I mean?

Speaker C

I think right now the panic is because this is kind of newer, but I know I feel a whole lot of panic when my wifi goes out and I've had it where it's been out a couple of days and I'm like, oh, this is almost as bad as having no electricity.

Speaker C

And that's kind of ridiculous to think, right?

Speaker A

But you're talking to somebody that has fiber optic, Starlink, and a second Starlink, plus the ability to hotspot to the phone.

Speaker A

So, yeah.

Speaker B

So all of this, Craig has made me.

Speaker B

I've got like seven questions in my head.

Speaker B

I'm going to try to boil it down to two of these ideas into one question.

Speaker B

And whenever I hear that we need to rethink and redo how we do education, we have such inertia in the system that the question is, how do we do that?

Speaker B

And then the second part of that question is, I've recently read about Gen Z and their test scores showing that they actually didn't perform as well as the generation before them.

Speaker B

With the blame on ed tech, on social media, on the amount of screen time that students are having.

Speaker B

In this world of AI, I see quicker answers, maybe giving more time for screen time.

Speaker B

And we're supposed to be changing how we do education, but technology may actually be getting in the way of true learning.

Speaker B

What does five years from now look like?

Speaker B

And how do we even get there, given the systematic inertia that's in place?

Speaker C

Well, that's the hard part, right?

Speaker C

And you've got all these different things going on in the U.S. You've got the federal government, then you've got the state agencies, and every state has regulations that schools and universities have to meet.

Speaker C

And if you don't live in a state where you have a state government that is willing to work with you, look how fast New York got an AI program through for bachelor's degrees.

Speaker C

They actually started them in 2025 in the fall.

Speaker C

That is really Mach speed for higher ed.

Speaker C

They don't get programs pushed through that quickly.

Speaker C

And so there has to be a lot of coordination.

Speaker C

And right now we're still at that weird phase where you've got a bunch of people not wanting AI, not wanting to work with it, not wanting to experiment with it, thinking it's just going to, I don't know, hopefully go away until we get to the place where the majority of people are moving forward with this.

Speaker C

It's going to be hard, I think, to move the educational system. And five years, you know, that's when some of them are planning on having graduation requirements for AI stuff in place, in 2029. Educational systems unfortunately move at a glacial pace.

Speaker A

Yes, they do.

Speaker A

We have a program that's slowly working its way through, and Louisiana is actually fairly quick.

Speaker A

But I really worry that it's gonna be another something across the curriculum.

Speaker A

So in business schools, it was ethics across the curriculum, and then it was globalization across the curriculum and technology across the curriculum.

Speaker A

And that can work.

Speaker A

There's nothing wrong with that as a basic idea. And I'm not speaking about any schools with which I am currently or have been affiliated in the past, but in my experience, going around to accreditation meetings and that kind of thing, it's often this veneer.

Speaker A

We've got this.

Speaker A

Yeah, there's a lecture in this class and a lecture in this class and a lecture in this class.

Speaker A

None of it's coordinated, none of it's cohesive.

Speaker A

None of it builds towards anything significant.

Speaker A

And I feel that happening with AI, and I think that's going to be a huge problem.

Speaker A

And I think business schools are better about this than a lot of others.

Speaker C

Yes, I would agree with you, because they have the outside business world knocking at their door, keeping them a little bit more on task.

Speaker C

That is a problem.

Speaker C

And I'm personally not sure putting AI in every classroom, every subject, is the answer here, at least not initially.

Speaker C

I think the way to scale this is to start having more individual classes and making those required and getting students broader knowledge of it, rather than saying, okay, you have to incorporate this into every subject matter across the board.

Speaker C

I don't see how that works. First of all, if you're looking at K through 12, they don't have time.

Speaker C

They're already, like, packed all the way with what they have to do.

Speaker C

Plus the state mandates, the testing mandates, everything else that's going on.

Speaker C

And then in higher education, you have the same thing.

Speaker C

And then think about when you have a couple of snow days, which we actually had here in Michigan because of the weather being so bad, then you've got instructors packing even more in those days that they miss.

Speaker C

Now they've got to incorporate all these other things.

Speaker C

I just, I don't know how y'all are gonna do it.

Speaker B

Well, and I think that's the challenge, too, is there are places where you truly need to learn knowledge.

Speaker B

But how do you do it in an AI resilient way?

Speaker B

And what does that look like?

Speaker B

K through 12?

Speaker B

How do you ensure that the students are learning what they need to and not just going home and punching it into the machine and getting the answers?

Speaker B

It's paradigm changing and how you have to approach education.

Speaker B

And nobody's told us the rules for how to do that.

Speaker B

So it's like we have all these different experiments going on where we're trying what seems right to us.

Speaker B

And yeah, some people, I think are going to do it better than others.

Speaker B

And then how does that get disseminated?

Speaker B

So we learn those best practices and we do so in a way to where we aren't harming a generation of youth who are developing in the midst of this rapidly changing environment.

Speaker B

It's a big scary problem.

Speaker B

I think that we need to be.

Speaker A

Talking about it more. And it's at a scale, and fundamentally more at the core of what we do, than anything else I've seen in a very long time in higher ed.

Speaker A

The Internet came along and that kind of changed modalities and it kind of changed a little bit about how students went about finding information.

Speaker A

But compared to this, that was nothing.

Speaker A

And I think it's a fundamental shift.

Speaker A

And we're not saying what hasn't already been said by a lot of other people, but I think one of our messages is you need to do something.

Speaker A

Even if you're an individual faculty member.

Speaker A

If nothing else, you better learn what this stuff is all about.

Speaker C

I don't understand how people can't be experimenting.

Speaker C

How are you not experimenting with this?

Speaker C

And I still have people that will say things like, somebody said something about AI, that it's not funny.

Speaker C

I'm like, have you used it?

Speaker C

It can be incredibly funny.

Speaker C

There's a lot of back and forth experience I've had with it where it's really hilarious and I'll start laughing at some of the stuff that it comes up with.

Speaker C

So I just, I think that people early on decided that either this is no good or I've just decided we shouldn't be using it.

Speaker C

And I feel like that's not the solution here because the kids are using it.

Speaker C

And if we don't understand how it works, how are we going to even be able to talk to them about it and explain anything to them?

Speaker C

And I think one of the biggest things that we have to be open to is being open with them and talking about some of what we don't know. We don't know everything that's going on with this either.

Speaker C

Teachers don't have all of the answers, parents don't have all of the answers.

Speaker C

We're all learning this as we go.

Speaker C

And it is scary.

Speaker C

It is scary because we can see how it can potentially really hamper learning and hinder it.

Speaker C

They can't, no.

Speaker C

And we also don't even know what it is that constitutes the basics anymore.

Speaker B

Well, what I think is an interesting challenge in all of this is that, in some ways, the further you go down in the hierarchy of who's in charge, whether it's K through 12 or higher education, at some level they're looking above to say, what's our policy, what are our rules, what are we supposed to be doing?

Speaker B

And when that's ill-defined, it moves that policy establishment to a lower level, say the faculty.

Speaker B

And so now there's not this one size fits all policy for how everybody's supposed to do things.

Speaker B

And what I hear from students is they get confused because the rules differ as they move from place to place to place.

Speaker B

Are you seeing that, where depending on who you talk to, the policies are different?

Speaker B

And how do you see that being coped with?

Speaker C

Yeah, absolutely.

Speaker C

That is a problem.

Speaker C

I mean, you can have a teacher literally across the hall banning it, saying, nope, can't use it in my classroom, and somebody else experimenting with it.

Speaker C

And students are confused, parents are confused, and even the policies can be really vague.

Speaker C

I actually created a GPT system.

Speaker C

Students could actually put the policies in it and get hopefully some more explicit language on what exactly is allowed and what isn't.

Speaker C

But I always stress you have to ask if you are confused.

Speaker C

This is not an ask-for-forgiveness-later kind of thing, because some people are very rigid about how they view using this.

Speaker C

And if they say it can't be used and you use it, it can be a problem.

Speaker C

But yes, it's very confusing and they want to use it.

Speaker C

Think about it: you're being told, well, you can't use it for this. They're seeing it as, well, why? This is helping me. This is a way that I can learn and use it, and you want me to go back and do things the old-fashioned way.

Speaker C

And it's not something that is helpful.

Speaker B

What I hear you saying is that you've empowered students to be able to learn how to interpret things, but at the same time it's a problem that is being caused, thus far, by the inability of the teachers in the room to come to an agreement.

Speaker B

If you could give a piece of advice, whether it's to a high school or a university or even a higher level of oversight, that would be helpful in the midst of training people up.

Speaker B

What would that one piece of advice be?

Speaker C

Well, the one piece of advice that I would have is whatever the policy is that you create, it needs to be very clear.

Speaker C

It can't be ambiguous.

Speaker C

There's a lot of ambiguity with how they define what's allowed and what isn't.

Speaker C

And you can't have this thing that is so hard to understand that they don't even know what they can and cannot do.

Speaker C

It has to be explicit, for one. For two, I don't think that you really should be having all these different policies in K-12 with each teacher deciding. There needs to be a more uniform policy. That gets a little trickier in higher ed because faculty have governance and they cannot be told how to teach their classes.

Speaker C

So that gets a lot stickier when you get into higher ed.

Speaker C

But again, what I would argue is that they need to have very clear policies within their class.

Speaker C

Now, if you can get all faculty to do that.

Speaker A

I'm still a big fan of what we've done here in the college.

Speaker A

We have a policy that basically says you have to have a policy.

Speaker A

And it's, I don't know, five or six areas that you have to address.

Speaker A

And we give a range.

Speaker A

You can be maximally restrictive, you can be maximally permissive.

Speaker A

That's entirely up to you.

Speaker A

But what you cannot do is leave students in the dark.

Speaker C

That's good.

Speaker A

Yeah.

Speaker A

And it's worked out pretty well.

Speaker A

We had no pushback on it that I've heard of.

Speaker A

And it accounts for faculty governance and faculty academic freedom.

Speaker A

But at the same time, it's at least fair to the students.

Speaker A

The other thing, and I'd like to hear your take on this because it may differ from high school to college, but I think you also have to give a little grace to students.

Speaker A

'Cause when somebody kind of is a little bit over that line, that's a teaching moment.

Speaker A

Here's why that's a bad thing for you to have done.

Speaker A

Not just because of class rules, but because you need to know this stuff.

Speaker A

And what you've done is just hurt your learning as opposed to the hammer of God coming down and crushing their grade.

Speaker A

I think because it's so gray right now, it's time to be a little slack.

Speaker A

Slack's not.

Speaker A

Sorry, can I correct that?

Speaker A

It's time to be a little understanding.

Speaker A

Slack is not the right word.

Speaker C

Right.

Speaker C

Understanding is good.

Speaker C

Yeah.

Speaker C

And sometimes they legitimately aren't trying to do anything wrong.

Speaker C

It just, you know, they don't know.

Speaker C

I used to have students when I taught classes in college, they plagiarized.

Speaker C

They didn't do it on purpose.

Speaker C

They literally didn't know that you were supposed to cite certain things.

Speaker C

So I think that, yes, we have to have a little bit of grace with this.

Speaker C

And teaching moments are good.

Speaker C

We don't have to scare the bejeezus out of them.

Speaker A

Yeah.

Speaker A

Yeah.

Speaker A

And I don't want to be an AI cop either.

Speaker A

That's part of it.

Speaker C

Yeah.

Speaker C

And you can't.

Speaker C

Right.

Speaker C

How can you?

Speaker C

I mean, yes, when you read things, you can probably kind of speculate.

Speaker C

That's probably AI, but.

Speaker C

But you can't know for sure because none of the AI detectors are accurate.

Speaker C

None of them.

Speaker C

If you put your thing in four different ones, you're going to get maybe one that it trips, the other three it doesn't, or a couple that it does, the others that it doesn't.

Speaker C

So you can't trust any of that.

Speaker C

So how do you go to a student and say you're using AI?

Speaker A

Yeah.

Speaker A

And you're only going to catch the lazy ones that don't know what they're doing?

Speaker A

If they're willing to put a little bit of effort into it, you're never going to know.

Speaker C

So have you heard of the instructors putting injections in there?

Speaker A

Yeah, yeah, yeah.

Speaker C

I'm just like, so what's the goal of that?

Speaker C

If you're explicit about it, I guess then I'm okay with that.

Speaker C

If you specifically say there's things in here, blah, blah, blah.

Speaker C

But then they can get around that by doing something else.

Speaker A

I tell my students, look, if you cheat on this, I might catch you.

Speaker A

I might not.

Speaker A

But you're just not going to know the stuff you need to know.

Speaker C

Yeah.

Speaker A

And I'll get you at some point.

Speaker A

Might not be on this assignment, but, you know, this is designed to help you learn the stuff that you're going to need to be able to pass the test.

Speaker A

And so I'll find out, and I'm okay with that.

Speaker B

Well, I think what you're getting at, Craig, is at some level, the stakes of where the points are earned become different.

Speaker B

Right.

Speaker B

If it's truly an activity that I need you to learn these things so you can demonstrate it later on, it may be that it's in that later on that the stakes exist, whether it's the test where they're in a controlled environment where they can demonstrate their learning, or whether it's in a presentation where they get up in front of an audience and demonstrate their ability to articulate what it is that they've learned.

Speaker B

As I've seen education, as I've gone up through it and then into higher education, there's a lot of small stakes that go on where you can earn a good portion of your points by doing the little things that you might be able to AI your way through and never truly demonstrate learning.

Speaker B

So I think that's part of the conversation too is they don't want to discourage people by you get one final exam and it's a make or break thing of whether or not you pass this class.

Speaker B

But at the same time, where are those key moments of demonstrating whether it's knowledge or whether it's the ability to engage in a process or whatever that is?

Speaker B

I think it begins looking a little different than take-home worksheets that you complete as an opportunity to demonstrate skills.

Speaker A

At the risk of being the old man who shakes his fist and tells kids to get off his lawn.

Speaker A

Having a midterm and a final worked for a really long time.

Speaker A

I know that's not the fashion these days, but I don't know, maybe we should go back to that.

Speaker A

Sorry, Bette, I interrupted you.

Speaker C

Yeah, I was gonna say there are still a lot of classes that kind of teach that way.

Speaker C

When I left the last university I was at, there were plenty of them that did midterm and final, especially in the health and human services field, the biology classes, things like that also too.

Speaker C

A lot of the big classes that have 200, 300 students in, that's how they're doing it.

Speaker C

They're testing with multiple choice exams, maybe some fill in the blank kind of things, but they're not doing take home essays and things like that.

Speaker C

So I don't know the class sizes with the classes that you have.

Speaker C

But a lot of these problems aren't really going to affect some of these instructors.

Speaker C

Probably a huge chunk of instructors, really, if you think about it, especially at these Big Ten universities, large universities, they don't have to change anything.

Speaker C

In these intro classes that have a couple hundred students, they continue and do what they're doing.

Speaker C

They don't care if students use AI to help them study for their tests or not.

Speaker C

They're in class, as long as they don't have the Meta glasses on.

Speaker C

I suppose you do have to maybe monitor a little bit more in terms of making sure they're not using other things.

Speaker C

But I think that for a chunk of people this isn't going to impact them at all.

Speaker C

It's certain classes that it's going to get really sticky.

Speaker C

The smaller ones, the writing classes, graduate classes, you would hope that grad students aren't doing these things, but I'd be willing to bet that they probably are.

Speaker C

Yeah.

Speaker A

Yeah.

Speaker A

Yep.

Speaker A

These are big questions.

Speaker A

Big questions.

Speaker A

So as we get ready to wrap up, I want to ask the really big question.

Speaker A

You're in a unique position because you see students before we get them.

Speaker A

Especially with your background in student services, you also had that experience from a different perspective.

Speaker A

So what do you think would surprise most college faculty about how high school students are either currently using or are currently thinking about AI?

Speaker C

I think they'd probably be surprised at how much they're using it and for the types of things that they use it for.

Speaker C

They aren't just using it for educational purposes.

Speaker C

There are a lot of them that use it for just regular, everyday tasks, like how do I cook this pizza kind of thing, stuff that you would normally Google or not do at all and just kind of trial and error it.

Speaker C

So they're using it for a lot of everyday things.

Speaker C

They're using it a lot for companionship, personal problems.

Speaker C

I think that surprises a lot of people, parents especially.

Speaker C

They don't think their kids are using it this way.

Speaker C

The data says otherwise.

Speaker C

They're using it in ways that aren't just about learning, but they also use it in ways that enhance their learning.

Speaker C

They want to do better on their tests, they want to do better, and they see this as a tool to do it.

Speaker C

Building apps, building things that we don't think of doing because it's not something we're interested in, but they are.

Speaker B

And I would argue that what you describe right there is that world of continuous learning, of learning in the midst of life.

Speaker B

We realize students are doing those sorts of things, that they're using these tools, whether it's getting counseling for a problem they're struggling with at 11 o'clock at night or whatever, when, you know, people aren't answering phones.

Speaker B

What are some ways you think that education can lean into the students who are, I would call them, native AI users, or at least younger AI users who haven't learned it while they're in college but are coming into college with it already?

Speaker B

Are there things that we can leverage in our thinking that can help meet them where they're at in some ways that would create some nice synergies and leading to their success?

Speaker C

Yeah, well, I think the one thing that we need to do is use it, because how can you understand what that experience might even be like if you're not using it?

Speaker C

I use it as Google a lot.

Speaker C

I don't even Google stuff anymore.

Speaker C

So I understand that desire to do that because that's one of the ways that I use it.

Speaker C

I also use it for very mundane things, to ask it questions. I also ask it questions to help me deal with personal issues, so I understand those needs.

Speaker C

But if you're not using it in that way and some student comes to you and says, I use it to talk about my personal problems or help me deal with roommate issues or whatever, then you're not going to even be able to understand what they might even be doing.

Speaker C

But I do think colleges and universities could use it for those types of things.

Speaker C

What's one of the biggest things that a lot of students struggle with living on campus?

Speaker C

Roommate problems.

Speaker C

Roommate conflicts.

Speaker C

You can use that in housing situations to help them kind of work through maybe some of these conflicts that they might have.

Speaker C

You can use it in career development to help them interview.

Speaker C

Put an interview into ChatGPT, have it ask the questions, then have it critique your answers.

Speaker C

Have rehearsals with it.

Speaker C

You can use it in advising for students who may think they know exactly what they want to do, but maybe don't understand what it takes to actually get there.

Speaker C

And so they can use it for that.

Speaker C

There's so many ways that you can actually use this educationally and outside of that, but I think they need to be taught it.

Speaker C

I remember several years ago when I was working in a grant program, we worked with some of the students, and one of the big surprises that they found was students didn't use the Internet the way that we thought that they should use it.

Speaker C

So, like, instead of looking up how to cite something, they would just try and find a place where they could just dump the stuff in and it would cite it for them rather than figuring out how to do it themselves.

Speaker C

And we're like, well, why don't they just Google how to actually do this?

Speaker C

And students would email me all the time with questions, and it's like, did you go to the website? It's really right on there.

Speaker C

And I would usually send them back the link specifically where it was.

Speaker C

Instead of taking 10 minutes to search on the Internet to find it, they would email me and then have to wait like a day to get the answer. So they don't use it in the way that we think that they should.

Speaker C

And guess what?

Speaker C

They're going to do the same thing with AI.

Speaker A

Yeah, absolutely.

Speaker A

Absolutely.

Speaker A

Well, this has been a great conversation.

Speaker A

I want to give you a chance to tell our listeners anything you want them to know about what you offer.

Speaker A

I know you've got a number of books on leadership, and you've got your program.

Speaker A

So tell them all about it.

Speaker C

Yeah, so I started off writing about leadership.

Speaker C

That's what my background is in, educational leadership. And then, as I was reading more about AI, I started using it myself.

Speaker C

I come to this not from a technology background, but from an educational background, one where I worked with students, one where I use this technology myself.

Speaker C

So I have a very unique perspective on it in the sense that I am not a techie, but I've learned how to use these things myself and how they fit into my life.

Speaker C

And I think about ways that students can learn how to use these tools without feeling overwhelmed, and about helping parents work through these things as well.

Speaker C

Because we are deep in the weeds of this, right?

Speaker C

They aren't.

Speaker C

They're going to work.

Speaker C

They're doing what they need to do to take care of their families.

Speaker C

They're taxiing their kids around to extracurricular activities.

Speaker C

They're not thinking through this stuff every day, all day, like we are.

Speaker C

And so that's what I help people with, is to help them use this in a way that can enhance what they're doing without students getting in trouble doing it.

Speaker A

And I really like the way that your articles start from your experiences.

Speaker A

Hey, I was doing this thing with ChatGPT, and then you generalize it up and make it something that's useful for just about anybody.

Speaker A

So that's great with your articles. Rob, anything else?

Speaker B

No, I just want to thank Bette for taking the time to hang out with us.

Speaker B

Part of me, when I wake up in the morning and I know we have an interview like this, I'm like, man, it's going to be too short, because we could talk about so much and it could be such a, you know, a great day filled with these sorts of conversations.

Speaker B

So I do greatly appreciate you taking the time.

Speaker B

It's 8:45 a.m. my time, so this, for me, is a great way to start my day.

Speaker B

And I'm pretty sure it's not going to get better than this conversation.

Speaker B

So I appreciate you taking the time and it's been very meaningful.

Speaker C

Oh, well, I appreciate that.

Speaker C

Yeah, I had a lot of fun.

Speaker C

It's nice to be able to talk about this stuff on an intellectual level as well as a practical one.

Speaker A

Thanks again for being on.

Speaker A

And remember, listeners, for all things Bette, go to betteludwig.com, that's B-E-T-T-E-L-U-D-W-I-G dot com, and there'll be a link in the show notes.

Speaker A

All right, that's it for this time.

Speaker A

See you all next time.

Speaker A

Thank you.