March 3, 2026

We're On Our Own: Academic Integrity through AI Resilience


Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States.

They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words.

Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much.

The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials.

That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it.

Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction.

Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades.

The episode wraps with an update on NotebookLM. Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.

Links referenced in this episode:

  1. NotebookLM
  2. Anthropic
  3. Claude
  4. Google
  5. Canvas
  6. Einstein

Mentioned in this episode:

AI Goes to College Newsletter

Transcript
Speaker A

Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out what in the world is going on with generative AI.

Speaker A

I'm joined once again by my friend, colleague and co-host, Dr. Robert E. Crossler from Washington State University.

Speaker A

Rob, how are things out west?

Speaker B

Things are good, Craig.

Speaker B

We got the snow now that you had a few weeks ago.

Speaker B

It finally found us.

Speaker A

We're in that springtime up and down, but nobody cares about that.

Speaker A

So let's get started.

Speaker A

Before we get to our first topic, I want to encourage everybody to stay with us till the end, because I want to talk about some interesting updates to NotebookLM, Google's tool that's kind of a retrieval-augmented generation system.

Speaker A

Rob, do you use NotebookLM much?

Speaker B

I do, I play with it.

Speaker B

It's a good way to organize thoughts and ideas and things.

Speaker A

Yeah, it's impressive and they keep pushing its capabilities. They made some updates to it recently, not earth-shattering, but useful.

Speaker A

So we'll talk about that at the end.

Speaker B

What a teaser, Craig.

Speaker B

Way to go.

Speaker A

I know.

Speaker A

Let's start with a different major AI provider, Anthropic.

Speaker A

A few weeks ago Anthropic released Claude's Constitution, which is an 84 page behemoth that lays out how Anthropic governs the way in which Claude, quote unquote, thinks, reasons, refuses and engages.

Speaker A

So it gives us a little bit of a look under the hood without being technical.

Speaker A

It's not a technical document in any meaningful way, but there's a lot to unpack.

Speaker A

So we may come back and talk about this some more.

Speaker A

But I wanted to touch on a few things that I thought were especially relevant for higher ed.

Speaker A

I want to start with the high level set of guiding principles and I'm going to read here, which does not make for good podcasting, but I'm going to do that to make sure I get it right.

Speaker A

So this is directly verbatim from Claude's Constitution.

Speaker A

In order to be both safe and beneficial, we want all current Claude models to be: one, broadly safe, not undermining appropriate human mechanisms to oversee AI during the current phase of development.

Speaker A

I want to come back to that.

Speaker A

By the way, that phrasing is interesting.

Speaker A

Number two, broadly ethical, being honest, acting according to good values and avoiding actions that are inappropriate, dangerous or harmful.

Speaker A

Number three, compliant with Anthropic's guidelines, acting in accordance with more specific guidelines from Anthropic where relevant; and number four, genuinely helpful, benefiting the operators and users they interact with.

Speaker A

I want to pause here for a second.

Speaker A

So there's a little bit of a hierarchy here in terms of the entities involved, the stakeholders.

Speaker A

At the top is Anthropic. Operators are companies or universities that use Claude, primarily through the API.

Speaker A

For example, if Washington State created an advising system on top of Claude, they would be an operator.

Speaker A

The users are at the bottom of that hierarchy, so that becomes important.

Speaker A

All right, I'm going to continue on with reading.

Speaker A

In cases of apparent conflict, Claude should generally prioritize these priorities in the order in which they're listed.

Speaker A

Prioritizing being broadly safe first, broadly ethical second, following Anthropic's guidelines third, and otherwise being genuinely helpful to operators and users.

Speaker A

And that comes in last.

Speaker A

They go on to say here the notion of prioritization is holistic rather than strict.

Speaker A

That is, assuming Claude is not violating any hard constraints.

Speaker A

Higher priority considerations should generally dominate lower priority ones.

Speaker A

But we do want Claude to weigh these different priorities in forming an overall judgment, rather than only viewing lower priorities as tiebreakers relative to higher ones.

Speaker A

All right, that's one tiny piece of the Constitution, but I think there's a lot to unpack there.

Speaker A

So first of all, genuinely helpful, although it's important it comes in last.

Speaker A

So it is not going to be helpful to the user if it violates what Anthropic thinks is broadly safe.

Speaker A

So you can't put in a prompt that says, hey, how can I social engineer this type of user to get their credit card information?

Speaker A

I don't think it'll do that.

Speaker A

It shouldn't.

Speaker A

So I guess that's a good thing.

Speaker A

What do you think about that?

Speaker B

Yeah, I mean, it's interesting because, A, what is safe and what is ethical?

Speaker B

Who defines that?

Speaker B

Right.

Speaker B

We could take an entire semester long class talking about what those things mean.

Speaker B

So, you know, on the surface I think it seems reasonable, but it gets into execution, and what does that actually look like when the rubber meets the road?

Speaker A

See, this is why we're co hosts, because that's exactly where I was going to go with this.

Speaker A

So it's anthropic deciding what's broadly safe.

Speaker A

And I think that comes into play even more strongly in the second prescription to be broadly ethical.

Speaker A

So being honest, I mean, we can kind of all agree on what that is, but acting according to good values, I mean, who, you know, who's deciding that?

Speaker B

And here's where I think this is interesting.

Speaker B

I've had the opportunity to teach internationally, and depending on what country you're in, if it's a more socialistic country, working together is deemed ethical and appropriate.

Speaker B

Whereas in the United States, a much more individualistic country, you know, everybody doing their own work is deemed the ethical way to do things.

Speaker B

In some cultures, the collective good of everybody performing well is valued more than individual performance, which is the metric someone in a place like the United States might use to judge ethics.

Speaker B

Where's Anthropic on that range of what defines ethically good?

Speaker B

Is it the good for the whole or the good for the individual?

Speaker A

Yeah.

Speaker A

Another example, until fairly recently in China, seeking privacy was seen as a pretty negative social signal.

Speaker A

If you wanted privacy, you were hiding something, you were trying to do something that was good for you and not good for the collective, which is pretty contrary to the values in China, although values are very amorphous, emerging things.

Speaker A

So if we trust Anthropic, okay, then I guess we're all right.

Speaker A

But I've got some concerns there.

Speaker A

Although at the end of the day, it's their product, you know, they get to decide these things.

Speaker A

So I don't have any big problem with that.

Speaker A

But we just need to be aware that Anthropic could decide.

Speaker A

I'm totally making stuff up here that helping students cheat gets them through college and gets them better jobs.

Speaker A

And so why not do their homework for them?

Speaker A

Mm.

Speaker A

I'm not saying that they're doing that, but it's not out of the realm of possibility.

Speaker B

Right.

Speaker B

And I see an opportunity here for Anthropic, though, if they execute well on this and they establish this well as a differentiating factor, that will make it easier for institutions to adopt.

Speaker B

So I'll talk about higher education.

Speaker B

If their position on ethics and safety and so on is aligned with that of the vast majority of universities, then when universities are considering what tools to bring in house, this may actually be a differentiating factor in their favor.

Speaker B

And the fact that they're open about this constitution, putting it out there for review and for people to poke holes in, I think is only going to help make it stronger.

Speaker A

Yeah, that's a huge point.

Speaker A

I really like Anthropic's openness here.

Speaker A

It's a long document, there's a lot to go through, but it's pretty wide open.

Speaker A

In a lot of cases, if you want to dig into it, you can get into the weeds.

Speaker A

And I strongly encourage listeners to take the time to at least scan through the constitution.

Speaker A

It's available.

Speaker A

I think it's available as a PDF.

Speaker A

We'll have a link to the website in the show notes.

Speaker A

But they go into what their values are in some detail.

Speaker A

So I like this.

Speaker A

I mean, I have no idea what's governing ChatGPT.

Speaker B

Well, and here's, here's what I'd like to see, Craig, and we'll see how this plays out.

Speaker B

Is Anthropic going to be able to be held accountable for consistency with their constitution?

Speaker B

Does it have that ability to kind of hold their feet to the fire?

Speaker B

That said, this is what you said you were going to do.

Speaker B

You didn't; please adapt or adjust.

Speaker A

It's funny you should mention that because that's another thing that I wanted to bring up that's very interesting.

Speaker A

I said that the phrasing was interesting under broadly safe and I'm going to read that again.

Speaker A

Broadly safe, not undermining appropriate human mechanisms to oversee AI during the current phase of development.

Speaker A

When I read that, it's like, oh, they have given themselves a pretty significant amount of wiggle room here.

Speaker A

And not inappropriately, you know, if you look back at the, what, three years and a couple of months that we've been dealing with all of this since ChatGPT's release, what it can do now is way different than what it could do a few years ago.

Speaker A

In fact, here's a little aside.

Speaker A

Have you ever heard of the Colossal Cave Adventure game?

Speaker B

I have not.

Speaker A

It was a mainframe text based adventure game where you were going through this labyrinth of caves.

Speaker A

It was all text based literally on a green screen, kind of old school terminal.

Speaker A

When I was getting my master's at Appalachian State, I was at the computer center one night.

Speaker A

I literally stayed there all night playing this stupid game.

Speaker A

It was one of the most engaging, infuriating, fun time wasting things I've ever done in my life.

Speaker A

But it was an interesting experience.

Speaker A

And so Rob, you and the listeners might be wondering, what tangent is Craig off on?

Speaker A

Well, the tangent is that this morning while I was supposed to be reviewing a paper and actually was reviewing a paper, on my right hand screen I had Claude Code creating a clone of the Colossal Cave where the whole framing was around understanding large language models.

Speaker A

I wanted to try a really simple prompt.

Speaker A

This was a really simple prompt, three sentences, four sentences long, with me every once in a while going over and clicking on approve.

Speaker A

In an hour I had a Python game.

Speaker A

Now I haven't played it yet, but Claude Code even goes through and runs the game in a sandbox to make sure it executes, because I could see it: oh crap, I've got an error here, let me figure that out.

Speaker A

Again, I haven't played it.

Speaker B

Well, we'll know if it works, if you're complaining about being really tired tomorrow because you didn't get any sleep.

Speaker A

Yeah, I purposely did not try to test it this morning.

Speaker A

It's going to be a weekend fun thing, but you couldn't do that a few years ago.

Speaker A

So these capabilities are always going to be, well, at least in the short term, they're going to be increasing rapidly, which is why that current phase of development phrasing is so interesting.

Speaker B

Yeah.

Speaker B

And I think that example you gave is important for a couple of reasons.

Speaker B

One is that this constitution is subject to change; as things develop, there may be reasons why they change it, perhaps purposefully.

Speaker B

But also, your example of going into Claude with a very simple prompt and creating something is a great best practice. I would recommend that anyone step in and do that, because as people who are working with students who are playing and exploring, if we're not doing the same thing, we're not going to be on the same page as them.

Speaker B

And the one thing I would look at in this constitution that I have questions about is the idea that it's going to be safe and ethical. I've read people's anecdotes where, using Claude-based tools, it's gone out and wiped hard drives and done things that would arguably not be helpful at all.

Speaker B

And so is it capable of staying within those guardrails of what the constitution says is the way things are?

Speaker A

Well, that's really interesting.

Speaker A

And so I want to come back to that.

Speaker A

But first I want to emphasize the first little bit of what you just said.

Speaker A

If you're not using these tools, you're making a huge mistake.

Speaker A

So even if you don't like AI, even if you're anti AI, you have to be using these tools so that you can understand what your students are doing.

Speaker A

Did I paraphrase that correctly?

Speaker A

That could be one of the most important things either of us has ever said on this podcast: you've got to play with this stuff.

Speaker A

Now, you don't need to be necessarily out on the edge like we are, because I'd be willing to bet it's a pretty tiny portion of students that are using something like Claude Code or Codex or Antigravity, but you need to be using these tools.

Speaker A

The helpfulness bit, we have to take it to a different level.

Speaker A

And it's not whether or not it actually is helpful, it's whether or not it thinks it's being helpful.

Speaker A

So when it wipes a directory, it thinks it's doing something on behalf of the user.

Speaker A

So it's not, oh, I'm going to do this thing that I know is unhelpful.

Speaker A

It's going to be like somebody that throws out a bunch of paperwork that you needed and they thought you didn't need it anymore.

Speaker A

I mean, that's kind of what it's doing.

Speaker A

And so, to your larger point, I think we can't rely on Claude's version of what helpful is to map entirely, 100%, onto what our version of helpful is.

Speaker A

So we have to put our own guardrails in place.

Speaker B

Yep.

Speaker B

And what I think is great about this, and I had this epiphany yesterday when I was reading about horror stories from AI and things that are happening: one of the things we do when we're training up students, whether it's information systems students or computer science students, is teach them how to think like the computer does, how to understand it, how to talk to it, to control it, if you will.

Speaker B

And now that we've created these large language models, these Claudes, where you can go off and, with four sentences, create something that executes and works, people who don't have that training on what could go wrong are creating opportunities for just that to happen.

Speaker B

And so I think that begins to inform how we're teaching the intro to MIS class to all business students.

Speaker B

How do we teach marketing students, accounting students, and students who may not take more than an introductory MIS class the right ways to interact with these systems, such that deleting a directory is not viewed as desired behavior?

Speaker A

Yeah, yeah.

Speaker A

And for those of you who are not in a business school, MIS is Management Information Systems.

Speaker A

And it's kind of computers in the business school.

Speaker B

Yeah.

Speaker A

For some reason, I was transported way back, which is something old guys do fairly frequently.

Speaker A

My first job in computers was in the 1980s, mid-80s when I was working at a retail store, MicroAge.

Speaker A

It was not unusual for somebody to put in a floppy disk, remember the old floppy disk, try to format it and not be paying attention, and they wipe out their hard drive.

Speaker A

They format their hard drive.

Speaker A

So this kind of thing, maybe it's at a bigger scale now, and maybe some of the tools like OpenClaw or Molt, whatever they're calling it today, they changed the name again.

Speaker A

I think tools like that, like Claude Cowork and some of these agentic tools, open up that kind of exposure again.

Speaker A

But there'll be a way to solve it.

Speaker A

And if you're paying attention, it shouldn't happen.

Speaker A

Do you follow good backup practices and that sort of thing?

Speaker A

But people don't.

Speaker A

We probably don't either.

Speaker A

There's another really interesting thing in the Constitution.

Speaker A

And then I think I've got two more points and then we can move on.

Speaker A

So I'm going to read again.

Speaker A

One of the things that Claude prioritizes is the user's long term well being, which we should probably look at in depth in another episode, because I know you in particular have concerns about people using Claude as a counselor and the effects that AI, not just Claude, but AI, may be having on mental health.

Speaker A

But the Constitution says concern for user well being means that Claude should avoid being sycophantic or trying to foster excessive engagement or reliance on itself if this isn't in the person's genuine interest.

Speaker A

So I like that.

Speaker A

Avoid being sycophantic.

Speaker A

And that basically means quit being a suck up.

Speaker A

Trying to foster excessive engagement, I think is a shout out to the antisocial media folks.

Speaker A

But then this idea, if it isn't in the person's genuine self interest.

Speaker A

So again, who gets to decide that?

Speaker A

And then it goes on.

Speaker A

I'm going to paraphrase.

Speaker A

It goes on to describe acceptable forms of reliance.

Speaker A

So you don't want to be reliant on AI, but here are some ways that might be okay.

Speaker A

Someone asks for a given piece of code rather than being taught how to code it themselves, for example.

Speaker A

And then the situation is different if a person has expressed a desire to improve their own abilities.

Speaker A

Or in other cases where Claude can reasonably infer that engagement or dependence isn't in the user's interest.

Speaker A

So that's a really muddled kind of view.

Speaker A

So it's kind of, yeah, don't rely on AI unless you want to rely on AI, unless you tell us you don't want to rely on AI.

Speaker A

And it's like, okay, I'm frankly not sure what to make of that.

Speaker A

And then it throws in this little bit at the end of the paragraph.

Speaker A

For example, if a person relies on Claude for emotional support, Claude can provide this support while showing that it cares about the person having other beneficial sources of support in their life.

Speaker A

Good night.

Speaker B

The anthropomorphism, does it show that it cares? They anthropomorphized that pretty strongly.

Speaker A

Yeah.

Speaker A

I got an "aha" out of Claude Code, by the way, which I was pretty proud of.

Speaker A

Yeah.

Speaker A

And then it can provide this emotional support, but it also wants you to have other sources of emotional support.

Speaker A

So I'm just going to kind of leave that one there because I don't think we've got time in this episode to get into that, but I think we should circle back to that.

Speaker B

Yeah, no, I think it's worth talking about that one.

Speaker B

I think there are a lot of trails we can take from that perspective.

Speaker A

Yeah.

Speaker A

And so here's the big payoff.

Speaker A

When I went through the Constitution, I set up a NotebookLM notebook and kind of went through it, and I'm slowly making my way through the document in detail.

Speaker A

The thing that kept coming back to me is that we are on our own when it comes to academic integrity.

Speaker A

And Anthropic is not going to help us.

Speaker A

And if arguably the most ethical and transparent of the big AI companies is not going to help us, OpenAI is not going to help.

Speaker A

Meta is not going to help.

Speaker A

Google's not going to help.

Speaker A

We are on our own.

Speaker A

It's not going to block students from using this in ways that violate academic integrity.

Speaker A

It's just not going to happen.

Speaker B

Craig, to that point.

Speaker B

I saw a post on LinkedIn yesterday where someone was sharing a new tool out there called Einstein.

Speaker B

And what this tool does is it will actually watch all of your videos for you in Canvas, make your posts for you, reply to your posts, and do your homework for you.

Speaker B

And all you have to do is give it your credentials to log into Canvas.

Speaker A

What could go wrong?

Speaker B

What could go wrong there?

Speaker B

But tools are being created that reinforce exactly the point you just made: we have to find ways to ensure that academic integrity exists.

Speaker B

And the ways we've always done things are going to be made irrelevant and it's going to become a game of cat and mouse.

Speaker B

Much like in the cybersecurity world, as soon as you fix one problem, somebody else is finding a new way to get access to your data, your information, or whatever those things may be.

Speaker A

Yeah, yeah, absolutely.

Speaker A

So we have to rethink higher ed.

Speaker A

I know that's a big thing.

Speaker A

I know it's really hard to do.

Speaker A

I know there are lots of forces that will get in the way, but if we don't, we're at best going to be largely irrelevant in a lot of what we do.

Speaker A

So I think we have to rethink that.

Speaker A

And one of the things we can do more in the short term is to use a term that you coined, make learning activities AI resilient.

Speaker A

Yep.

Speaker A

So let's put the Constitution aside for now.

Speaker A

Wait,

Speaker B

Are you bringing politics into this?

Speaker A

This is not a political Podcast.

Speaker A

Let's put Claude's constitution aside for now.

Speaker A

There go some listeners, but we could gain some others. Let's talk about AI resilience.

Speaker A

Save me, Rob.

Speaker B

Yeah, so AI resilience, Craig and I talk about this a lot, is the idea that when we are putting together our learning outcomes, our learning goals, and mapping how we're going to get to those within whatever class we're teaching, we should do it with an idea that those learning outcomes can be achieved resiliently in the face of AI and the changing nature of AI.

Speaker B

So regardless of what AI is capable of, we need to be looking at what it is we are doing in that learning journey to help students truly learn what it is.

Speaker B

And if we don't do this, Craig, we lose our value proposition.

Speaker B

I like to say it's preventing people's ability to cheat their way through our classes by using AI.

Speaker A

Okay, that all makes sense.

Speaker A

All sounds good.

Speaker A

But I don't know exactly what AI resilient means in an operational way, so help us out there.

Speaker B

So I think it depends on what you're doing.

Speaker B

I would say that if your assignments are as simple as, here is a prompt for an essay, go and write it, and you have a large amount of the points students are going to earn in the class assigned to the writing of that essay, that would be an example of something that didn't have a lot of resiliency, because you'd have no way of knowing whether they learned what they were supposed to be learning through writing that document.

Speaker B

But with AI and some of the tools that AI has, what if we're able to begin looking into the process of creating that document?

Speaker B

And so one thing I've seen, when people say, well, the creation of that document still has value: if we ask for the prompts that were used with AI, for how those prompts were refined, and for a reflection that demonstrates the critical thinking behind them, we can begin to grade that process.

Speaker B

So it's not so much about what you create, it becomes about how you create.

Speaker B

I've got a colleague who teaches a programming class, and it used to be: could you write the program, much like Craig was able to do in an hour in that previous example?

Speaker B

That used to be the hard thing in this class, and now if you have the right tools, it becomes easy.

Speaker B

Well, the evaluation has moved to a video the students record of themselves describing what their code does.

Speaker B

So it becomes more about being able to understand it, explain it and so forth.

Speaker B

So my ability to grade what you've done takes a different lens, one that might be enhanced by AI.

Speaker B

It might help you to learn those things.

Speaker B

But at the end of the day, you know, do I feel confident that I can look at, you know, whatever it is that I'm evaluating and say, yes, the student has demonstrated an ability to achieve this outcome.

Speaker A

It sounds like that's a little bit of an opportunity to level up the student skills too.

Speaker A

But.

Speaker A

All right, so I have a huge issue with that.

Speaker A

An operational issue, not a philosophical issue.

Speaker A

How does that scale?

Speaker A

You've heard me whine about this before.

Speaker A

When I teach undergraduates, I teach 90 or 100 undergraduate students, so I don't know how to scale that.

Speaker B

Well, in some ways I think AI helps us scale it because AI can help us with grading it.

Speaker B

I would say that as academics, with these tools out there, we're not hands off either.

Speaker B

Right?

Speaker B

So I think using the tools can help us be pointed to where our time can be used most efficiently in grading those sorts of things.

Speaker B

You know, even with a document that becomes 25, 35, 50 pages long, because they created one version of the document and gave you their critical reflection on their process, you're not reading every iteration of the document.

Speaker B

You're actually spending more of your time looking at their reflection statements and, you know, looking at those things that give you an idea of the process.

Speaker B

So you might begin to look at different aspects of things to say, okay, here's where it is purposeful, meaningful to give that feedback, where I want to give that feedback.

Speaker B

So we have to change our paradigm a little bit and how we grade things and what we're speaking into as well.

Speaker A

Yeah, I think that's a good point.

Speaker A

AI could be part of the cure because frankly, I don't know.

Speaker A

Does it matter if a well tuned AI system gives the feedback or you or I give the feedback?

Speaker B

Well, here's where I think it matters is I have to be willing to stand behind the feedback that is given.

Speaker B

And for those of you who are privileged enough to work with doctoral students as teaching assistants: when a doctoral student gives feedback, I'm training them up in how to be teachers, eventually professors. If they mark something wrong or grade it poorly, I have to own that too and say, okay, I did a bad job. So there are control mechanisms that I put into place when I'm having some other person grade for me.

Speaker B

Those same sorts of control mechanisms should be in place to ensure that the grades that are showing up in students' gradebooks adequately reflect what I would do.

Speaker B

The other thing I would ask people to think about is: are we moving to a world where, okay, I need to spend more time grading the things that you turn in, but I'm going to ask you to turn in fewer things that are going to get a grade?

Speaker B

At which point it's just shifting that time to say okay, at midterms and at finals.

Speaker B

Those are two blocks of time where I'm heads down doing extra grading.

Speaker B

But throughout the process it's peer grading, it's other sorts of things.

Speaker B

It's getting feedback to really push those students.

Speaker B

It moves us away from what I think we've called the grade economy, where it's do this, do that, get that, get that, toward something else.

Speaker B

We're going to go through a semester and this is kind of how a doctoral seminar often works.

Speaker B

We go through all semester long.

Speaker B

And if you've been doing all the readings and engaging in all the conversations, that end paper or that end exam that I give you is your grade in the class.

Speaker B

And some students rise to the challenge and others don't.

Speaker B

Much, much different world for undergraduate students.

Speaker B

And if I went to a single final exam, I would freak out 75% of the students in my class.

Speaker B

But that may be a viable solution depending on how you manage the relationships with the students.

Speaker A

I can see a little bit of a variation on that.

Speaker A

If you have these scaffolding assignments that aren't graded, that give, again, well-tuned AI feedback to the students, you take the grade out and it changes things.

Speaker A

If it becomes more like.

Speaker A

It's almost like playing scales.

Speaker A

If you play an instrument, you play scales not because you want to play scales, you play scales because it helps you play the music you want to play.

Speaker A

And so it becomes that kind of a thing.

Speaker A

And then I don't know.

Speaker A

Did that metaphor work?

Speaker A

I think that metaphor worked.

Speaker A

But then the big thing is where we spend our time.

Speaker A

Maybe you end up with some lumpy demand on your time.

Speaker A

But then the other thing I plan to do when I teach undergrads again is I'm going to use AI in a huge way to really nail down my examination.

Speaker A

I teach in person, so I'm going to have an in person midterm and an in person final.

Speaker A

And I'm going to make sure those are really good and that what we do builds into that final.

Speaker A

There are some students that have test anxiety and those kinds of things, but I'm not sure what we do about that.

Speaker B

Well, so, you know, one of the things that I've been talking to people around here a lot about is: what about more live client projects? The goal this semester, whether it's a programming class or something else, is that we're going to ultimately create a solution for somebody out in the world who has a real problem that's not constrained by what we write down in a textbook. Or what if it's a business plan pitch for whatever business you came up with, and you've got to go through all these steps along the way that result in this great presentation at the end of the semester?

Speaker B

And it's that presentation that brought everything together that is the ultimate, highly graded outcome, with those little pieces along the way. If you learned how to do them and you did them well, then that final project presentation is going to be like, wow.

Speaker B

The client's happy because they feel like you understood the problem and you created a solution that actually met what they needed.

Speaker A

I'm going to go out on the edge here because my initial reaction was, oh, my God, no.

Speaker A

Those kind of things, when they work, they're awesome.

Speaker A

Getting them to work well is an absolute nightmare for everybody involved.

Speaker A

If you don't have an engaged client, if you can't find enough clients, if problems aren't scoped well, which is often the case.

Speaker A

There's a lot that can go wrong there, but there's no reason you couldn't have AI clients that are available that put the right amount of fuzziness in.

Speaker A

In our field, the big problem is the clients can't articulate exactly what they want.

Speaker A

Although vibe coding can help a lot with refining those sorts of things.

Speaker A

And then they want the world in 10 weeks or 14 weeks or whatever it is.

Speaker A

So scoping that kind of thing is a problem.

Speaker A

But you could absolutely make a custom GPT or put some sort of a wrapper around a large language model that would address those issues.
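The wrapper idea can be sketched as a persona system prompt. This is a hypothetical illustration, not any specific product: the `make_ai_client` function, the project brief, and the fuzziness parameter are all invented here, and the function only assembles the message list for a chat-style LLM API; you'd pass the result to whatever chat model your campus licenses.

```python
# Sketch: wrap an LLM so it plays a realistic, imperfectly articulate
# client for a student project team. Everything here (function name,
# brief, fuzziness levels) is an invented example, not a real product.
def make_ai_client(project_brief, fuzziness="moderate"):
    """Return a chat message list with a system prompt that sets up the
    AI-client persona; the brief stays hidden from the students."""
    system = (
        "You are a business client working with a student project team.\n"
        f"Project brief (keep this to yourself): {project_brief}\n"
        f"Fuzziness level: {fuzziness}. Never state requirements "
        "precisely; answer the way a busy, non-technical client would.\n"
        "Stay within the brief's scope; if students ask for more, push "
        "back on timeline and budget like a real client would."
    )
    return [{"role": "system", "content": system}]

# Usage: start a conversation, then append the student's question.
messages = make_ai_client(
    "A scheduling app for a 12-employee coffee shop, 14-week timeline."
)
messages.append({"role": "user", "content": "What exactly do you need?"})
```

The point of the design is that the faculty member controls the brief and the scope once, up front, instead of recruiting and managing a live client all semester.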

Speaker B

Yeah, and that's where our conversation is beginning to go: it's scary to have live clients for all the reasons that you just said.

Speaker B

If things go sideways, what do you do as a faculty member and how do you salvage a semester when your client decided to peace out on you and just disappear?

Speaker B

But if your AI becomes your client, or you're able to use AI where the faculty member is the client but everything comes through prompts and the faculty member becomes that filter to ensure they're seeing what's going on there, it becomes way more doable than anything I would have suggested before about the faculty member serving as the client.

Speaker A

Yeah, I've used a lot of canned cases where it's controlled and scoped, but they lose some of the messiness.

Speaker A

That could totally be done.

Speaker A

If you can make a customer service chatbot, you can do this.

Speaker A

It's not all that different.

Speaker A

I think this idea of resilience is the way to go.

Speaker A

It's part of the answer.

Speaker A

It's not the entire answer.

Speaker A

It's one that individual faculty can start using right now.

Speaker A

We can't change the grade economy and the factory customer service mindset.

Speaker A

All of that we baked into higher ed over the last 40 plus years.

Speaker A

But you can change your assignments and you can change the structure of grading within your class.

Speaker A

I guess there are some exceptions, people that are doing coordinated classes, that sort of thing, but most of us can do at least something along those lines.

Speaker A

But I want to pull this to a little bit of a close before we move on to NotebookLM.

Speaker A

This is a huge challenge, but it's also an opportunity.

Speaker A

I know that's corny, but we've been whining about the state of higher ed for a very, very, very long time, and we've seen it get worse and worse.

Speaker A

Various forces have pushed us into this factory-line mentality where it's all about credentialing for starting a career, in this students-as-customers mindset.

Speaker A

It's just nonsense: grades rather than learning being the important thing.

Speaker A

We've done this over decades, so we're not going to fix it overnight.

Speaker A

But this could be the lever.

Speaker A

AI could be the lever that we need to fix the stuff that we should have been fixing a long time ago.

Speaker A

It's not going to be easy, but, to quote mom and dad, nothing worthwhile comes easy.

Speaker A

You said that to your kids.

Speaker A

You have, haven't you?

Speaker B

Guilty.

Speaker B

At least some variation of that statement.

Speaker A

Yep.

Speaker A

So, all right.

Speaker A

Any last words on resilience?

Speaker B

Yeah, one last word, Craig, and I think you laid that out.

Speaker B

Where are we going and what do we need to be doing and using this as the lever to start having those conversations?

Speaker B

We don't exactly know the answer.

Speaker B

We're all figuring this out as we go.

Speaker B

But I would say that AI resiliency, putting that floor in place so we can validate that students who come through our programs have learned what they need to learn and didn't cheat their way through, is the important floor that will allow us to experiment and springboard into what the next phase of higher education looks like.

Speaker A

All right, well said.

Speaker A

Okay, let's get down to the tool level here.

Speaker A

I love NotebookLM.

Speaker A

I use it for all kinds of stuff.

Speaker A

It kind of got on people's radar because of the audio overview.

Speaker A

They call it the podcast feature, which is cool.

Speaker A

There's nothing wrong with it.

Speaker A

It's pretty slick that it can do it.

Speaker A

And now it does videos, explainer type videos that are actually pretty good most of the time.

Speaker A

Let me just run through what NotebookLM is.

Speaker A

And so it's a retrieval augmented generation system, which basically means you give it source documents and it will create things, including responses to your prompts, based on those documents.

Speaker A

In theory at least, it will not go outside of those documents for anything that's kind of factual.
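The retrieval-augmented generation pattern being described here can be sketched in a few lines. This is a toy illustration, not NotebookLM's actual implementation: real systems use neural embeddings and an LLM, while this sketch substitutes bag-of-words cosine similarity for retrieval and stops at assembling the grounded prompt; the sample sources are invented.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant source chunk, then build a prompt that confines the model to
# those sources. Real systems use neural embeddings, not word counts.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, sources, k=1):
    """Rank source chunks by similarity to the question, keep top k."""
    q = embed(question)
    ranked = sorted(sources, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

def build_prompt(question, sources, k=1):
    """The grounding step: the model is told to answer only from sources."""
    context = "\n".join(retrieve(question, sources, k))
    return ("Answer ONLY from the sources below. If the answer is not "
            f"there, say so.\n\nSources:\n{context}\n\nQuestion: {question}")

# Invented sample sources for illustration.
sources = [
    "NotebookLM supports up to 300 sources per notebook.",
    "Audio overviews turn your sources into a two-host podcast.",
]
prompt = build_prompt("How many sources can one notebook hold?", sources)
```

The key behavior is that the irrelevant source never reaches the model, which is why answers stay anchored to your documents rather than the model's general training.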

Speaker A

And like I said, the first big thing that kind of made it, oh, wow, look at this.

Speaker A

Was the audio overview.

Speaker A

But people miss the point.

Speaker A

It's this idea that it answers based on your source documents.

Speaker A

And you can have up to 300 sources.

Speaker A

There are some limits, like 50,000 words per source.

Speaker A

And you can do videos, websites, PDFs, text files.

Speaker A

You can connect it to a Google Drive folder, which is really useful.

Speaker A

You can copy and paste text in.

Speaker A

So it's really a pretty big range of things.

Speaker A

It will actually do Gemini-based deep research within the tool now, which is kind of cool.

Speaker A

It'll search the web.

Speaker A

So it's really great.

Speaker A

But what they've done recently is they've given you more control over things in what it calls the studio.

Speaker A

So the way it's set up is you've got your source documents.

Speaker A

This is why we should be on video occasionally.

Speaker A

You've got your source documents over on the left. In the middle, it's a chat interface, just like ChatGPT or Gemini.

Speaker A

And then on the right, they've got this studio, which is where you can park notes.

Speaker A

Like if you have the chat session produce something you really like, you can tell it to save it to a note.

Speaker A

It parks it over in the studio.

Speaker A

And that's also where you can produce audio overviews, video overviews, mind maps, which are really interesting, reports of various kinds, flashcards, quizzes, infographics, which can often be pretty neat, a slide deck and a data table.

Speaker A

So not all of those would be appropriate for every type of source data, but it's a pretty wide range.

Speaker A

They made a couple of changes recently.

Speaker A

First, they've given you a lot more control over the audio overview, the flashcards, the quizzes, the infographics, the slide decks and the data tables.

Speaker A

And also the reports.

Speaker A

Now if you want to do a report, it gives you the ability to create your own, so you can specify exactly what topics you want it to focus in on, the level of detail, the structure of the report.

Speaker A

It's really pretty wide open.

Speaker A

And it also gives you some suggested reports you can run.

Speaker A

The canned ones are a briefing document, which is just an overview of what the sources say; a study guide, which gives you some quiz questions, essay questions, a glossary, and things like that; and a blog post, which is just the key takeaways from the sources.

Speaker A

And then it gives you some context specific suggested reports you can do.

Speaker A

Like for the Claude Constitution, I set up a notebook, and the suggested formats are an organizational policy white paper, a strategic accountability manual (whatever in the world that is), an educational primer, and then a conceptual framework overview.

Speaker A

So that's kind of cool that you can produce pretty much any kind of report you want.

Speaker A

All right, any.

Speaker B

So I'm going to give a comment that I think takes NotebookLM to a higher level of thinking as we begin thinking about what is possible.

Speaker B

And when you were describing all those things, it made me think about all the tasks that an entry-level person might be doing.

Speaker B

Putting those spreadsheets together, putting those data tables together, having those as part of a story you're trying to tell, as part of an informed decision you're trying to make, is important.

Speaker B

But it is a lot of busy work and figuring out how to put all of those things together.

Speaker B

And so when we start putting these things at people's fingertips, what that's going to do is highlight and focus.

Speaker B

Where is that place that the human has value add?

Speaker B

Figuring out what really should be in that slide deck, how to take what was created and get that story told that you wanted to tell, how to get that data to do what you need that data to do in a way that becomes very, very accessible to people who are trying to get a job done.

Speaker B

And so it begins changing.

Speaker B

I think it's what we're training up students to be able to do: not necessarily to crunch the numbers at the very minutia level, but to understand what those numbers mean, how we can take those numbers and use them to make decisions, how to put them into some sort of presentation that lets me, with confidence, stand in front of people and say, here's what we need to do and why. It's making, I think, the technical available to many.

Speaker B

But with that comes a different kind of responsibility in preparing them for how to understand that and how to do something with it.

Speaker A

Yeah, exactly.

Speaker A

So a couple of things there.

Speaker A

One is that we're changing the nature of the skills that are necessary.

Speaker B

Yep.

Speaker A

Like most business schools, we've got a hands on class where they learn Excel, PowerPoint, et cetera, et cetera.

Speaker A

You know, I mean, I think Excel still has a lot of value because you can do so much with it.

Speaker A

And it also embeds some analytical thinking when it's not so much about the tool.

Speaker A

But okay, you can make a pretty slide deck.

Speaker B

I mean, well, but can you stand in front of executives and deliver your slide deck?

Speaker A

And can you structure a slide deck in a way that communicates what you want to communicate?

Speaker B

And do you know it well enough to be able to talk to it?

Speaker B

Because I've seen a lot of really nice slide decks.

Speaker B

And when it comes to presentation time, nobody knows what's in them.

Speaker A

When I used to teach analytics, half my class was about data visualization.

Speaker A

We spent a lot of time on what is your story?

Speaker A

You have to figure out the story you're trying to tell and that drives everything else.

Speaker A

You know, what's better?

Speaker A

This chart or this chart?

Speaker A

I don't know what's the story.

Speaker A

And so I think we are really moving students up that stack, as those of us who are in the IT world like to say, where it's at a higher level of thinking.

Speaker A

But it's going to be a little bit of a transition to get there.

Speaker A

You mentioned slide decks.

Speaker A

That's the other big change in NotebookLM.

Speaker A

So it used to be you could not export the slide decks, but now you can.

Speaker A

It's still a little bit kludgy to be able to edit them.

Speaker A

You have to pull it into something like Canva that can extract the text.

Speaker A

Right now each slide is just kind of an image in the PowerPoint file.

Speaker A

Have you ever used that?

Speaker B

I haven't.

Speaker A

It's kind of cool.

Speaker A

It gives you some ideas how you might want to structure things.

Speaker A

So the mind map is really good for that.

Speaker A

But I have two big messages and two reasons I wanted to cover NotebookLM.

Speaker A

First of all, if you're not using NotebookLM, you should, because it really is pretty unique out there in the marketplace.

Speaker A

It does things that I can see bits and pieces of in other tools, but nothing pulls it all together.

Speaker A

And if you've got a Google account, you've got access.

Speaker A

It's really remarkable what it does.

Speaker A

The other big point is I see them in this phase where they demonstrate something works and then they give the user more control over it.

Speaker A

And so that's a reason to keep coming back to NotebookLM.

Speaker A

There's a lot I don't like.

Speaker A

The organizational level of the notebooks is pretty crappy.

Speaker A

Ironically for a notebook, they don't have a good way to organize all the notebooks.

Speaker A

And so there are some things I don't like about it, but it is a fantastically beneficial tool for anybody in knowledge work, but especially for those of us in higher ed, even if you're more on the administrative side of things.

Speaker A

Take your policy documents, your procedure documents, put them into NotebookLM.

Speaker A

Rob, how much time have you wasted trying to figure out how to do a stupid grade change or something you do once every other year?

Speaker B

Well, I don't waste a lot of time on it because I've created an agent in Copilot to do exactly that.

Speaker B

Copilot does similar things to what NotebookLM does.

Speaker B

And they've actually added basically a notebook feature in Copilot as well, which has been pretty handy.

Speaker A

Yeah, I've heard that.

Speaker A

I've not tried it yet, but I need to, even for something that simple.

Speaker A

When I teach, I'm going to put some of the course documents.

Speaker A

I'm on sabbatical, for those of you who didn't already know.

Speaker A

When I start teaching again, I'm going to put the syllabus and policy guidelines and that sort of thing in a notebook and say, before you ask me, go in and check this out.

Speaker A

And I think that's going to be helpful for students and for me.

Speaker A

All right, Rob, any last thoughts?

Speaker B

No.

Speaker B

I think we touched on a lot today.

Speaker A

We did.

Speaker A

We did.

Speaker A

We should do a comparison between Copilot's notebook feature and NotebookLM.

Speaker B

That'd be fun.

Speaker A

That could be interesting.

Speaker A

All right, well, Rob, if you don't have anything else, we'll say goodbye and join us again next time on AI Goes to College.

Speaker A

Thank you.