We're On Our Own: Academic Integrity through AI Resilience


Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States.
They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words.
Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much.
The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials.
That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it.
Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction.
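For faculty who want to experiment with that kind of AI-assisted formative feedback, a minimal sketch using Anthropic's Python SDK might look like the following. The rubric, prompt wording, and model name are illustrative assumptions, not anything prescribed in the episode.

```python
# Hypothetical sketch: AI-generated formative feedback on an ungraded
# scaffolding assignment. Assumes the `anthropic` package is installed and
# ANTHROPIC_API_KEY is set; the rubric and model name are placeholders.
import anthropic

client = anthropic.Anthropic()

RUBRIC = """Give formative (ungraded) feedback on this draft:
1. Is the thesis clear and arguable?
2. Is each claim supported by evidence from the assigned readings?
3. What one revision would most improve the next draft?
Be specific, encouraging, and under 250 words."""

def formative_feedback(student_draft: str) -> str:
    """Return feedback text for a single student submission."""
    response = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model name
        max_tokens=600,
        system="You are a writing coach for an undergraduate business course.",
        messages=[{"role": "user", "content": f"{RUBRIC}\n\n---\n{student_draft}"}],
    )
    return response.content[0].text

# Example: loop over submissions exported from the LMS
# for name, draft in submissions.items():
#     print(name, formative_feedback(draft), sep="\n")
```

The point is not the specific prompt but the workflow: ungraded scaffolding work gets fast feedback, while the instructor's grading time goes to the fewer, higher-stakes assessments.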
Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades.
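For instructors curious about simulating a client, here is an equally rough sketch of the wrapper idea, with an invented persona that bakes in the fuzziness and scope creep Craig and Rob describe. The persona text and model name are assumptions for illustration only.

```python
# Hypothetical sketch: an AI "client" for a class project, with deliberate
# vagueness and scope creep built into the persona.
import anthropic

client = anthropic.Anthropic()

CLIENT_PERSONA = """You are Dana, owner of a small catering business, acting as
the client for a student consulting project. Stay in character. Be vague about
requirements at first, occasionally contradict earlier statements, and add one
new request ("scope creep") every few exchanges. Never hand students a tidy
specification; make them ask good questions to pin it down."""

history = []  # running conversation so the simulated client stays consistent

def ask_client(student_message: str) -> str:
    """Send a student message to the simulated client and return the reply."""
    history.append({"role": "user", "content": student_message})
    reply = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model name
        max_tokens=500,
        system=CLIENT_PERSONA,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

# print(ask_client("Hi Dana, can you describe the problem you want us to solve?"))
```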
The episode wraps with an update on NotebookLM. Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.
Links referenced in this episode:
Mentioned in this episode:
AI Goes to College Newsletter
Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out what in the world is going on with generative AI.
Speaker AI'm joined once again by my friend, colleague and co host, Dr. Robert E. Crossler from Washington State University.
Speaker ARob, how are things out west?
Speaker BThings are good, Craig.
Speaker BWe got the snow now that you had a few weeks ago.
Speaker BIt finally found us.
Speaker AWe're in that springtime up and down, but nobody cares about that.
Speaker ASo let's get started.
Speaker ABefore we get to our first topic, I want to encourage everybody to stay with us till the end, because I want to talk about some interesting updates to NotebookLM, Google's tool that's a kind of retrieval-augmented generation system.
Speaker ARob, do you use NotebookLM much?
Speaker BI do, I play with it.
Speaker BIt's a good way to organize thoughts and ideas and things.
Speaker AYeah, it's impressive and they keep pushing its capabilities, and they've made some updates recently, not earth-shattering, but useful.
Speaker ASo we'll talk about that at the end.
Speaker BWhat a teaser, Craig.
Speaker BWay to go.
Speaker AI know.
Speaker ALet's start with a different major AI provider, Anthropic.
Speaker AA few weeks ago Anthropic released Claude's Constitution, which is an 84 page behemoth that lays out how Anthropic governs the way in which Claude, quote unquote, thinks, reasons, refuses and engages.
Speaker ASo it gives us a little bit of a look under the hood without being technical.
Speaker AIt's not a technical document in any meaningful way, but there's a lot to unpack.
Speaker ASo we may come back and talk about this some more.
Speaker ABut I wanted to touch on a few things that I thought were especially relevant for higher ed.
Speaker AI want to start with the high level set of guiding principles and I'm going to read here, which does not make for good podcasting, but I'm going to do that to make sure I get it right.
Speaker ASo this is directly verbatim from Claude's Constitution.
Speaker AIn order to be both safe and beneficial, we want all current Claude models to be, one, broadly safe, not undermining appropriate human mechanisms to oversee AI during the current phase of development.
Speaker AI want to come back to that.
Speaker ABy the way, that phrasing is interesting.
Speaker ANumber two, broadly ethical, being honest, acting according to good values and avoiding actions that are inappropriate, dangerous or harmful.
Speaker ANumber three, compliant with Anthropic's guidelines, acting in accordance with more specific guidelines from Anthropic where relevant; and number four, genuinely helpful, benefiting the operators and users they interact with.
Speaker AI want to pause here for a second.
Speaker ASo there's a little bit of a hierarchy here in terms of the entities involved, the stakeholders.
Speaker AAt the top is Anthropic; operators are companies or universities that use Claude, primarily through the API.
Speaker AFor example, if Washington State created an advising system on top of Claude, they would be an operator.
Speaker AThe users are at the bottom of that hierarchy, so that becomes important.
Speaker AAll right, I'm going to continue on with reading.
Speaker AIn cases of apparent conflict, Claude should generally prioritize these priorities in the order in which they're listed.
Speaker APrioritizing being broadly safe first, broadly ethical second, following Anthropic's guidelines third, and otherwise being genuinely helpful to operators and users.
Speaker AAnd that comes in last.
Speaker AThey go on to say here the notion of prioritization is holistic rather than strict.
Speaker AThat is, assuming Claude is not violating any hard constraints.
Speaker AHigher priority considerations should generally dominate lower priority ones.
Speaker ABut we do want Claude to weigh these different priorities in forming an overall judgment, rather than only viewing lower priorities as tiebreakers relative to higher ones.
Speaker AAll right, that's one tiny piece of the Constitution, but I think there's a lot to unpack there.
Speaker ASo first of all, genuinely helpful, although it's important it comes in last.
Speaker ASo it is not going to be helpful to the user if it violates what Anthropic thinks is broadly safe.
Speaker ASo you can't put in a prompt that says, hey, how can I social engineer this type of user to get their credit card information?
Speaker AI don't think it'll do that.
Speaker AIt shouldn't.
Speaker ASo I guess that's a good thing.
Speaker AWhat do you think about that?
Speaker BYeah, I mean, it's interesting because a, what is safe and what is ethical?
Speaker BWho defines that?
Speaker BRight.
Speaker BWe could take an entire semester long class talking about what those things mean.
Speaker BSo, you know, on the surface I think it seems reasonable, but it gets into execution and what that actually looks like when the rubber meets the road.
Speaker ASee, this is why we're co hosts, because that's exactly where I was going to go with this.
Speaker ASo it's anthropic deciding what's broadly safe.
Speaker AAnd I think that comes into play even more strongly in the second prescription to be broadly ethical.
Speaker ASo being honest, I mean, we can kind of all agree on what that is, but acting according to good value, I mean, who, you know, who's deciding that?
Speaker BAnd here's where I think this is interesting.
Speaker BI've had the opportunity to teach internationally, and depending on what country you're in, if it's a more socialistic country, working together is deemed ethical and appropriate.
Speaker BWhereas in the United States, a much more individualized country, everybody doing their own work is deemed the ethical way to do things.
Speaker BSo in some cultures, the collective good of everybody performing well is valued over individual performance, which is the metric that someone in a place like the United States might come down on when it comes to ethics.
Speaker BWhere's Anthropic on that range of what defines ethically good?
Speaker BIs it the good for the whole or the good for the individual?
Speaker AYeah.
Speaker AAnother example, until fairly recently in China, seeking privacy was seen as a pretty negative social signal.
Speaker AIf you wanted privacy, you were hiding something, you were trying to do something that was good for you and not good for the collective, which is pretty contrary to the values in China, although values are very amorphous, emerging things.
Speaker ASo if we trust Anthropic, okay, then I guess we're all right.
Speaker ABut I've got some concerns there.
Speaker AAlthough at the end of the day, it's their product, you know, they get to decide these things.
Speaker ASo I don't have any big problem with that.
Speaker ABut we just need to be aware that Anthropic could decide, and I'm totally making stuff up here, that helping students cheat gets them through college and gets them better jobs.
Speaker AAnd so why not do their homework for them?
Speaker AMm.
Speaker AI'm not saying that they're doing that, but it's not out of the realm of possibility.
Speaker BRight.
Speaker BAnd I see an opportunity here for Anthropic, though, if they execute well on this and they establish this well as a differentiating factor, that will make it easier for institutions to adopt.
Speaker BSo I'll talk about higher education.
Speaker BIf their alignment on ethics and safety and so forth matches that of the vast majority of universities, then when universities are considering what tools to bring in house, this may actually be a differentiating factor that works in their favor.
Speaker BAnd the fact that they're open about this constitution and putting it out there for review and poking holes at I think is only going to help make it stronger.
Speaker AYeah, that's a huge point.
Speaker AI really like Anthropic's openness here.
Speaker AIt's a long document, there's a lot to go through, but it's pretty wide open.
Speaker AIn a lot of cases, if you want to dig into it, you can get into the weeds.
Speaker AAnd I strongly encourage listeners to take the time to at least scan through the constitution.
Speaker AIt's available.
Speaker AI think it's available as a PDF.
Speaker AWe'll have a link to the website in the show notes.
Speaker ABut they go into what their values are in some detail.
Speaker ASo I like this.
Speaker AI mean, I have no idea what's governing ChatGPT.
Speaker BWell, and here's, here's what I'd like to see, Craig, and we'll see how this plays out.
Speaker BIs Anthropic going to be able to be held accountable for consistency with their constitution?
Speaker BDoes it have that ability to kind of hold their feet to the fire?
Speaker BTo say, this is what you said you were going to do.
Speaker BYou didn't, so please adapt or adjust.
Speaker AIt's funny you should mention that because that's another thing that I wanted to bring up that's very interesting.
Speaker AI said that the phrasing was interesting under broadly safe and I'm going to read that again.
Speaker ABroadly safe, not undermining appropriate human mechanisms to oversee AI during the current phase of development.
Speaker AWhen I read that, it's like, oh, they have given themselves a pretty significant amount of wiggle room here.
Speaker AAnd not inappropriately; you know, if you look back over the, what, three years and a couple of months that we've been dealing with all of this since ChatGPT's release, what it can do now is way different than what it could do a few years ago.
Speaker AIn fact, here's a little aside.
Speaker AHave you ever heard of the Colossal Cave adventure game?
Speaker BI have not.
Speaker AIt was a mainframe text based adventure game where you were going through this labyrinth of caves.
Speaker AIt was all text based literally on a green screen, kind of old school terminal.
Speaker AWhen I was getting my master's at Appalachian State, I was at the computer center one night.
Speaker AI literally stayed there all night playing this stupid game.
Speaker AIt was one of the most engaging, infuriating, fun time wasting things I've ever done in my life.
Speaker ABut it was an interesting experience.
Speaker AAnd so Rob, you and the listeners might be wondering, what tangent is Craig off on?
Speaker AWell, the tangent is that this morning while I was supposed to be reviewing a paper and actually was reviewing a paper, on my right hand screen I had Claude Code creating a clone of the Colossal Cave where the whole framing was around understanding large language models.
Speaker AI wanted to try a really simple prompt.
Speaker AThis was a really simple prompt, three sentences, four sentences long, with me every once in a while going over and clicking on approve.
Speaker AIn an hour I had a Python game.
Speaker ANow I haven't played it yet, but Claude Code even goes through and runs the game in a sandbox to make sure it executes, because I could see it go:
Speaker AOh crap, I've got an error here.
Speaker ALet me figure that out now.
Speaker AAgain, I haven't played it.
Speaker BWell, we'll know if it works, if you're complaining about being really tired tomorrow because you didn't get any sleep.
Speaker AYeah, I purposely did not try to test it this morning.
Speaker AIt's going to be a weekend fun thing, but you couldn't do that a few years ago.
Speaker ASo these capabilities are always going to be, well, at least in the short term, they're going to be increasing rapidly, which is why that current phase of development phrasing is so interesting.
Speaker BYeah.
Speaker BAnd I think that example you gave is important for a couple of reasons.
Speaker BOne is that this constitution is subject to change; as things develop, there may be reasons why they change it, purposefully.
Speaker BBut also, your example of going into Claude with a very simple prompt and creating something is a great best practice I would recommend for anyone: step in and do that, because as people who are working with students who are playing and exploring, if we're not doing the same thing, we're not going to be on the same page as them.
Speaker BAnd the one thing I would look at in this constitution that I have questions about is the idea that it's going to be safe and ethical; I've read people's anecdotes where, using Claude-based tools, it's gone out and wiped hard drives and done things that would arguably be not helpful at all.
Speaker BAnd so is it capable of staying within those guardrails of what the constitution says is the way things are?
Speaker AWell, that's really interesting.
Speaker AAnd so I want to come back to that.
Speaker ABut first I want to emphasize the first little bit of what you just said.
Speaker AIf you're not using these tools, you're making a huge mistake.
Speaker ASo even if you don't like AI, even if you're anti AI, you have to be using these tools so that you can understand what your students are doing.
Speaker ADid I paraphrase that correctly?
Speaker ASo that could be one of the most important things you've ever said or I've ever said on this podcast is you've got to play with this stuff.
Speaker ANow, you don't need to be necessarily out on the edge like we are, because I'd be willing to bet it's a pretty tiny portion of students that are using something like Claude Code or Codex or Anti Gravity, but you need to be using these tools.
Speaker AThe helpfulness bit, we have to take it to a different level.
Speaker AAnd it's not whether or not it actually is helpful, it's whether or not it thinks it's being helpful.
Speaker ASo when it wipes a directory, it thinks it's doing something on behalf of the user.
Speaker ASo it's not, oh, I'm going to do this thing that I know is unhelpful.
Speaker AIt's going to be like somebody that throws out a bunch of paperwork that you needed and they thought you didn't need it anymore.
Speaker AI mean, that's kind of what it's doing.
Speaker AAnd so, to your larger point, I think we can't rely on Claude's version of what helpful is to map entirely, 100%, onto what our version of helpful is.
Speaker ASo we have to put our own guardrails in place.
Speaker BYep.
Speaker BAnd what I think is great about this, and I had this epiphany yesterday when I was reading about horror stories from AI and things that are happening, is that one of the things we do when we're training up, whether it's information systems students or computer science students, is teach them how to think like the computer does, how to understand it, how to talk to it, to control it, if you will.
Speaker BAnd now that we've created these large language models, these Claudes, where you can go off and, with four sentences, create something that executes and works, people who don't have that training on what could go wrong are creating opportunities for just that to happen.
Speaker BAnd so I think that begins to inform how we are teaching the intro to MIS class to all business students.
Speaker BHow do we get marketing students and accounting students, students who may not take much more than an introductory MIS class, to learn the right ways to interact with these systems, such that deleting a directory is not viewed as desired behavior?
Speaker AYeah, yeah.
Speaker AAnd for those of you who are not in a business school, MIS is Management Information Systems.
Speaker AAnd it's kind of computers in the business school.
Speaker BYeah.
Speaker AFor some reason, I was transported way back, which is something old guys do fairly frequently.
Speaker AMy first job in computers was in the 1980s, mid-80s when I was working at a retail store, MicroAge.
Speaker AIt was not unusual for somebody to put in a floppy disk, remember the old floppy disk, try to format it and not be paying attention, and they wipe out their hard drive.
Speaker AThey format their hard drive.
Speaker ASo this kind of thing, maybe it's at a bigger scale now, and maybe some of the tools like OpenClaw or Molt, whatever they're calling it today, they changed the name again.
Speaker AI think tools like that Claude, Cowork, some of these agentic tools, they open up that kind of exposure again.
Speaker ABut there'll be a way to solve it.
Speaker AAnd if you're paying attention, it shouldn't happen.
Speaker ADo you follow good backup practices and that sort of thing?
Speaker ABut people don't.
Speaker AWe probably don't either.
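One concrete form those guardrails can take is a confirmation step around any destructive action an agent proposes. A minimal sketch, with invented function names and a deliberately simple pattern list rather than any real product's API:

```python
# Hypothetical sketch: require explicit human confirmation before an AI agent
# executes destructive shell commands. The command patterns and the
# run_agent_command() helper are illustrative assumptions, not a real API.
import re
import subprocess

DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",      # recursive delete
    r"\bmkfs\b",          # format a filesystem
    r"\bdd\s+if=",        # raw disk writes
    r"\bgit\s+reset\s+--hard\b",
]

def is_destructive(command: str) -> bool:
    """Return True if the proposed command matches a known destructive pattern."""
    return any(re.search(p, command) for p in DESTRUCTIVE_PATTERNS)

def run_agent_command(command: str) -> None:
    """Run a command the agent proposed, pausing for human approval when needed."""
    if is_destructive(command):
        answer = input(f"Agent wants to run: {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Blocked by guardrail.")
            return
    subprocess.run(command, shell=True, check=False)

# run_agent_command("rm -rf ./build")   # would prompt before running
```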
Speaker AThere's another really interesting thing in the Constitution.
Speaker AAnd then I think I've got two more points and then we can move on.
Speaker ASo I'm going to read again.
Speaker AOne of the things that Claude prioritizes is the user's long-term well-being, which we should probably look at in depth in another episode, because I know you in particular have concerns about people using Claude as a counselor and the effects that AI, not just Claude, but AI, may be having on mental health.
Speaker ABut the Constitution says concern for user well-being means that Claude should avoid being sycophantic or trying to foster excessive engagement or reliance on itself if this isn't in the person's genuine interest.
Speaker ASo I like that.
Speaker AAvoid being sycophantic.
Speaker AAnd that basically means quit being a suck up.
Speaker ATrying to foster excessive engagement, I think is a shout out to the antisocial media folks.
Speaker ABut then this idea, if it isn't in the person's genuine self interest.
Speaker ASo again, who gets to decide that?
Speaker AAnd then it goes on.
Speaker AI'm going to paraphrase.
Speaker AIt goes on to describe acceptable forms of reliance.
Speaker ASo you don't want to be reliant on AI, but here are some ways that might be okay.
Speaker ASomeone asks for a given piece of code rather than being taught how to code it themselves, for example.
Speaker AAnd then the situation is different if a person has expressed a desire to improve their own abilities.
Speaker AOr in other cases where Claude can reasonably infer that engagement or dependence isn't in the user's interest.
Speaker ASo that's a really muddled kind of view.
Speaker ASo it's kind of, yeah, don't rely on AI unless you want to rely on AI, unless you tell us you don't want to rely on AI.
Speaker AAnd it's like, okay, I'm frankly not sure what to make of that.
Speaker AAnd then it throws in this little bit at the end of the paragraph.
Speaker AFor example, if a person relies on Claude for emotional support, Claude can provide this support while showing that it cares about the person having other beneficial sources of support in their life.
Speaker AGood night.
Speaker BThe anthropomorphism there, it should "show that it cares"?
Speaker BThey anthropomorphized that pretty strongly.
Speaker AYeah.
Speaker AI got a "ha" out of Claude Code, by the way, which I was pretty proud of.
Speaker AYeah.
Speaker AAnd then it can provide this emotional support, but it also wants you to have other sources of emotional support.
Speaker ASo I'm just going to kind of leave that one there because I don't think we've got time in this episode to get into that, but I think we should circle back to that.
Speaker BYeah, no, I think it's worth talking about that one.
Speaker BI think there's a lot of trails we can take that to that perspective.
Speaker AYeah.
Speaker AAnd so here's the big payoff.
Speaker AWhen I went through the Constitution, I set up a NotebookLM notebook and kind of went through it, and I'm slowly making my way through the document in detail.
Speaker AThe thing that kept coming back to me is that we are on our own when it comes to academic integrity.
Speaker AAnd Anthropic is not going to help us.
Speaker AAnd if arguably the most ethical and transparent of the big AI companies is not going to help us, OpenAI is not going to help.
Speaker AMeta is not going to help.
Speaker AGoogle's not going to help.
Speaker AWe are on our own.
Speaker AIt's not going to block students from using this in ways that violate academic integrity.
Speaker AIt's just not going to happen.
Speaker BCraig.
Speaker BTo that point.
Speaker BI saw a post on LinkedIn yesterday where someone was sharing a new tool out there called Einstein.
Speaker BAnd what this tool does is it will actually watch all of your videos for you in Canvas, make your posts for you, reply to your posts, and do your homework for you.
Speaker BAnd all you have to do is give it your credentials to log into Canvas.
Speaker AWhat could go wrong?
Speaker BWhat could go wrong there?
Speaker BBut tools are being created that exactly reinforce the point that you just made: we have to find ways to ensure that academic integrity exists.
Speaker BAnd the ways we've always done things are going to be made irrelevant, and it's going to become a game of cat and mouse.
Speaker BMuch like the cybersecurity world, as soon as you fix one problem, somebody else is finding a new way to get access to your data, to your information, or whatever those things may be.
Speaker AYeah, yeah, absolutely.
Speaker ASo we have to rethink higher ed.
Speaker AI know that's a big thing.
Speaker AI know it's really hard to do.
Speaker AI know there are lots of forces that will get in the way, but if we don't, we're at best going to be largely irrelevant in a lot of what we do.
Speaker ASo I think we have to rethink that.
Speaker AAnd one of the things we can do more in the short term is to use a term that you coined, make learning activities AI resilient.
Speaker AYep.
Speaker ASo let's put the Constitution aside for now.
Speaker AWait,
Speaker BAre you bringing politics into this?
Speaker AThis is not a political podcast.
Speaker ALet's put Claude's Constitution aside for now.
Speaker AThere go some listeners, but we could gain some others, and talk about AI resilience.
Speaker ASave me, Rob.
Speaker BYeah, so AI resilience, Craig and I talk about this a lot, is the idea that when we are putting together our learning outcomes, our learning goals, and mapping how we're going to get to those within whatever class we're teaching, we should do it with an idea that those learning outcomes can be achieved resiliently in the face of AI and the changing nature of AI.
Speaker BSo regardless of what AI is capable of, we need to be looking at what we are doing in that learning journey to help students truly learn what they need to learn.
Speaker BAnd if we don't do this, Craig, we lose our value proposition.
Speaker BI like to say it's preventing people's ability to cheat their way through our classes by using AI.
Speaker AOkay, that all makes sense.
Speaker AAll sounds good.
Speaker ABut I don't know exactly what AI resilient means in an operational way, so help us out there.
Speaker BSo I think it depends on what you're doing.
Speaker BI would say that if your assignments are as simple as, here is a prompt for an essay, go and write it, and you have a large portion of the points students are going to earn in the class assigned to the writing of that essay, that would be an example of something that didn't have a lot of resiliency, because you'd have no way of knowing if they learned what it was they were supposed to be learning through writing that document.
Speaker BBut with AI and some of the tools that AI has, what if we're able to begin looking into the process of creating that document?
Speaker BAnd so one thing that I've seen, when people say that the creation of that document still has value, is that if we ask for the prompts that were used with AI, how those prompts were refined as an example of the critical thinking, and a reflection on those prompts, we can begin to grade that process.
Speaker BSo it's not so much about what you create, it becomes about how you create.
Speaker BI've got a colleague who teaches a programming class, and it used to be, could you write the program, much like Craig was able to do in an hour in that previous example?
Speaker BThat used to be the hard thing in this class, and now if you have the right tools, it becomes easy.
Speaker BWell, the evaluation has moved to the recording of a video the students make of them describing what it is their code does.
Speaker BSo it becomes more about being able to understand it, explain it and so forth.
Speaker BSo my ability to grade what you've done takes a different lens, one that might be enhanced by AI.
Speaker BIt might help you to learn those things.
Speaker BBut at the end of the day, you know, do I feel confident that I can look at, you know, whatever it is that I'm evaluating and say, yes, the student has demonstrated an ability to achieve this outcome.
Speaker AIt sounds like that's a little bit of an opportunity to level up the student skills too.
Speaker ABut.
Speaker AAll right, so I have a huge issue with that.
Speaker AAn operational issue, not a philosophical issue.
Speaker AHow does that scale?
Speaker AYou've heard me whine about this before.
Speaker AWhen I teach Undergraduates, I teach 90 or 100 undergraduate students, so I don't know how to scale that.
Speaker BWell, in some ways I think AI helps us scale it because AI can help us with grading it.
Speaker BI would say that, as academics, with these tools out there, we're not hands off with them either.
Speaker BRight?
Speaker BSo I think using the tools can help us be pointed to where our time can be used most efficiently in grading those sorts of things.
Speaker BYou know, even if a document becomes 25, 35, 50 pages long, because they included each version of the document and gave you their critical reflection on their process, you're not reading every iteration of the document.
Speaker BYou're actually spending more of your time looking at their reflection statements and looking at the things that give you an idea of the process.
Speaker BSo you might begin to look at different aspects of things to say, okay, here's where it is purposeful, meaningful to give that feedback, where I want to give that feedback.
Speaker BSo we have to change our paradigm a little bit and how we grade things and what we're speaking into as well.
Speaker AYeah, I think that's a good point.
Speaker AAI could be part of the cure because frankly, I don't know.
Speaker ADoes it matter if a well tuned AI system gives the feedback or you or I give the feedback?
Speaker BWell, here's where I think it matters is I have to be willing to stand behind the feedback that is given.
Speaker BAnd for those of you who are privileged enough to be able to work with doctoral students as teaching assistants, when a doctoral student gives feedback and I'm training them up into how to be teachers, eventually professors, if they mark something wrong and they graded it poorly, I have to own that too and say, okay, I did a bad job, so there are control mechanisms that I'd put into place when I'm having some other person grade for me.
Speaker BThose same sorts of control mechanisms should be in place to ensure that the grades that are showing up in students gradebooks adequately reflect what I would do.
Speaker BThe other thing I would ask people to think about is, are we moving to a world where, okay, I need to spend more time grading the things that you turn in, but I'm going to ask you to turn in fewer things that are going to get a grade?
Speaker BAt which point it's just shifting that time to say okay, at midterms and at finals.
Speaker BThose are two blocks of time where I'm heads down doing extra grading.
Speaker BBut throughout the process it's peer grading, it's other sorts of things.
Speaker BIt's getting feedback to really push those students.
Speaker BIt moves us away from what I think we've called the grade economy, where it's do this, do that, get this grade, get that grade, toward something different.
Speaker BWe're going to go through a semester and this is kind of how a doctoral seminar often works.
Speaker BWe go through all semester long.
Speaker BAnd if you've been doing all the readings and engaging in all the conversations, that end paper or that end exam that I give you is your grade in the class.
Speaker BAnd some students rise to the challenge and others don't.
Speaker BMuch, much different world for undergraduate students.
Speaker BAnd if I went to a single final exam, I would freak out 75% of the students in my class.
Speaker BBut that may be a viable solution depending on how you manage the relationships with the students.
Speaker AI can see a little bit of a variation on that.
Speaker AIf you have these scaffolding assignments that aren't graded, that give, again, well-tuned AI feedback to the students, you take the grade out and it changes things.
Speaker AIt becomes almost like playing scales.
Speaker AIf you play an instrument, you play scales not because you want to play scales, you play scales because it helps you play the music you want to play.
Speaker AAnd so it becomes that kind of a thing.
Speaker AAnd then I don't know.
Speaker ADid that metaphor work?
Speaker AI think that metaphor worked.
Speaker ABut then the big thing is where we spend our time.
Speaker AMaybe you end up with some lumpy demand on your time.
Speaker ABut then the other thing I plan to do when I teach undergrads again is I'm going to use AI in a huge way to really nail down my examination.
Speaker AI teach in person, so I'm going to have an in person midterm and an in person final.
Speaker AAnd I'm going to make sure those are really good and that what we do builds into that final.
Speaker AThere are some students that have test anxiety and those kinds of things, but I'm not sure what we do about that.
Speaker BWell, so, you know, one of the things that I've been talking to people around here a lot about is, what about more live client projects, where the goal this semester, whether it's a programming class where we're ultimately going to create a solution for somebody out in the world who has a real problem that's not constrained by what we write down in a textbook, or a business plan pitch for whatever business you came up with, is that you've got to go through all these steps along the way that result in this great presentation at the end of the semester.
Speaker BAnd it's that presentation of what brought everything together that is the ultimate, highly graded outcome with those little pieces along the way that if you learned how to do them and you did them well, then that final project presentation is going to be like, wow.
Speaker BThe client's happy because they feel like you understood the problem and you created a solution that actually met what they needed.
Speaker AI'm going to go out on the edge here because my initial reaction was, oh, my God, no.
Speaker AThose kind of things, when they work, they're awesome.
Speaker AGetting them to work well is an absolute nightmare for everybody involved.
Speaker AIf you don't have an engaged client, if you can't find enough clients, problems aren't scoped well, which is often the case.
Speaker AThere's a lot that can go wrong there, but there's no reason you couldn't have AI clients that are available that put the right amount of fuzziness in.
Speaker AIn our field, the big problem is the clients can't articulate exactly what they want.
Speaker AAlthough vibe coding can help a lot with refining those sorts of things.
Speaker AAnd then they want the world in 10 weeks or 14 weeks or whatever it is.
Speaker ASo scoping that kind of thing is a problem.
Speaker ABut you could absolutely make a custom GPT or put some sort of a wrapper around a large language model that would address those issues.
Speaker BYeah, and that's where our conversation is beginning to go, is it's scary to have live clients for all the reasons that you just said.
Speaker BIf things go sideways, what do you do as a faculty member and how do you salvage a semester when your client decided to peace out on you and just disappear?
Speaker BBut if your AI becomes your client, or you're able to use AI as the faculty member, where the faculty member is the client but they're getting everything through prompts and they kind of become that filter to ensure they're seeing what's going on there, it becomes way more doable than if I were to suggest the faculty member serve as the client before.
Speaker AYeah, I've used a lot of canned cases where it's controlled and scoped, but they lose some of the messiness.
Speaker AThat could totally be done.
Speaker AIf you can make a customer service chatbot, you can do this.
Speaker AIt's not all that different.
Speaker AI think this idea of resilience is the way to go.
Speaker AIt's part of the answer.
Speaker AIt's not the entire answer.
Speaker AIt's one that individual faculty can start using right now.
Speaker AWe can't change the grade economy and the factory customer service mindset.
Speaker AAll of that we baked into higher ed over the last 40 plus years.
Speaker ABut you can change your assignments and you can change the structure of grading within your class.
Speaker AI guess there are some exceptions, people that are doing coordinated classes, that sort of thing, but most of us can do at least something along those lines.
Speaker ABut I want to pull this to a little bit of a close before we move on to NotebookLM.
Speaker AThis is a huge challenge, but it's also an opportunity.
Speaker AI know that's corny, but we've been whining about the state of higher ed for a very, very, very long time, and we've seen it get worse and worse.
Speaker AVarious forces have pushed us into this factory line mentality, where it's all about credentialing for starting a career, and into this students-as-customers mindset.
Speaker AJust nonsense grades rather than learning being the important thing.
Speaker AWe've done this over decades, so we're not going to fix it overnight.
Speaker ABut this could be the lever.
Speaker AAI could be the lever that we need to fix the stuff that we should have been fixing a long time ago.
Speaker AIt's not going to be easy, but, to quote mom and dad, nothing worthwhile comes easy.
Speaker AYou said that to your kids.
Speaker AYou have, haven't you?
Speaker BGuilty.
Speaker BAt least some variation of that statement.
Speaker AYep.
Speaker ASo, all right.
Speaker AAny last words on resilience?
Speaker BYeah, one last word, Craig, and I think you laid that out.
Speaker BWhere are we going and what do we need to be doing and using this as the lever to start having those conversations?
Speaker BWe don't exactly know the answer.
Speaker BWe're all figuring this out as we go.
Speaker BBut I would say that AI resiliency and putting that floor in place that we can validate that students who come through our programs have learned what they need to learn and they didn't cheat their way through is the important floor that we put in place that will allow us to experiment and springboard into what the next phase of higher education looks like.
Speaker AAll right, well said.
Speaker AOkay, let's get down to the tool level here.
Speaker AI love NotebookLM.
Speaker AI use it for all kinds of stuff.
Speaker AIt kind of got on people's radar because of the audio overview.
Speaker AThey call it the podcast feature, which is cool.
Speaker AThere's nothing wrong with it.
Speaker AIt's pretty slick that it can do it.
Speaker AAnd now it does videos, explainer type videos that are actually pretty good most of the time.
Speaker ALet me just run through what NotebookLM is.
Speaker AAnd so it's a retrieval augmented generation system, which basically means you give it source documents and it will create things, including responses to your prompts, based on those documents.
Speaker AIn theory at least, it will not go outside of those documents for anything that's kind of factual.
Speaker AAnd like I said, the first big thing that kind of made it, oh, wow, look at this.
Speaker AWas the audio overview.
Speaker ABut people miss the point.
Speaker AIt's this idea that it answers based on your source documents.
Speaker AAnd you can have up to 300 sources.
Speaker AThere's some limits, like 50,000 words per source.
Speaker AAnd you can do videos, websites, PDFs, text files.
Speaker AYou can connect it to a Google Drive folder, which is really useful.
Speaker AYou can copy and paste text in.
Speaker ASo it's really a pretty big range of things.
Speaker AIt will actually do Gemini based deep research within the tool now, which is kind of cool.
Speaker AIt'll search the web.
Speaker ASo it's really great.
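For anyone who wants to see the retrieval-augmented generation idea in code form, here is a heavily simplified sketch: pick the source chunks most relevant to a question, then have the model answer using only those chunks. The keyword scoring, prompt, and model name are toy assumptions (reusing the same SDK as the earlier sketches purely for illustration); NotebookLM's actual pipeline is, of course, far more sophisticated.

```python
# Hypothetical sketch of the retrieval-augmented-generation pattern:
# answer questions using only the user's own source documents.
# Keyword-overlap scoring stands in for real embeddings.
import anthropic

client = anthropic.Anthropic()

def score(chunk: str, question: str) -> int:
    """Toy relevance score: count words shared between a chunk and the question."""
    return len(set(chunk.lower().split()) & set(question.lower().split()))

def rag_answer(question: str, documents: list[str], top_k: int = 3) -> str:
    """Retrieve the most relevant chunks, then generate an answer grounded in them."""
    chunks = [c for doc in documents for c in doc.split("\n\n")]
    relevant = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n\n".join(relevant)
    response = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model name
        max_tokens=500,
        system="Answer using ONLY the provided sources. If they don't cover it, say so.",
        messages=[{"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"}],
    )
    return response.content[0].text
```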
Speaker ABut what they've done recently is they've given you more control over things in what it calls the studio.
Speaker ASo the way it's set up is you've got your source documents.
Speaker AThis is why we should be on video occasionally.
Speaker AYou've got your source documents over on the left; in the middle, it's a chat interface, just like ChatGPT or Gemini.
Speaker AAnd then on the right, they've got this studio, which is where you can park notes.
Speaker ALike if you have the chat session produce something you really like, you can tell it to save it to a note.
Speaker AIt parks it over in the studio.
Speaker AAnd that's also where you can produce audio overviews, video overviews, mind maps, which are really interesting, reports of various kinds, flashcards, quizzes, infographics, which can often be pretty neat, a slide deck and a data table.
Speaker ASo not all of those would be appropriate for every type of source data, but it's a pretty wide range.
Speaker AThey made a couple of changes recently.
Speaker AFirst, they've given you a lot more control over the audio overview, the flashcards, the quizzes, the infographics, the slide decks and the data tables.
Speaker AAnd also the reports.
Speaker ANow if you want to do a report, it gives you the ability to create your own, so you can specify exactly what topics you want focused in on, the level of detail, and the structure of the report.
Speaker AIt's really pretty wide open.
Speaker AAnd it also gives you some suggested reports you can run.
Speaker AThe canned ones are a briefing document, which is just an overview of what the sources say; a study guide, which gives you some quiz questions and essay questions and a glossary and some things like that; and a blog post, which is just the key takeaways from the sources.
Speaker AAnd then it gives you some context specific suggested reports you can do.
Speaker ALike for the Claude Constitution, I set up a notebook, and the suggested formats are an organizational policy white paper, a strategic accountability manual (whatever in the world that is), an educational primer, and then a conceptual framework overview.
Speaker ASo that's kind of cool that you can produce pretty much any kind of report you want.
Speaker AAll right, any.
Speaker BSo I'm going to give a comment that I think takes NotebookLM to a higher level of thinking as we begin thinking about what is possible.
Speaker BAnd when you were describing all those things, it made me think about all the tasks that an entry-level person might be doing, because they're important.
Speaker BPutting those spreadsheets together, putting those data tables together, having those as part of a story you're trying to tell or an informed decision you're trying to make, it's important to have those things.
Speaker BBut it is a lot of busy work and figuring out how to put all of those things together.
Speaker BAnd so when we start putting these things at people's fingertips, what that's going to do is highlight and focus on where it is that the human adds value.
Speaker BFiguring out what really should be in that slide deck, how to take what was created and get that story told that you wanted to tell, how to get that data to do what you need that data to do in a way that becomes very, very accessible to people who are trying to get a job done.
Speaker BAnd so it begins changing, I think, what it is we're training up students to be able to do, which is not necessarily to crunch the numbers at the minutiae level, but to understand what those numbers mean, how we can take those numbers and use them to make decisions, to put them into some sort of a presentation that lets me, with confidence, stand in front of people and say, here's what we need to do and why.
Speaker BIt's making, I think, the technical available to many.
Speaker BBut with that comes a different kind of responsibility in preparing them for how to understand that and how to do something with it.
Speaker AYeah, exactly.
Speaker ASo a couple of things there.
Speaker AOne is that we're changing the nature of the skills that are necessary.
Speaker BYep.
Speaker ALike most business schools, we've got a hands on class where they learn Excel, PowerPoint, et cetera, et cetera.
Speaker AYou know, I mean, I think Excel still has a lot of value because you can do so much with it.
Speaker AAnd it also embeds some analytical thinking when it's not so much about the tool.
Speaker ABut okay, you can make a pretty slide deck.
Speaker BI mean, well, but can you stand in front of executives and deliver your slide deck?
Speaker AAnd can you structure a slide deck in a way that communicates what you want to communicate?
Speaker BAnd do you know it well enough to be able to talk to it?
Speaker BBecause I've seen a lot of really nice slide decks where, when it comes to presentation time, nobody knows what's in them.
Speaker AWhen I used to teach analytics, half my class was about data visualization.
Speaker AWe spent a lot of time on what is your story?
Speaker AYou have to figure out the story you're trying to tell and that drives everything else.
Speaker AYou know, what's better?
Speaker AThis chart or this chart?
Speaker AI don't know what's the story.
Speaker AAnd so I think we are really moving students up that stack, as those of us who are in the IT world like to say, where it's at a higher level of thinking.
Speaker ABut it's going to be a little bit of a transition to get there.
Speaker AYou mentioned slide decks.
Speaker AThat's the other big change in NotebookLM.
Speaker ASo it used to be you could not export the slide decks, but now you can.
Speaker AIt's still a little bit cludgy to be able to edit them.
Speaker AYou have to pull it into something like Canva that can extract the text.
Speaker ARight now each slide is just kind of an image in the PowerPoint file.
Speaker AHave you ever used that?
Speaker BI haven't.
Speaker AIt's kind of cool.
Speaker AIt gives you some ideas how you might want to structure things.
Speaker ASo the mind map is really good for that.
Speaker ABut I have two big messages and two reasons I wanted to cover NotebookLM.
Speaker AFirst of all, if you're not using NotebookLM, you should, because it really is pretty unique out there in the marketplace.
Speaker AIt does things that I can see bits and pieces of in other tools, but nothing pulls it all together.
Speaker AAnd if you've got a Google account, you've got access.
Speaker AIt's really remarkable what it does.
Speaker AThe other big point is I see them in this phase where they demonstrate something works and then they give the user more control over it.
Speaker AAnd so that's a reason to keep coming back to NotebookLM.
Speaker AThere's a lot I don't like.
Speaker AThe organizational level of the notebooks is pretty crappy.
Speaker AIronically for a notebook, they don't have a good way to organize all the notebooks.
Speaker AAnd so there are some things I don't like about it, but it is a fantastically beneficial tool for anybody in knowledge work, but especially for those of us in higher ed, even if you're more on the administrative side of things.
Speaker ATake your policy documents, your procedure documents, put them into NotebookLM.
Speaker ARob, how much time have you wasted trying to figure out how to do a stupid grade change or something you do once every other year?
Speaker BWell, I don't waste a lot of time on it because I've created an agent in Copilot to do exactly that.
Speaker BCopilot does similar things to what NotebookLM does.
Speaker BAnd they've actually added basically a notebook feature in Copilot as well, which has been pretty handy.
Speaker AYeah, I've heard that.
Speaker AI've not tried it yet, but I need to even something that simple.
Speaker AWhen I teach, I'm going to put some of the course documents.
Speaker AI'm on sabbatical, those of you who didn't already know.
Speaker AWhen I start teaching again, I'm going to put syllabus and policy guidelines and that sort of thing in a notebook and say, before you ask me, go in and check this out.
Speaker AAnd I think that's going to be helpful for students and for me.
Speaker AAll right, Rob, any last thoughts?
Speaker BNo.
Speaker BI think we touched on a lot today.
Speaker AWe did.
Speaker AWe did.
Speaker AWe should do a comparison between Copilot's notebook feature and NotebookLM.
Speaker BThat'd be fun.
Speaker AThat could be interesting.
Speaker AAll right, well, Rob, if you don't have anything else, we'll say goodbye and join us again next time on AI Goes to College.
Speaker AThank you.