June 11, 2025

AI's Disruption: What It Means for Knowledge Workers and Higher Ed


The recent discussion between Craig Van Slyke and Robert E. Crossler centered around the alarming prediction from Anthropic's CEO regarding the potential displacement of up to 50% of entry-level knowledge work positions within the next five years due to advancements in generative AI. This assertion prompts a critical examination of the implications for higher education, particularly concerning the preparedness of graduates entering an increasingly automated workforce. Both hosts express skepticism about the immediacy and extent of such disruptions, emphasizing the necessity for educational institutions to adapt curricula to cultivate higher skill levels among students. They highlight the importance of fostering AI discernment and ethical considerations in the use of AI technologies, advocating for a proactive approach that prepares students for evolving job market demands. As the conversation unfolds, they underscore the urgent need for educators to engage in thoughtful dialogue and innovative practices to effectively equip students for the future.

Takeaways:

  • In recent discussions, a warning was issued stating that potentially half of knowledge work jobs may be eliminated due to AI advancements within the next five years, prompting significant concern among educators and industry professionals.
  • The conversation emphasized the importance of preparing students for a future job market that increasingly favors higher-level skills, particularly in light of the potential displacement of entry-level positions by generative AI technologies.
  • It was noted that while AI may lead to job displacement, it is also anticipated to create new job opportunities, suggesting a complex landscape where education must adapt to these shifting dynamics.
  • The hosts discussed the necessity for higher education institutions to begin incorporating AI discernment into their curricula, ensuring that students understand the ethical implications and operational realities of AI usage in the workplace.
  • The episode highlighted the unprecedented grassroots adoption of AI technologies, as individual workers leverage AI tools independently, often circumventing organizational policies or restrictions.
  • The hosts concluded with a call to action for educators to embrace AI in their teaching, encouraging experimentation and risk-taking as essential components of evolving educational practices.

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:42 - Introduction to the Podcast

01:00 - The Impact of AI on Employment and Education

11:16 - AI Discernment in Higher Education

19:43 - Redesigning Education for AI Integration

30:14 - Creating an AI Learning Activity Repository

37:21 - Introduction to NotebookLM and Mind Mapping

38:51 - Exploring Mind Maps in AI Education

Transcript
Speaker A

Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out just what in the world is going on with generative AI.

Speaker A

I am joined once again by my friend, colleague, and co-host, Dr. Robert E. Crossler of Washington State University.

Speaker A

Rob, how are you?

Speaker B

I'm doing great.

Speaker B

How are you doing today, Craig?

Speaker A

Doing okay.

Speaker A

All right, well, let's get right to it.

Speaker A

So the big news over the last couple of weeks is from Anthropic.

Speaker A

So they released their new Claude 4.

Speaker A

The cynical among us would say it wasn't getting enough attention.

Speaker A

And so their CEO, Dario Amodei, went on the kind of talk show news circuit with this warning in which he claimed that up to 50% of knowledge work, white collar, entry level jobs are going to be gone in the next five years.

Speaker A

So, Rob, what's your hot take on that?

Speaker B

Hot take.

Speaker B

I'm skeptical as well.

Speaker B

Anytime I hear the person selling the product telling me how it's going to disrupt everything and change everything, the first thing to do is to step back and say, do I believe it?

Speaker B

And I'm skeptical with a lot of the claims about how fast all this is going to happen.

Speaker B

And I think a lot of that goes back to what I've seen happen with information systems deployments over the years, is every time a great new thing comes out and it's going to revolutionize and change everything, it always takes longer than people say.

Speaker B

It doesn't mean it never happens.

Speaker B

It doesn't mean it never comes to fruition.

Speaker B

It just takes longer because our systems are so complex.

Speaker B

It takes so much to deploy them and to do them right that I think this is a good thing to pay attention to and to plan for.

Speaker B

But to live in a world of Chicken Little, the sky is falling.

Speaker B

I think it probably is a bit of an over promise by the person who wants you to buy his product.

Speaker A

Yeah, it was an interesting comment if you read any of the news coverage on it, because he was also advocating for a token tax where the big AI companies would get taxed to help do something, I don't know, UBI or something to help deal with this problem.

Speaker B

What's UBI, Craig?

Speaker A

Universal basic income.

Speaker A

I wasn't quite sure what was going on with that.

Speaker A

And then it's up to 50%, so it could be zero, so it's up to 50%.

Speaker A

But it turns out there's a fair amount of consensus on it having significant impacts.

Speaker A

And so the research that I've done, kind of the consensus number is around 93 million knowledge workers displaced, but maybe up to 150, 170 million new jobs created.

Speaker A

So it's the typical kind of a thing.

Speaker A

We're going to lose some, we're going to gain some.

Speaker A

Optimistically, it'll be a net gain.

Speaker A

But what I think we need to focus in on is the fact that it's going to be lower skilled and entry level knowledge workers that are the most disproportionately negatively affected.

Speaker A

So I think those of us in higher ed have really got to start paying attention to this because that seems plausible to me.

Speaker A

The technology can do what kind of the lower level workers can do.

Speaker A

So I don't know.

Speaker A

What do you think?

Speaker B

Yeah, I agree.

Speaker B

And one of the thoughts that I have pondered and struggled with, and I don't know that I have a solution yet, but if we look at entry level jobs, kind of the jobs that right now students who would come out of our programs would be stepping into, if those jobs are potentially disappearing.

Speaker B

Right, making it harder for our students leaving our programs to be employed, what can we do within our programs to the point that our students at the end of their time at our institutions are capable of doing that next level job beyond what is currently an entry level job?

Speaker B

How do we in education prepare them for that marketplace where the kickoff line, if you will, where that's positioned on the field is being changed?

Speaker B

So I think that forces us as educators to step back and say, how can we prepare our students better, to push them further, to be prepared for a marketplace that's expecting higher level skills, different skills, as our students leave their time with us?

Speaker A

Yeah, and it's a tough problem.

Speaker A

Should we scale back, have fewer students?

Speaker A

I don't think that's going to happen.

Speaker A

We may not do it through choice, but we might through demographics.

Speaker A

We might argue that we should lean into the humanity of things, including the humanities.

Speaker A

But although I'm a huge believer in a classical liberal arts education, that doesn't really resonate beyond lip service with a lot of employers.

Speaker A

They'll say we want good communication skills, we want good critical thinking skills.

Speaker A

And then they hire based on how well you know Python or, you know, whatever it might be in a different field.

Speaker A

So I'm a little bit skeptical of that, although I think that should be part of it.

Speaker A

Should we look at some kind of thing like the old school co op programs where you go to school for a term and work for a term, almost like an apprenticeship.

Speaker A

But I think now is the time to start thinking about a lot of these things because it's happening now and it's going to accelerate.

Speaker A

Where I think AI may be a little bit different than some of the other types of technologies that you were talking about earlier, Rob, is that individual workers and individual departments can do a lot with AI without having a lot of higher level organizational coordination.

Speaker A

So it reminds me a little bit of, I'm going to go way back here, the Apple II and VisiCalc.

Speaker A

So it was the first kind of PC-based, microcomputer-based spreadsheet back then.

Speaker A

And a lot of people in accounting and finance went out and spent significant amounts of money to buy an Apple II and VisiCalc because you didn't have to have somebody above you telling you what to do.

Speaker A

You could just get this calculator, this visible calculator, spreadsheet and start doing stuff with it.

Speaker A

As opposed to if you're going to put in an enterprise system or you're going to put in some big widespread system, it's got to be centrally coordinated.

Speaker A

And I think AI is different.

Speaker A

And we're seeing a lot of what we might call grassroots adoption where maybe AI is even banned in the organization, but they're hot spotting their personal laptop to their phone because they can get their work done half the time.

Speaker A

So I think it feels a little bit different this time.

Speaker A

I'm going to go out on a limb here and say that I think we need to start teaching AI supervision, or some might call it AI orchestration.

Speaker A

How do you hire AI agents?

Speaker A

How do you oversee those agents?

Speaker A

How do you perform the necessary human checks, know when those checks are necessary, all those kinds of things? How do you do that?

Speaker A

And I think that's one of the things now we're in business, but I think business schools absolutely need to be pursuing that.

Speaker A

But that's a lot.

Speaker A

That's a pretty tall order.

Speaker B

Yeah.

Speaker B

And when I think of history repeating itself, because that's where you bring up VisiCalc and the Apple II, I think back to the days of Microsoft Excel and what happened with Enron and Sarbanes-Oxley, some of those sorts of things where we got some real regulation that created a lot of work for accountants, actually, because people throughout organizations were using these new tools and creating their own little ecosystems of data that they knew about, that they took care of, that weren't part of the corporate ecosystem.

Speaker B

And it created problems because numbers were not accurate.

Speaker B

People were using them to fudge things, they were doing things that were unethical.

Speaker B

And I see a lot of potential in the world of AI for unethical behavior, doing things outside of the purview of the organization.

Speaker B

And that's probably going to happen, and we're going to see it.

Speaker B

We are seeing it happen right now.

Speaker B

But there's going to come a point where something really bad happens.

Speaker B

I hate to, you know, be the bearer of bad news, but something bad is going to happen that I think is going to wake up legislative bodies that oversee things and begin requiring and mandating certain types of reporting, certain types of implementation that are going to force some framework, some level of controls around how this is deployed.

Speaker B

And again, I think it goes back to we need to train our students to learn how to, you know, not only, as you said, to have some sort of an oversight over what things are doing, but how do they begin thinking about these new technologies that can do something that just because you can, should you.

Speaker B

What are the ethics in those decisions?

Speaker B

What are the best practices in how to lean into these new technologies in a way that is going to help the business do good things, aligned with the best interest of their stakeholders, as opposed to what might be best for me to be the most productive in my job but takes shortcuts around the proper controls that my organization would want to have in place to ensure that what's being created, what's being done, is the best thing for that organization?

Speaker A

So let's tie that into higher ed after two quick sidebar comments.

Speaker A

One is that it still amazes me that a problem caused by accountants led to the need for more accountants.

Speaker A

I'll have to give our accounting colleagues a little tip of the hat on pulling that one off.

Speaker A

But the other thing is people are still using spreadsheets to do those same things today, and not necessarily just unethically.

Speaker A

Matter of fact, the unethical piece is a tiny but dramatic portion of it.

Speaker A

But what does that mean for higher ed?

Speaker A

What it means for higher ed is regardless of the kind of organization that your graduates are going to go into, they need to gain a skill of what maybe we could call AI discernment.

Speaker A

Should you be using AI?

Speaker A

What kind of oversight should it have?

Speaker A

What are the ramifications of AI being wrong or being more or less efficient or more or less effective?

Speaker A

All those kinds of things.

Speaker A

And I think that's one of the things that we need to be trying to figure out how to teach and frankly, trying to figure out AI discernment ourselves.

Speaker A

I think very few faculty, me included, really have that figured out.

Speaker A

But I think that's one of the things that we're going to need to give a lot of thought to and pretty quickly because it's coming and it's coming fast.

Speaker A

So I have another thought, and this is a little bit weird, so stay with me here.

Speaker A

One of my doctoral students is defending his dissertation proposal tomorrow and he's looking at AI anthropomorphism, which is just assigning human like characteristics to AI and it's something we do.

Speaker A

We've got my stupid car hates me, that kind of thing.

Speaker A

We do that just as humans.

Speaker A

But I wonder if we shouldn't lean into this a little bit and start thinking about AI as being almost like an employee where we have oversight until we get to where we can trust the employee and we train the employee and we put guardrails in place.

Speaker A

I mean, you talked about regulation and kind of implied governance, but all those are guardrails, you know, kind of, to use the AI terminology.

Speaker A

So I'm wondering if we.

Speaker A

I know this is very controversial, but I wonder if we shouldn't kind of really lean into that anthropomorphism.

Speaker A

So am I way off base there or what do you think?

Speaker B

Well, let me ask one clarifying question before I respond, which is, what do you mean by lean into?

Speaker B

What does that look like?

Speaker A

I don't know.

Speaker A

I'm making this stuff up as we go along.

Speaker A

No, no, seriously.

Speaker A

What I think that means is that, especially for the types of AI that you and I talk about, and even to an extent the more agentic AI where AI kind of goes off and does its own thing, I think our mental models ought to be of those being human-like, because then we'll do the things that we do with human employees.

Speaker A

I mean, I hear people say, well, AI is no good because it makes logic errors and it makes stuff up and it gets facts wrong.

Speaker A

I got news for folks.

Speaker A

Humans do that too.

Speaker A

All the time.

Speaker A

All the time.

Speaker A

And so what do we do to safeguard our colleges, our universities, our businesses, whatever it is, against those kinds of harms with humans?

Speaker A

Well, maybe we need to think about doing the same sorts of things for AI.

Speaker A

But this is really fuzzy in my brain right now, as you can tell.

Speaker B

Yeah.

Speaker B

And I think this is one of the challenges of AI, you know, replacing workers or augmenting workers or whatever that looks like.

Speaker B

I think we have a human tendency to be more forgiving of another human doing exactly those things than when it's this machine that's been created doing those exact same things. Our tolerance just seems to be, oh yeah, I would have made the same mistake, versus, oh, it was a computer that made the mistake.

Speaker B

Obviously I'd rather be dealing with the person.

Speaker B

And so I do think that's going to be one of the challenges of things going forward.

Speaker B

And if we think of the machine more like a human, perhaps that starts to change our perspective and how we respond and we become a bit more accepting of it.

Speaker B

I could see that happening.

Speaker B

Yeah.

Speaker B

The other challenge, I think, is related.

Speaker B

One of my kids recently has started DoorDashing, so I've gotten to learn more about the algorithms, and I'm pretty sure they use AI in that.

Speaker B

And DoorDash, as best I can tell, has designed their algorithms in a way to be very capitalistic, profit driven.

Speaker A

And I'm shocked.

Speaker A

Right.

Speaker B

Shocking.

Speaker B

But what I see in that is a lot of frustration in my kid who's like, why am I working for so long and making so little money?

Speaker B

And the algorithm is deprioritizing me for this, that and the other.

Speaker B

But it's that computer doing something, trying to make the most money.

Speaker B

But when we think about even the larger deployment of AI technologies and algorithms to many more places, it's going to affect many more people in various different roles.

Speaker B

How does the human respond to feeling devalued, set aside by the machine and so forth?

Speaker B

And what does that process look like in a way to where human dignity and human respect in the workplace and so on remains?

Speaker B

And it doesn't just become a, you know, how can we make the most money for the least amount of cost?

Speaker A

Yeah.

Speaker A

Although that's really not new, is it?

Speaker A

What I think I'm hearing you, and really both of us say, is that we need to start thinking about this and having conversations about this now because there's a lot of ground to cover and we've got to catch up.

Speaker A

That's what really makes this different in a lot of ways, is that administrations, faculty, student support staff, this is new for all of us.

Speaker A

And I mean, you talk to a lot of people about AI.

Speaker A

I would say that, oh, maybe what, 20% of the faculty have really spent much time at all with AI.

Speaker A

And I'd say a smaller percentage than that spend much time thinking about it.

Speaker B

Yeah.

Speaker B

And I think so.

Speaker B

I think about, as an information systems scholar and teacher, one of the things that we share with our students, or at least I share with my students, is what's great about being an information systems student: I'm going to teach you some things while you're here with us for a couple of years in our program, and two years after you leave, you're going to have to learn all new things, because that's what happens in the world of information systems.

Speaker B

Things change.

Speaker B

You've got to retool, you've got to relearn.

Speaker B

I think that's becoming more and more true for every discipline. Instead of necessarily learning a particular tool and how it's going to work today, thinking that we can rely on that for 10, 20 years down the road, what's more important is that we figure out how to learn how to use a tool that we didn't know how to use before, and that we have a process that's critical for us to be able to do it, whether it's in writing, whether it's in information systems, whether it's in programming, as opposed to learning just how to apply a particular tool and then rinse and repeat for a while.

Speaker B

And I think where this is really interesting as we go back to even the impact these changes are going to have on society is how do we prepare a generation of students to be involved in the conversation of what does it look like to regulate this, to put frameworks on it?

Speaker B

How do we, you know, prepare them, whether they're business students or liberal arts students or engineering students, so that they can be part of the conversation, leaving higher education to say, yeah, this is what we need to do as a society so that we can reap the benefits, we can see what benefits we're going to get, but how can we do it in such a way that we're comfortable with what the deployment of that looks like from a societal perspective?

Speaker A

Yeah.

Speaker A

Yeah, I couldn't agree more.

Speaker A

All right, any last thoughts on this one?

Speaker A

And then we're going to change topics.

Speaker A

Nope.

Speaker B

No.

Speaker A

Okay.

Speaker A

Summer plans.

Speaker A

So it's summertime.

Speaker A

What sort of summer plans do you have, Rob, when it comes to AI?

Speaker B

Yeah.

Speaker B

So I actually decided at the beginning of the summer to completely redesign how I'm teaching my undergraduate class this semester and to 100% lean into how do I do this in an AI sort of way?

Speaker B

What does that look like?

Speaker B

Well, I have created an agent, a bot, if you will, that is my textbook, seeded with some websites with some really good information that I think would be exactly what you would find in a textbook.

Speaker B

And I'm going to use that with some guidance for my students to get them used to working with agents.

Speaker B

To get information, to acquire information, to do some critical thinking around that.

Speaker B

How do they even validate that they believe what this agent is telling them?

Speaker B

So that way there's some checks in that.

Speaker B

And then for a class project I'm working on designing, I'm planning on having my students create a tool themselves around the topic of cybersecurity, which is what I'm teaching, that requires them to use generative AI to create some sort of a tool, whether it's an agent or otherwise.

Speaker B

Really challenging them to begin doing this and begin in hopefully a safe environment.

Speaker B

Perfection of the tools is not what's going to be looked at, but more how creative were you in the process of trying to use these new tools that came out?

Speaker B

So this is kind of me taking the all in approach to what does an all in approach look like for the class that I'm teaching?

Speaker A

So, a couple of questions that sounds very interesting.

Speaker A

First is, how are you going to get students to make sure that they're having the proper oversight?

Speaker A

Like, I was using ChatGPT-4o for a little task today, and it gave me a paper that I'd never heard of before, but it was precisely what I needed, which, for those of you who have used AI for doing much research, it's like, yep, that is pretty suspect.

Speaker A

And sure enough, it was a complete hallucination or confabulation.

Speaker A

Confabulation is so much more of a fun word than hallucination.

Speaker A

But I mean, I kind of knew if that paper was out there, I would have seen it.

Speaker A

But students aren't going to know that.

Speaker A

So what are you going to do to kind of teach them how to have the right kind of oversight?

Speaker B

So for the course itself, I'm using copilot.

Speaker B

And you can create agents within Copilot, and it plugs in nicely with Teams.

Speaker B

So it allows me to keep the agent I've designed within my institution.

Speaker B

And as part of this agent, I actually gave it three or four web links, which is where it goes and gets that information from.

Speaker B

Unfortunately, you can break out of the limits of those four web links where it should be getting that information from, and get something provided that's not in that space.

Speaker B

So what I'm asking students to do is to share with me whenever they turn something in.

Speaker B

What is your prompt?

Speaker B

What are the results?

Speaker B

What is your reflection on that prompt?

Speaker B

How did you validate that you could believe it? Because it does give you links to where it found that information.

Speaker B

So I'm trying to create an environment where students have to go out and say, okay, this is what it told me.

Speaker B

It's good, but let me go review the source material and make sure that it's believable still.

Speaker B

And so in my eyes, it's that process that I care more about, you know, assigning points to, if you will, than it is what is the actual outcome.

Speaker B

And then hopefully in class, we can create a critical thinking environment through class participation and those sorts of things, where I question them and ask them probing questions, digging into things to see where the extent of our learning hits a wall or where we need clarification, and so on and so forth.

Speaker B

So it becomes more about process than the production of some sort of a document.
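The source-validation step described here, checking that what an agent cites actually comes from the seed links it was grounded on, could be sketched as a simple script. This is purely illustrative, not part of the Copilot setup discussed above; the domains and function names are invented stand-ins:

```python
# Hypothetical sketch: flag any cited URL whose domain is not among the
# seed sources an agent was grounded on. Domains below are stand-ins for
# the "three or four web links" mentioned in the episode.
from urllib.parse import urlparse

ALLOWED_SOURCES = {"owasp.org", "nist.gov", "cisa.gov"}

def out_of_scope_citations(cited_urls):
    """Return the cited URLs whose host is not one of the seed sources."""
    flagged = []
    for url in cited_urls:
        host = urlparse(url).netloc.lower()
        # Treat subdomains (e.g. csrc.nist.gov) as in scope.
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_SOURCES):
            flagged.append(url)
    return flagged

citations = [
    "https://csrc.nist.gov/glossary/term/phishing",
    "https://example.com/random-blog-post",
]
print(out_of_scope_citations(citations))  # only the example.com link is flagged
```

A check like this only verifies where an answer claims to come from; students would still need to read the source material itself, which is the point of the reflection exercise.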

Speaker A

Great, great.

Speaker A

So one thing, I don't think this is going to apply to your exact example, but one thing that I want listeners to be aware of, if they try to take something like an open educational resource textbook and upload it into AI, as the context window gets closer and closer and closer to being full, it starts making up more stuff, so it becomes less reliable on where it goes to source from within the document you provided.

Speaker A

So just be aware of that if you're trying some of these experiments.
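One rough way to act on that caveat is to estimate a document's token footprint before uploading it. The numbers here are assumptions, a ~4-characters-per-token heuristic and a 128,000-token window; real tokenization and limits vary by model:

```python
# Rough sketch, not an exact tokenizer: warn when a document would fill
# much of a model's context window. The 128k window and the
# 4-characters-per-token heuristic are assumptions, not model specs.

def estimated_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English prose."""
    return len(text) // 4

def context_warning(text: str, context_limit: int = 128_000,
                    safe_fraction: float = 0.5) -> str:
    """Warn when a document uses more than safe_fraction of the window."""
    used = estimated_tokens(text)
    if used > context_limit * safe_fraction:
        return (f"~{used} tokens: this fills much of the context window; "
                "expect less reliable sourcing from the document")
    return f"~{used} tokens: comfortably within the context window"

chapter = "some chapter text " * 1000
print(context_warning(chapter))
```

Splitting a long textbook into chapter-sized uploads, rather than one large file, is one way to stay under such a threshold.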

Speaker A

So the second question I had is what kinds of agents do you expect your students to produce?

Speaker A

Give us an example.

Speaker B

One of the things that, as I've read some headlines lately, a lot of elderly are being scammed by people out in the world and they're sucking all sorts of their life savings out of their bank accounts because scammers are getting really, really good.

Speaker B

I'm pretty sure they're using AI tools to do it.

Speaker B

Can we create agents that can help detect when something might be a scam and when something's not?

Speaker B

You know, that's one area. Or, you know, tools to even check emails, are they spam or are they not?

Speaker B

Because those are getting better and better at how things are done.

Speaker B

Or maybe it's as, you know, if they want to get into the world of looking at data and all the data that might come from a tool that's monitoring network traffic, which network traffic creates a ton of data, and a human, if they looked at everything, can't discern what's good, what's not good.

Speaker B

And there's tools to help you to do that, but are there agentic tools that they could create that would allow them to play in that space?

Speaker B

So I want to, I want to give them freedom to do anything related to security when it comes to that.

Speaker B

But that's kind of, you know, just a few ideas off the top of my head that I'm like, it'd be kind of cool if a student could.

Speaker A

And these are kind of like custom GPTs rather than multi stage agents that control your computer and that kind of thing.

Speaker B

Right, that's what I'm thinking.

Speaker B

But if a student went multi stage and did something crazy, awesome, I would love to see that.

Speaker B

So I really want to create a safe space for students to explore and to play where failure is not going to be punished.

Speaker B

Because that's, to me, that's how you're going to learn.

Speaker B

Something is reflecting on something that you thought was a good idea that then didn't quite turn out the way you wanted it.

Speaker A

I want you to say that again because it's really important.

Speaker B

Failure is not a bad thing.

Speaker B

I want to create an environment where students can experiment, can come up with a really good idea, and even if the execution of it isn't perfect, as long as they learn from their failure, I want that to be rewarded.

Speaker A

I think that's a really critical message for faculty directed towards students and for administrators to think about towards faculty that are experimenting with these kinds of things that you're doing.

Speaker A

Because frankly, Rob, this could work out really, really well, or it could be a complete bust.

Speaker A

But if you, I mean, I know your dean, so I know she's not like this, but if your dean was going to just kind of rake you over the coals because you had bad evaluations due to a failure like that, then you wouldn't do it.

Speaker A

I think that's really important for the faculty to keep in mind towards students and for administrators to keep in mind towards faculty.

Speaker A

I'm going to be interested to see how you think about this as you go through the fall semester.

Speaker A

So I'm going in a little bit different direction because I'm on sabbatical.

Speaker B

Lucky.

Speaker A

Yeah, yeah, yeah.

Speaker A

I've been thinking about this at kind of not a higher level, but a more macro level.

Speaker A

And this is what I would like to encourage listeners to be thinking about, especially faculty.

Speaker A

What do you need to change in your courses that will disincentivize inappropriate AI use?

Speaker A

And then how can you leverage AI to help students learn more effectively?

Speaker A

And I don't think you have to go kind of all in like you did, Rob.

Speaker A

I mean, I don't think people need to do that, but come up with a couple of learning activities that you can kind of recast to take one of those two paths.

Speaker A

Either make it highly contextualized or make it more process focused.

Speaker A

Where it's harder to use AI inappropriately, or do what you're doing and figure out how to get AI to help students learn more effectively.

Speaker A

So those are two big things that I'd like to see our listeners kind of word of the day lean into as they go through the rest of the summer.

Speaker A

Any last thoughts on summer?

Speaker B

Yeah, I would actually like to challenge our listeners, Craig.

Speaker B

I'd like to challenge them to at least do one thing that brings purposeful AI use into their classroom.

Speaker B

And if you've never done it before, it might be scary.

Speaker B

But again, I go back to the give yourself an opportunity to take a risk, to try and be willing to say you got it wrong and to learn from getting it wrong in front of your students.

Speaker B

And I guarantee you, your students will respect you if you can admit, yeah, I tried this because I thought it was a good idea.

Speaker B

It didn't work, but here's what I learned.

Speaker B

But try to do one thing, if nothing else, because I think what you'll find is it's not as hard as it seems, and it's actually kind of exciting when you see how powerful it is.

Speaker A

Well, once you get started on one, it's easier to do the second one and the third one.

Speaker A

And I'll take that one step further.

Speaker A

I think we're happy to help you with that.

Speaker A

If you get stuck on something or you need a little bit of help, just email me at craig@aigostocollege.com and I'll share it with Rob.

Speaker A

And yeah, we're happy to weigh in on that.

Speaker A

We enjoy thinking about this kind of stuff, which is why we do this podcast.

Speaker A

All right, that brings us to a related topic, our survey.

Speaker A

So if you are interested in participating in an AI learning activity repository, we need your help.

Speaker A

And so let me back up.

Speaker A

We talked about this a few episodes ago.

Speaker A

Rob and I have been of the opinion for a while that we've got pockets of people working on a lot of different things, but not enough sharing going on.

Speaker A

So I was able to secure a small grant through our Dean Chris Martin's Just Business grant program that's going to fund the creation of an AI learning activity repository.

Speaker A

So basically what we're going to do is try to set up a repository: a big, searchable database with hashtags, that kind of thing.

Speaker A

That will be a place where people can share and find assignments.

Speaker A

Learning activities.

Speaker A

I'm saying assignments, but I really mean learning activities because these things are going to come in different forms.

Speaker A

So we might be talking about information systems, but it really might not be all that different for biology.

Speaker A

The same kind of principles apply.

Speaker A

So even if it's a wildly different discipline, I think consulting such a repository would be a really good idea.

Speaker A

And so what we would like for you to do is fill out this survey.

Speaker A

Did you fill it out, Rob?

Speaker B

I did not.

Speaker B

I was planning on participating anyway, because I figured I was helping administer it, and I didn't see...

Speaker A

I think it's going to take, I don't know, maybe two or three minutes to fill out.

Speaker A

I mean, it's got maybe what, eight or 10 questions.

Speaker A

It's not a big deal.

Speaker A

And I'm going to check the URL here.

Speaker A

Want to make sure I've got it right.

Speaker A

And so if you go to aigostocollege.com/survey2025, that's survey2025.

Speaker A

It's just a little Google Forms survey.

Speaker A

You can leave your email address if you want to, but you do not have to.

Speaker A

It can be completely anonymous.

Speaker A

And it just asks you a little bit about yourself, what discipline you're in, that sort of thing, and then how you might like to participate.

Speaker A

So, Rob, do you want to go through some of the early results?

Speaker B

Sure, I'll just talk about them.

Speaker A

I've just got to find them on the right screen.

Speaker A

I've got them somewhere.

Speaker B

Too many windows.

Speaker A

Too many windows.

Speaker A

Oh, here it is.

Speaker A

It's in the tab right next to the one that I'm looking at you in.

Speaker A

Okay, so let's just run through these really quickly.

Speaker A

So right now, 76% of the respondents are faculty.

Speaker A

And then we have some instructional designers, grad students, some administrators from a wide variety of disciplines.

Speaker A

Cultural anthropology, English, English as a second language, history, organizational behavior, information systems, translation theory and practice.

Speaker A

And so there are a bunch of them.

Speaker A

So it's a nice breadth of responses so far.

Speaker A

Mostly from research universities, but some community colleges, regional comprehensive schools, and liberal arts colleges.

Speaker B

So, Craig, what I love about the people who are taking this and are listening to it is we almost have an equal distribution of people's experience with generative AI, from just starting to explore or having tried a few things, all the way up through training and supporting others.

Speaker B

So there's five categories there.

Speaker B

But this is a great space, I think, where there's an opportunity for people to help others and to develop a community where we meet people where they're at.

Speaker A

Yeah, that is really interesting.

Speaker A

It's all pretty even. It ranges from 14% to 24%.

Speaker A

So it's really pretty tight across the five categories.

Speaker A

So if you're not really doing a lot with AI yet, or if you're teaching others how to use AI, either way, we think you can benefit from the repository and we would really like to see you participate.

Speaker A

Top concerns, not surprisingly: creating assignments that promote ethical AI use, leveraging AI to promote student learning, and effects on critical thinking.

Speaker A

Those are the big ones so far.

Speaker A

And 95.2% of the respondents think that such a repository would be valuable.

Speaker A

I don't know about the 4.8% that said maybe.

Speaker A

Thankfully nobody said no yet.

Speaker A

So anyway, we're planning to have clearly defined learning objectives, hashtags for discipline, level, and type of AI use, maybe some teaching notes.

Speaker A

So we're kind of open to exactly what's going to be in the repository.

Speaker A

So people are ready to give us full assignments, small in-class activity ideas, reflections, syllabus language, all kinds of things.

Speaker A

So if you are willing to participate, interested in participating, or just want to stay abreast of what we're doing, go to aigostocollege.com/survey2025. Rob, what are you thinking?

Speaker B

I'm excited about this, Craig.

Speaker B

I think if we deliver on what we're promising and we do a good job with it, I see this as the sort of tool that will make it a lot easier for us, in the hallways, to help our peers and our colleagues do more.

Speaker B

And hopefully this is one step of many in helping to define and develop a community of people who are trying to prepare our students for the new tomorrow.

Speaker A

Great.

Speaker A

And I forgot to mention this is going to be 100% absolutely free, open source.

Speaker A

Everything there is going to be under some sort of Creative Commons attribution license.

Speaker A

So we're not trying to make any money off of this.

Speaker A

This is not going to be behind a paywall.

Speaker A

It's just something that the two of us and Louisiana Tech University are going to make available to the public, and we'll have some sort of mechanism in place to protect it from students getting into it, that sort of thing.

Speaker A

So, I mean, I don't know that we can do that 100%, but I think we'll come close.

Speaker A

All right, so I hope you will fill out the survey again.

Speaker A

That's aigostocollege.com survey 2025.

Speaker A

All right, one more topic.

Speaker A

Are you ready?

Speaker A

So NotebookLM, which is one of the more useful kind of specialized tools out there in the AI landscape, have you played with their mind maps?

Speaker B

I have not played with it like you have, Craig.

Speaker B

I did a little bit, but I've not been a person who thinks in mind maps, so it didn't get me as excited as it got you.

Speaker A

Yeah, I was skeptical of this when they first rolled it out.

Speaker A

So, NotebookLM is a form of retrieval-augmented generation (RAG) AI, which basically means that you upload knowledge resources: documents, links to Google Docs or YouTube videos.

Speaker A

There's a wide array of things that it can handle.

Speaker A

Then you can chat with NotebookLM about those documents.

Speaker A

So it will get its understanding of grammar and general knowledge from the underlying model's training data, but it will augment that with retrieval from the documents you provided.

Speaker A

So basically it'll answer questions based on your documents and it'll cite the exact spot in the document where it's pulling information from.
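[Editor's note] For the technically curious, here is a minimal sketch of what "retrieval-augmented" means in practice. Everything below is illustrative: NotebookLM's actual pipeline uses learned embeddings and a large language model rather than this word-overlap toy, but the shape is the same. Chunk the user's documents, find the chunks most relevant to the question, and return them along with their locations, which become the citations.

```python
import re

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def chunk(doc_name, text, size=30):
    """Split a document into word chunks, remembering where each came from."""
    words = text.split()
    return [
        {"source": doc_name, "offset": i, "text": " ".join(words[i:i + size])}
        for i in range(0, len(words), size)
    ]

def retrieve(question, chunks, k=1):
    """Return the k chunks sharing the most words with the question."""
    q = tokens(question)
    return sorted(chunks, key=lambda c: len(q & tokens(c["text"])), reverse=True)[:k]

# Two tiny stand-in "uploaded documents."
docs = {
    "bias_notes": "Statistical bias arises when a model's training data "
                  "does not represent the population it will be used on.",
    "recipes": "Combine flour, water, and yeast, then let the dough rise.",
}
all_chunks = [c for name, text in docs.items() for c in chunk(name, text)]

hit = retrieve("What do the sources say about statistical bias?", all_chunks)[0]
print(hit["source"], hit["offset"])  # the "citation" back into the user's documents
```

In a real RAG system the retrieved chunks are then pasted into the model's prompt, which is why the answers stay grounded in your documents and can point back to the exact spot they came from.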

Speaker A

It's really fantastic.

Speaker A

If you haven't checked it out, you should.

Speaker A

But when I heard about the mind maps, I thought, wait a second.

Speaker A

The whole thing about a mind map is for you to put together your cognitive model, your mental model about this topic.

Speaker A

And I love them.

Speaker A

You know, they really helped me kind of organize my thinking about things.

Speaker A

I thought, I don't need AI to do that.

Speaker A

But it's really pretty cool.

Speaker A

So when you get into NotebookLM, it'll have a little button that says mind map, and you click on that and surprisingly, it creates a mind map.

Speaker A

This is where it gets really cool.

Speaker A

So it creates the mind map and it's interactive.

Speaker A

So I'm looking at one that I created on AI bias, and it has five different things.

Speaker A

Sources of bias, types of bias, impact of bias, addressing bias, and AI applications.

Speaker A

Well, I want to look at types of bias so I can click on types of bias, and it gives me a list of different types of bias.

Speaker A

And I'm not going to run through those because there are a bunch of them, but here's where it gets really cool.

Speaker A

So the first one on its list is statistical bias.

Speaker A

I click on that, and it automatically goes into the chat window and says, discuss what the sources say about statistical bias in the larger context of types of bias.

Speaker A

So now I jump straight into a pretty good answer about statistical bias.
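[Editor's note] The interaction Craig describes can be modeled as a simple tree where clicking a node composes a chat prompt. This is purely illustrative; the node names come from Craig's example, and the `click` function is a hypothetical stand-in for NotebookLM's actual UI behavior.

```python
# A mind map as a nested dict: each key is a node, each value its children.
mind_map = {
    "AI Bias": {
        "Sources of Bias": {},
        "Types of Bias": {"Statistical Bias": {}},
        "Impact of Bias": {},
        "Addressing Bias": {},
        "AI Applications": {},
    }
}

def click(node, parent):
    """Clicking a node drops a ready-made question into the chat window."""
    return (f"Discuss what the sources say about {node.lower()} "
            f"in the larger context of {parent.lower()}.")

print(click("Statistical Bias", "Types of Bias"))
```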

Speaker A

So I know this is a little bit tough to follow on audio, but hopefully you'll go out and try this for yourself.

Speaker A

So all you need to do, go into NotebookLM, upload some resources, create the mind map and start playing around.

Speaker A

Rob, you were talking about kind of using AI for your class.

Speaker A

If you could find some Good source documents, whether it's an open textbook, whether it's a bunch of PDF files.

Speaker A

NIST, the National Institute of Standards and Technology, has a bunch of reports for geeks like us.

Speaker A

Whatever it might be for your field, upload those, create the mind map and it gives you a nice organization for an entire course or for a module for a course, whatever it might be.

Speaker A

But if you haven't played around with it, I really encourage you to do so.

Speaker A

I will put a link in the show notes to an article I wrote about it for the AI Goes to College newsletter that has some screenshots and that sort of thing, which may make it a little bit easier to follow.

Speaker A

So, Rob, did I make a complete mess of that or did you understand what I'm talking about?

Speaker B

No, I followed what you talked about and I would encourage people to read your newsletter that you wrote about this because I think it complements it nicely and it lets you see exactly what you were talking about.

Speaker B

And I love the connection of this to what I was talking about, what I'm doing with my class, because I could totally see with the right material that this could be what drives, whether it's a week or two of class lecture or an entire semester, depending on what body of knowledge you're able to develop.

Speaker B

And again, Craig, it took you how long to create this? Maybe a few minutes?

Speaker A

Oh, if that.

Speaker A

I mean, if you have the knowledge resources already there.

Speaker A

So let me back up.

Speaker A

So what I really like doing is doing a deep research report, or maybe several.

Speaker A

Like I did one earlier today where I had deep research reports from Claude, which now has research, and from ChatGPT and Gemini.

Speaker A

Put all of those into NotebookLM and then started creating some resources from that, including using Mind Map.

Speaker A

And it's really, really great to have that sort of workflow to get a good handle on a topic quickly.

Speaker A

So I would encourage listeners to play around.

Speaker A

I'd love to hear what you're doing with it.

Speaker A

Feel free to email me at craig@aigostocollege.com and let me know what you are doing.

Speaker A

All right.

Speaker A

Anything else, Rob?

Speaker B

Nothing else, Craig.

Speaker B

I think this was a good episode and again, I encourage you to take some risks and be willing to fail.

Speaker A

Now I'm wondering what it means when you don't say, I think this is a good episode.

Speaker A

So I don't know.

Speaker B

Every episode's a good episode, Craig.

Speaker B

I mean, it's you and me. What do you want?

Speaker A

And again, please fill out the survey about the repository, even if it's not something you're interested in. We'd still like to hear what you're thinking.

Speaker A

That's aigostocollege.com/survey2025. I think I mentioned that before.

Speaker A

All right, that's it for this time.

Speaker A

Again, all things AI Goes to College are available at aigostocollege.com. Thank you very much.

Speaker A

We will talk to you next time.