AI's Disruption: What It Means for Knowledge Workers and Higher Ed


The recent discussion between Craig Van Slyke and Robert E. Crossler centered around the alarming prediction from Anthropic's CEO regarding the potential displacement of up to 50% of entry-level knowledge work positions within the next five years due to advancements in generative AI. This assertion prompts a critical examination of the implications for higher education, particularly concerning the preparedness of graduates entering an increasingly automated workforce. Both hosts express skepticism about the immediacy and extent of such disruptions, emphasizing the necessity for educational institutions to adapt curricula to cultivate higher skill levels among students. They highlight the importance of fostering AI discernment and ethical considerations in the use of AI technologies, advocating for a proactive approach that prepares students for evolving job market demands. As the conversation unfolds, they underscore the urgent need for educators to engage in thoughtful dialogue and innovative practices to effectively equip students for the future.
Takeaways:
- In recent discussions, a warning was issued stating that potentially half of knowledge work jobs may be eliminated due to AI advancements within the next five years, prompting significant concern among educators and industry professionals.
- The conversation emphasized the importance of preparing students for a future job market that increasingly favors higher-level skills, particularly in light of the potential displacement of entry-level positions by generative AI technologies.
- It was noted that while AI may lead to job displacement, it is also anticipated to create new job opportunities, suggesting a complex landscape where education must adapt to these shifting dynamics.
- The hosts discussed the necessity for higher education institutions to begin incorporating AI discernment into their curricula, ensuring that students understand the ethical implications and operational realities of AI usage in the workplace.
- The episode highlighted the unprecedented grassroots adoption of AI technologies, as individual workers leverage AI tools independently, often circumventing organizational policies or restrictions.
- The hosts concluded with a call to action for educators to embrace AI in their teaching, encouraging experimentation and risk-taking as essential components of evolving educational practices.
Links referenced in this episode:
- Survey: aigostocollege.com/survey2025
- Article on NotebookLM's Mind Map: https://aigoestocollege.substack.com/p/notebooklms-mind-map-a-hidden-gem
Mentioned in this episode:
AI Goes to College Newsletter
00:42 - Introduction to the Podcast
01:00 - The Impact of AI on Employment and Education
11:16 - AI Discernment in Higher Education
19:43 - Redesigning Education for AI Integration
30:14 - Creating an AI Learning Activity Repository
37:21 - Introduction to NotebookLM and Mind Mapping
38:51 - Exploring Mind Maps in AI Education
Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out just what in the world is going on with generative AI.
Speaker A: I am joined once again by my friend, colleague, and co-host, Dr. Robert E. Crossler of Washington State University.
Speaker ARob, how are you?
Speaker BI'm doing great.
Speaker BHow are you doing today, Craig?
Speaker ADoing okay?
Speaker AAll right, well, let's get right to it.
Speaker ASo the big news over the last couple of weeks is from Anthropic.
Speaker ASo they released their new Claude 4.
Speaker AThe cynical among us would say it wasn't getting enough attention.
Speaker A: And so their CEO, Dario Amodei, went on the kind of talk show news circuit with this warning in which he claimed that up to 50% of knowledge work, white collar, entry level jobs are going to be gone in the next five years.
Speaker A: So, Rob, what's your hot take on that?
Speaker BHot take.
Speaker BI'm skeptical as well.
Speaker BAnytime I hear the person selling the product telling me how it's going to disrupt everything and change everything, the first thing to do is to step back and say, do I believe it?
Speaker BAnd I'm skeptical with a lot of the claims about how fast all this is going to happen.
Speaker BAnd I think a lot of that goes back to what I've seen happen with information systems deployments over the years, is every time a great new thing comes out and it's going to revolutionize and change everything, it always takes longer than people say.
Speaker BIt doesn't mean it never happens.
Speaker BIt doesn't mean it never comes to fruition.
Speaker B: It just takes longer because our systems are so complicated.
Speaker BIt takes so much to deploy them and to do them right that I think this is a good thing to pay attention to and to plan for.
Speaker BBut to live in a world of Chicken Little, the sky is falling.
Speaker BI think it probably is a bit of an over promise by the person who wants you to buy his product.
Speaker AYeah, it was an interesting comment if you read any of the news coverage on it, because he was also advocating for a token tax where the big AI companies would get taxed to help do something, I don't know, UBI or something to help deal with this problem.
Speaker B: What's UBI, Craig?
Speaker AUniversal basic income.
Speaker AI wasn't quite sure what was going on with that.
Speaker A: And then it's up to 50%, so it could be zero; it's anything up to 50%.
Speaker ABut it turns out there's a fair amount of consensus on it having significant impacts.
Speaker AAnd so the research that I've done, kind of the consensus number is around 93 million knowledge workers displaced, but maybe up to 150, 170 million new jobs created.
Speaker ASo it's the typical kind of a thing.
Speaker AWe're going to lose some, we're going to gain some.
Speaker AOptimistically, it'll be a net gain.
Speaker ABut what I think we need to focus in on is the fact that it's going to be lower skilled and entry level knowledge workers that are the most disproportionately negatively affected.
Speaker ASo I think those of us in higher ed have really got to start paying attention to this because that seems plausible to me.
Speaker A: The technology can do the kinds of things the lower level workers can do.
Speaker ASo I don't know.
Speaker AWhat do you think?
Speaker BYeah, I agree.
Speaker B: And one of the thoughts that I have pondered and struggled with, and I don't know that I have a solution yet: if we look at entry level jobs, the kind of jobs that students coming out of our programs would be stepping into right now, those jobs are potentially disappearing, making it harder for our students leaving our programs to be employed. So what can we do within our programs so that our students, at the end of their time at our institutions, are capable of doing that next level job beyond what is currently an entry level job?
Speaker BHow do we in education prepare them for that marketplace where the kickoff line, if you will, where that's positioned on the field is being changed?
Speaker BSo I think that forces us as educators to step back and say, how can we prepare our students better, to push them further, to be prepared for a marketplace that's expecting higher level skills, different skills, as our students leave their time with us?
Speaker AYeah, and it's a tough problem.
Speaker AShould we scale back, have fewer students?
Speaker AI don't think that's going to happen.
Speaker A: It may happen, not through choice, but through demographics.
Speaker AWe might argue that we should lean into the humanity of things, including the humanities.
Speaker ABut although I'm a huge believer in a classical liberal arts education, that doesn't really resonate beyond lip service with a lot of employers.
Speaker AThey'll say we want good communication skills, we want good critical thinking skills.
Speaker A: And then they hire based on how well you know Python, or whatever it might be in a different field.
Speaker ASo I'm a little bit skeptical of that, although I think that should be part of it.
Speaker AShould we look at some kind of thing like the old school co op programs where you go to school for a term and work for a term, almost like an apprenticeship.
Speaker A: But I think now is the time to start thinking about a lot of these things because it's happening now and it's going to accelerate.
Speaker AWhere I think AI may be a little bit different than some of the other types of technologies that you were talking about earlier, Rob, is that individual workers and individual departments can do a lot with AI without having a lot of higher level organizational coordination.
Speaker A: So it reminds me a little bit of, and I'm going to go way back here, the Apple II and VisiCalc.
Speaker A: VisiCalc was the first spreadsheet for microcomputers, the PCs of that era.
Speaker A: And a lot of people in accounting and finance went out and spent significant amounts of money to buy an Apple II and VisiCalc because you didn't have to have somebody above you telling you what to do.
Speaker AYou could just get this calculator, this visible calculator, spreadsheet and start doing stuff with it.
Speaker AAs opposed to if you're going to put in an enterprise system or you're going to put in some big widespread system, it's got to be centrally coordinated.
Speaker AAnd I think AI is different.
Speaker A: And we're seeing a lot of what we might call grassroots adoption, where maybe AI is even banned in the organization, but they're hotspotting their personal laptop to their phone because they can get their work done in half the time.
Speaker ASo I think it feels a little bit different this time.
Speaker A: I'm going to go out on a limb and say that I think we need to start teaching AI supervision, or what some might call AI orchestration.
Speaker AHow do you hire AI agents?
Speaker AHow do you oversee those agents?
Speaker A: How do you perform the human checks necessary, and know when those checks are necessary? All those kinds of things, how do you do that?
Speaker AAnd I think that's one of the things now we're in business, but I think business schools absolutely need to be pursuing that.
Speaker ABut that's a lot.
Speaker AThat's a pretty tall order.
Speaker BYeah.
Speaker B: And when I think of history repeating itself, because that's where you bring up VisiCalc and the Apple II, I think back to the days of Microsoft Excel, and we think about what happened with Enron and Sarbanes-Oxley, some of those sorts of things where we got some real regulation that actually created a lot of work for accountants, because people throughout organizations were using these new tools and creating their own little ecosystems of data that they knew about, that they took care of, that weren't part of the corporate ecosystem.
Speaker BAnd it created problems because numbers were not accurate.
Speaker B: People were using them to fudge things; they were doing things that were unethical.
Speaker BAnd I see a lot of potential in the world of AI for unethical behavior, doing things outside of the purview of the organization.
Speaker BAnd that's probably going to happen, and we're going to see it.
Speaker BWe are seeing it happen right now.
Speaker BBut there's going to come a point where something really bad happens.
Speaker BI hate to, you know, be the bearer of bad news, but something bad is going to happen that I think is going to wake up legislative bodies that oversee things and begin requiring and mandating certain types of reporting, certain types of implementation that are going to force some framework, some level of controls around how this is deployed.
Speaker B: And again, I think it goes back to: we need to train our students not only, as you said, to have some sort of oversight over what these things are doing, but also to think about these new technologies in terms of, just because you can do something, should you?
Speaker BWhat are the ethics in those decisions?
Speaker B: What are the best practices for leaning into these new technologies in a way that helps the business do good things, aligned with the best interests of its stakeholders, as opposed to what might be best for me, to be the most productive in my job, but that takes shortcuts around the proper controls my organization would want in place to ensure that what's being created and done is the best thing for that organization?
Speaker ASo let's tie that into higher ed after two quick sidebar comments.
Speaker AOne is that it still amazes me that a problem caused by accountants led to the need for more accountants.
Speaker AI'll have to give our accounting colleagues a little tip of the hat on pulling that one off.
Speaker ABut the other thing is people are still using spreadsheets to do those same things today, and not necessarily just unethically.
Speaker AMatter of fact, the unethical piece is a tiny but dramatic portion of it.
Speaker ABut what does that mean for higher ed?
Speaker A: What it means for higher ed is that regardless of the kind of organization your graduates are going to go into, they need to gain a skill we could maybe call AI discernment.
Speaker AShould you be using AI?
Speaker AWhat kind of oversight should it have?
Speaker AWhat are the ramifications of AI being wrong or being more or less efficient or more or less effective?
Speaker AAll those kinds of things.
Speaker AAnd I think that's one of the things that we need to be trying to figure out how to teach and frankly, trying to figure out AI discernment ourselves.
Speaker A: I think very few faculty, me included, really have that figured out.
Speaker ABut I think that's one of the things that we're going to need to give a lot of thought to and pretty quickly because it's coming and it's coming fast.
Speaker A: So I have another thought, and this is a little bit weird, so stay with me here.
Speaker A: One of my doctoral students is defending his dissertation proposal tomorrow, and he's looking at AI anthropomorphism, which is just assigning human-like characteristics to AI, and it's something we do.
Speaker A: We say things like "my stupid car hates me," that kind of thing.
Speaker AWe do that just as humans.
Speaker ABut I wonder if we shouldn't lean into this a little bit and start thinking about AI as being almost like an employee where we have oversight until we get to where we can trust the employee and we train the employee and we put guardrails in place.
Speaker AI mean, you talked about regulation and kind of implied governance, but all those are guardrails, you know, kind of, to use the AI terminology.
Speaker ASo I'm wondering if we.
Speaker AI know this is very controversial, but I wonder if we shouldn't kind of really lean into that anthropomorphism.
Speaker ASo am I way off base there or what do you think?
Speaker BWell, let me ask one clarifying question before I respond, which is, what do you mean by lean into?
Speaker BWhat does that look like?
Speaker AI don't know.
Speaker AI'm making this stuff up as we go along.
Speaker ANo, no, seriously.
Speaker A: What I think that means is that our mental models, especially for the types of AI that you and I talk about, and even to an extent the more agentic AI where AI kind of goes off and does its own thing, ought to be of those as human-like, because then we'll do the things that we do with human employees.
Speaker AI mean, I hear people say, well, AI is no good because it makes logic errors and it makes stuff up and it gets facts wrong.
Speaker AI got news for folks.
Speaker AHumans do that too.
Speaker AAll the time.
Speaker AAll the time.
Speaker AAnd so what do we do to safeguard our colleges, our universities, our businesses, whatever it is, against those kinds of harms with humans?
Speaker AWell, maybe we need to think about doing the same sorts of things for AI.
Speaker A: But I'm really... this is really fuzzy in my brain right now, as you can tell.
Speaker BYeah.
Speaker BAnd I think this is one of the challenges of AI, you know, replacing workers or augmenting workers or whatever that looks like.
Speaker B: I think we have a human tendency to be more forgiving of another human doing exactly those things than of a machine doing those exact same things. Our tolerance just seems to be, "oh yeah, I would have made the same mistake," versus, "oh, it was a computer that made the mistake."
Speaker BObviously I'd rather be dealing with the person.
Speaker B: And so I do think that's going to be one of the challenges going forward.
Speaker BAnd if we think of the machine more like a human, perhaps that starts to change our perspective and how we respond and we become a bit more accepting of it.
Speaker BI could see that happening.
Speaker BYeah.
Speaker B: The other challenge, I think, is related.
Speaker B: One of my kids recently started DoorDashing, so I've gotten to learn more about the algorithms, and I'm pretty sure they use AI in that.
Speaker B: And DoorDash, as best I can tell, has designed its algorithms to be very capitalistic, profit driven.
Speaker AAnd I'm shocked.
Speaker ARight.
Speaker BShocking.
Speaker BBut what I see in that is a lot of frustration in my kid who's like, why am I working for so long and making so little money?
Speaker BAnd the algorithm is deprioritizing me for this, that and the other.
Speaker BBut it's that computer doing something, trying to make the most money.
Speaker BBut when we think about even the larger deployment of AI technologies and algorithms to many more places, it's going to affect many more people in various different roles.
Speaker BHow does the human respond to feeling devalued, set aside by the machine and so forth?
Speaker BAnd what does that process look like in a way to where human dignity and human respect in the workplace and so on remains?
Speaker BAnd it doesn't just become a, you know, how can we make the most money for the least amount of cost?
Speaker AYeah.
Speaker AAlthough that's really not new, is it?
Speaker AWhat I think I'm hearing you, and really both of us say, is that we need to start thinking about this and having conversations about this now because there's a lot of ground to cover and we've got to catch up.
Speaker AThat's what really makes this different in a lot of ways, is that administrations, faculty, student support staff, this is new for all of us.
Speaker AAnd I mean, you talk to a lot of people about AI.
Speaker AI would say that, oh, maybe what, 20% of the faculty have really spent much time at all with AI.
Speaker A: And I'd say a smaller percentage than that spend much time thinking about it.
Speaker BYeah.
Speaker B: And I think so. As an information systems scholar and teacher, one of the things that we share with our students, or at least I share with my students, is that what's great about being an information systems student is: I'm going to teach you some things while you're here with us for a couple of years in our program, and two years after you leave, you're going to have to learn all new things, because that's what happens in the world of information systems.
Speaker BThings change.
Speaker BYou've got to retool, you've got to relearn.
Speaker B: I think that's becoming more and more true for every discipline. Instead of learning a particular tool and how it works today, and thinking we can rely on that for 10 or 20 years down the road, what's more important is that we figure out how to learn a tool we didn't know how to use before, and that we have a process for doing that, whether it's in writing, information systems, or programming, as opposed to learning how to apply one particular tool and then rinsing and repeating for a while.
Speaker BAnd I think where this is really interesting as we go back to even the impact these changes are going to have on society is how do we prepare a generation of students to be involved in the conversation of what does it look like to regulate this, to put frameworks on it?
Speaker BHow do we, you know, prepare, whether they're business students or liberal arts students or engineering students, so that way they can be part of the conversation, leaving higher education to say, yeah, this is what we need to do as a society so a we can reap the benefits, we can see what benefits we're going to get, but how can we do it in such a way to where we're comfortable with what the deployment of that looks like from a societal perspective?
Speaker AYeah.
Speaker AYeah, I couldn't agree more.
Speaker AAll right, any last thoughts on this one?
Speaker AAnd then we're going to change topics.
Speaker ANope.
Speaker BNo.
Speaker AOkay.
Speaker ASummer plans.
Speaker ASo it's summertime.
Speaker AWhat sort of summer plans do you have, Rob, when it comes to AI?
Speaker BYeah.
Speaker BSo I actually decided at the beginning of the summer to completely redesign how I'm teaching my undergraduate class this semester and to 100% lean into how do I do this in an AI sort of way?
Speaker BWhat does that look like?
Speaker B: Well, I have created an agent, a bot, if you will, that is my textbook, seeded with some websites with some really good information that I think is exactly what you would find in a textbook.
Speaker BAnd I'm going to use that with some guidance for my students to get them used to working with agents.
Speaker BTo get information, to acquire information, to do some critical thinking around that.
Speaker BHow do they even validate that they believe what this agent is telling them?
Speaker B: So that way there are some checks in that.
Speaker BAnd then for a class project I'm working on designing, I'm planning on having my students create a tool themselves around the topic of cybersecurity, which is what I'm teaching, that requires them to use generative AI to create some sort of a tool, whether it's an agent or otherwise.
Speaker BReally challenging them to begin doing this and begin in hopefully a safe environment.
Speaker B: Perfection of the tools is not what's going to be looked at, but more how creative you were in the process of trying to use these new tools.
Speaker B: So this is kind of me taking the all-in approach: what does an all-in approach look like for the class that I'm teaching?
Speaker A: So, a couple of questions; that sounds very interesting.
Speaker AFirst is, how are you going to get students to make sure that they're having the proper oversight?
Speaker A: Like, I was using ChatGPT-4o for a little task today, and it gave me a paper that I'd never heard of before, but it was precisely what I needed, which, for those of you who have used AI for much research, is like, yep, that is pretty suspect.
Speaker AAnd sure enough, it was a complete hallucination or confabulation.
Speaker AConfabulation is so much more of a fun word than hallucination.
Speaker ABut I mean, I kind of knew if that paper was out there, I would have seen it.
Speaker ABut students aren't going to know that.
Speaker ASo what are you going to do to kind of teach them how to have the right kind of oversight?
Speaker B: So for the course itself, I'm using Copilot.
Speaker B: And you can create agents within Copilot, and it plugs in nicely with Teams.
Speaker B: So it allows me to keep the agent I've designed within my institution.
Speaker BAnd as part of this agent, I actually gave it three or four web links, which is where it goes and gets that information from.
Speaker B: Unfortunately, you can break out of the limits of those web links, where it should be getting its information from, and get something that's not in that space.
Speaker BSo what I'm asking students to do is to share with me whenever they turn something in.
Speaker BWhat is your prompt?
Speaker BWhat are the results?
Speaker BWhat is your reflection on that prompt?
Speaker B: How did you validate that you could believe it? Because it does give you links to where it found that information.
Speaker BSo I'm trying to create an environment where students have to go out and say, okay, this is what it told me.
Speaker BIt's good, but let me go review the source material and make sure that it's believable still.
Speaker BAnd so in my eyes, it's that process that I care more about, you know, assigning points to, if you will, than it is what is the actual outcome.
Speaker B: And then hopefully in class we can create a critical thinking environment through class participation and those sorts of things, where I question them and ask them probing questions, digging into things to see where the extent of our learning hits a wall or where we need clarification, and so on and so forth.
Speaker BSo it becomes more about process than the production of some sort of a document.
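For instructors who want to capture the process Rob describes (prompt, results, validation, reflection) in a consistent way, here is a minimal sketch of a structured submission log. The field names and example values are illustrative assumptions, not Rob's actual rubric.

```python
# A lightweight, hypothetical way to record the "process over product" artifacts
# Rob asks students to turn in: the prompt, the model's answer, how it was
# validated, and a reflection. Field names are illustrative, not a real rubric.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIWorkLog:
    prompt: str                 # the exact prompt the student used
    model_response: str         # what the AI returned
    sources_checked: list[str]  # links the student followed to verify claims
    validation_notes: str       # what held up and what did not
    reflection: str             # what the student would do differently next time

log = AIWorkLog(
    prompt="Explain the difference between phishing and spear phishing.",
    model_response="...",
    sources_checked=["https://example.edu/phishing-overview"],  # placeholder URL
    validation_notes="Definitions matched the source; one statistic was unsupported.",
    reflection="Next time I will ask the model for citations up front.",
)
print(json.dumps(asdict(log), indent=2))
```

Something this simple can be graded on completeness of the validation and reflection fields rather than on the polish of the AI output itself.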
Speaker AGreat, great.
Speaker A: So one thing, and I don't think this is going to apply to your exact example, but one thing that I want listeners to be aware of: if they try to take something like an open educational resource textbook and upload it into AI, as the context window gets closer and closer to being full, it starts making up more stuff, so it becomes less reliable about sourcing from within the document you provided.
Speaker ASo just be aware of that if you're trying some of these experiments.
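One common way around the full-context-window problem Craig mentions is to split the source document into chunks and send the model only the passages relevant to each question. The sketch below is a toy version using crude keyword overlap; real tools, NotebookLM included, use embedding-based retrieval, and the function names here are illustrative.

```python
# Minimal sketch: instead of pasting an entire textbook into the prompt,
# split it into chunks and include only the passages most relevant to the
# question. Keyword overlap is a toy relevance score; production systems
# use embedding search instead.

def chunk_text(text: str, words_per_chunk: int = 300) -> list[str]:
    """Split a long document into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

def score(chunk: str, question: str) -> int:
    """Count how many question words appear in the chunk (crude relevance)."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in question.lower().split() if w in chunk_words)

def build_prompt(document: str, question: str, top_k: int = 3) -> str:
    """Keep the prompt small by including only the top-k relevant chunks."""
    chunks = chunk_text(document)
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return ("Answer using ONLY the excerpts below. "
            "If the answer is not there, say so.\n\n"
            f"{context}\n\nQuestion: {question}")
```

The point is simply that a smaller, targeted context tends to stay grounded, while a nearly full context window invites confabulation.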
Speaker ASo the second question I had is what kinds of agents do you expect your students to produce?
Speaker AGive us an example.
Speaker B: One of the things, as I've read some headlines lately: a lot of elderly people are being scammed, and the scammers are draining their life savings out of their bank accounts because they're getting really, really good.
Speaker BI'm pretty sure they're using AI tools to do it.
Speaker BCan we create agents that can help detect when something might be a scam and when something's not?
Speaker B: You know, that's one area. Or tools to check emails: are they spam or are they not?
Speaker BBecause those are getting better and better at how things are done.
Speaker B: Or maybe, if they want to get into the world of looking at data, all the data that might come from a tool monitoring network traffic. Network traffic creates a ton of data, and a human who looked at everything couldn't discern what's good and what's not good.
Speaker B: And there are tools to help you do that, but are there agentic tools that they could create that would allow them to play in that space?
Speaker B: So I want to give them freedom to do anything related to security when it comes to that.
Speaker B: But those are just a few ideas off the top of my head where I'm like, it'd be kind of cool if a student could do that.
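As one concrete illustration of the scam-screening project Rob sketches, the snippet below asks a language model to triage an email. It assumes the OpenAI Python client with an API key in the environment; the model name and prompt wording are placeholders, not a vetted detector, and a student project could swap in any provider.

```python
# Hypothetical sketch of a student project: ask an LLM to flag whether an
# email looks like a scam. Assumes the OpenAI Python SDK (>=1.0) and an
# OPENAI_API_KEY environment variable; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def looks_like_scam(email_text: str) -> str:
    prompt = (
        "You are helping screen email for an elderly relative. "
        "Classify the message below as SCAM, SUSPICIOUS, or LIKELY SAFE, "
        "then explain your reasoning in two sentences.\n\n" + email_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

# Example:
# print(looks_like_scam("Your grandson is in jail and needs gift cards immediately."))
```

As Rob notes, the grading emphasis would be on the student's process and validation, not on whether this little classifier is production quality.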
Speaker AAnd these are kind of like custom GPTs rather than multi stage agents that control your computer and that kind of thing.
Speaker BRight, that's what I'm thinking.
Speaker BBut if a student went multi stage and did something crazy, awesome, I would love to see that.
Speaker BSo I really want to create a safe space for students to explore and to play where failure is not going to be punished.
Speaker B: Because to me, that's how you're going to learn something: reflecting on something that you thought was a good idea that then didn't quite turn out the way you wanted.
Speaker AI want you to say that again because it's really important.
Speaker BFailure is not a bad thing.
Speaker BI want to create an environment where students can experiment, can come up with a really good idea, and even if the execution of it isn't perfect, as long as they learn from their failure, I want that to be rewarded.
Speaker AI think that's a really critical message for faculty directed towards students and for administrators to think about towards faculty that are experimenting with these kinds of things that you're doing.
Speaker ABecause frankly, Rob, this could work out really, really well, or it could be a complete bust.
Speaker ABut if you, I mean, I know your dean, so I know she's not like this, but if your dean was going to just kind of rake you over the coals because you had bad evaluations due to a failure like that, then you wouldn't do it.
Speaker AI think that's really important for the faculty to keep in mind towards students and for administrators to keep in mind towards faculty.
Speaker A: I'm going to be interested to see how you think about this as you go through the fall semester.
Speaker ASo I'm going in a little bit different direction because I'm on sabbatical.
Speaker BLucky.
Speaker AYeah, yeah, yeah.
Speaker AI've been thinking about this at kind of not a higher level, but a more macro level.
Speaker AAnd this is what I would like to encourage listeners to be thinking about, especially faculty.
Speaker AWhat do you need to change in your courses that will disincentivize inappropriate AI use?
Speaker AAnd then how can you leverage AI to help students learn more effectively?
Speaker AAnd I don't think you have to go kind of all in like you did, Rob.
Speaker AI mean, I don't think people need to do that, but come up with a couple of learning activities that you can kind of recast to take one of those two paths.
Speaker AEither make it highly contextualized or make it more process focused.
Speaker AWhere it's harder to use AI inappropriately or do what you're doing and figure out how to get AI to help them learn, help students learn more effectively.
Speaker A: So those are two big things that I'd like to see our listeners, to use the word of the day, lean into as they go through the rest of the summer.
Speaker AAny last thoughts on summer?
Speaker B: Yeah, I would actually like to challenge our listeners, Craig.
Speaker BI'd like to challenge them to at least do one thing that brings purposeful AI use into their classroom.
Speaker BAnd if you've never done it before, it might be scary.
Speaker BBut again, I go back to the give yourself an opportunity to take a risk, to try and be willing to say you got it wrong and to learn from getting it wrong in front of your students.
Speaker BAnd I guarantee you, your students will respect you if you can admit, yeah, I tried this because I thought it was a good idea.
Speaker BIt didn't work, but here's what I learned.
Speaker BBut try to do one thing, if nothing else, because I think what you'll find is it's not as hard as it seems, and it's actually kind of exciting when you see how powerful it is.
Speaker AWell, once you get started on one, it's easier to do the second one and the third one.
Speaker AAnd I'll take that one step further.
Speaker AI think we're happy to help you with that.
Speaker A: If you get stuck on something and you need a little bit of help, just email me at craig@aigoestocollege.com and I'll share it with Rob.
Speaker AAnd yeah, we're happy to weigh in on that.
Speaker AWe enjoy thinking about this kind of stuff, which is why we do this podcast.
Speaker AAll right, that brings us to a related topic, our survey.
Speaker ASo if you are interested in participating in an AI learning activity repository, we need your help.
Speaker AAnd so let me back up.
Speaker AWe talked about this a few episodes ago.
Speaker ARob and I have been of the opinion for a while that we've got pockets of people working on a lot of different things, but not enough sharing going on.
Speaker ASo I was able to secure a small grant through our Dean Chris Martin's Just Business grant program that's going to fund the creation of an AI learning activity repository.
Speaker A: So basically what we're going to do is try to set up a repository: a big searchable database with hashtags, that kind of thing, that will be a place where people can share and find assignments and learning activities.
Speaker AI'm saying assignments, but I really mean learning activities because these things are going to come in different forms.
Speaker ASo we might be talking about information systems, but it really might not be all that different for biology.
Speaker AThe same kind of principles apply.
Speaker ASo even if it's a wildly different discipline, I think consulting such a repository would be a really good idea.
Speaker AAnd so what we would like for you to do is fill out this survey.
Speaker ADid you fill it out, Rob?
Speaker BI did not.
Speaker BI was planning on participating anyways because I figured I was helping administer it and I didn't see.
Speaker AI think it's going to take, I don't know, maybe two or three minutes to fill out.
Speaker AI mean, it's got maybe what, eight or 10 questions.
Speaker AIt's not a big deal.
Speaker AAnd I'm going to check the URL here.
Speaker AWant to make sure I've got it right.
Speaker A: And so if you go to aigostocollege.com/survey2025, that's survey2025.
Speaker AIt's just a little Google Forms survey.
Speaker AYou can leave your email address if you want to, but you do not have to.
Speaker AIt can be completely anonymous.
Speaker AAnd it just kind of asks you about a little bit about yourself, what discipline you're in, that sort of thing, and then how you might like to participate.
Speaker ASo, Rob, do you want to go through some of the early results?
Speaker BSure, I'll just talk about them.
Speaker AI've just got to find them on the right screen.
Speaker AI've got them somewhere.
Speaker BToo many windows.
Speaker AToo many windows.
Speaker AOh, here it is.
Speaker AIt's in the tab right next to the one that I'm looking at you in.
Speaker AOkay, so let's just run through these really quickly.
Speaker ASo right now, 76% of the respondents are faculty.
Speaker AAnd then we have some instructional designers, grad students, some administrators from a wide variety of disciplines.
Speaker ACultural anthropology, English, English as a second language, history, organizational behavior, information systems, translation theory and practice.
Speaker AAnd so there are a bunch of them.
Speaker ASo it's a nice breadth of responses so far.
Speaker AMostly from research universities, but some community college and regional comprehensive schools, liberal arts.
Speaker BSo, Craig, what I love about the people who are taking this and are listening to it is we almost have an equal distribution of people's experience with generative AI from just starting to explore.
Speaker BThey've tried a few things all the way up through their training and supporting others.
Speaker BSo there's five categories there.
Speaker BBut this is a great space, I think, where there's an opportunity for people to help others and to develop a community, where, you know, we meet people at where they're at.
Speaker AYeah, that is really interesting.
Speaker A: It's all pretty close; it ranges from 14% to 24%.
Speaker ASo it's really pretty tight across the five categories.
Speaker ASo if you're not really doing a lot with AI yet, or if you're teaching others how to use AI, either way, we think you can benefit from the repository and we would really like to see you participate.
Speaker A: Top concerns, not surprisingly: creating assignments that promote ethical AI use, leveraging AI to promote student learning, and effects on critical thinking.
Speaker AThose are the big ones so far.
Speaker AAnd 95.2% of the respondents think that such a repository would be valuable.
Speaker A: I don't know about the 4.8% that said maybe.
Speaker AThankfully nobody said no yet.
Speaker ASo anyway, we're planning to have clearly defined learning objectives, having some hashtags for discipline level, type of AI use, maybe some teaching notes.
Speaker ASo we're kind of open to exactly what's going to be in the repository.
Speaker A: So people are ready to give us full assignments, small in-class activity ideas, reflections, syllabus language, all kinds of things.
Speaker A: So if you are willing to participate, or interested in participating or staying abreast of what we're doing, just go to aigostocollege.com/survey2025. Rob, what are you thinking?
Speaker BI'm excited about this, Craig.
Speaker BI think if we deliver on what we're promising and we do a good job with it, I see this as a sort of tool where it'll be a lot easier for us in the hallways to help our peers and to help our colleagues to do more.
Speaker BAnd hopefully this is one step of many and helping to define and develop a community of people that are trying to prepare our students for the new tomorrow.
Speaker AGreat.
Speaker AAnd I forgot to mention this is going to be 100% absolutely free, open source.
Speaker AEverything there is going to be under some sort of Creative Commons attribution license.
Speaker ASo we're not trying to make any money off of this.
Speaker AThis is not going to be behind a paywall.
Speaker A: It's just something that the two of us and Louisiana Tech University are going to make available to the public, and we'll have some sort of mechanism in place to protect it from students getting into it and that sort of thing.
Speaker ASo, I mean, I don't know that we can do that 100%, but I think we'll come close.
Speaker AAll right, so I hope you will fill out the survey again.
Speaker A: That's aigostocollege.com/survey2025.
Speaker AAll right, one more topic.
Speaker AAre you ready?
Speaker A: So NotebookLM, which is one of the more useful kind of specialized tools out there in the AI landscape, have you played with their mind maps?
Speaker BI have not played with it like you have, Craig?
Speaker BI did a little bit, but I've not been a person who thinks like mind maps, so it didn't get me as excited as it got you.
Speaker AYeah, I was skeptical of this when they first rolled it out.
Speaker ASo, NotebookLM is a form of retrieval, augmented generation AI, which basically means that you upload knowledge, resources, documents, links to Google Docs or YouTube videos.
Speaker AThere's a wide array of things that it can handle.
Speaker AThen you can chat with NotebookLM about those documents.
Speaker A: So it will get its understanding of grammar and general knowledge and that sort of thing from its training data, from the regular model that's already there, but it will augment that with retrieval from the documents you provided.
Speaker ASo basically it'll answer questions based on your documents and it'll cite the exact spot in the document where it's pulling information from.
Speaker AIt's really fantastic.
Speaker AIf you haven't checked it out, you should.
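For readers who want a feel for the grounded, citation-backed answering Craig describes, here is a minimal sketch of the general retrieval-augmented pattern: label each excerpt with its source and instruct the model to cite those labels. NotebookLM's internals are not public, so this only illustrates the idea; the prompt wording and source labels are assumptions.

```python
# Minimal sketch of the retrieval-augmented pattern: ground answers in your
# own excerpts and ask the model to cite the bracketed source labels.
# This is illustrative; it is not how NotebookLM is actually implemented.

def grounded_prompt(sources: dict[str, str], question: str) -> str:
    """sources maps a label like 'ai_bias_report.pdf, p.12' to excerpt text."""
    excerpts = "\n\n".join(f"[{label}]\n{text}" for label, text in sources.items())
    return (
        "Answer the question using only the excerpts below. "
        "After each claim, cite the bracketed source label it came from. "
        "If the excerpts do not contain the answer, say you cannot find it.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )

# Example: build a grounded prompt for whatever chat model you have access to.
prompt = grounded_prompt(
    {"ai_bias_report.pdf, p.12": "Statistical bias arises when training data..."},
    "What do the sources say about statistical bias?",
)
print(prompt)
```

The citation instruction is what lets you jump back to the exact spot in the source, which is the behavior Craig highlights next with the mind map feature.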
Speaker ABut when I heard about the mind maps, I thought, wait a second.
Speaker AThe whole thing about a mind map is for you to put together your cognitive model, your mental model about this topic.
Speaker AAnd I love them.
Speaker AYou know, they really helped me kind of organize my thinking about things.
Speaker AI thought, I don't need AI to do that.
Speaker ABut it's really pretty cool.
Speaker ASo when you get into NotebookLM, it'll have a little button that says mind map, and you click on that and surprisingly, it creates a mind map.
Speaker AThis is where it gets really cool.
Speaker ASo it creates the mind map and it's interactive.
Speaker ASo I'm looking at one that I created on AI bias, and it has five different things.
Speaker ASources of bias, types of bias, impact of bias, addressing bias, and AI applications.
Speaker AWell, I want to look at types of bias so I can click on types of bias, and it gives me a list of different types of bias.
Speaker AAnd I'm not going to run through those because there are a bunch of them, but here's where it gets really cool.
Speaker ASo the first one on its list is statistical bias.
Speaker AI click on that and it automatically goes into the chat window and says, discuss what the sources say about statistical bias and the larger context of types of bias.
Speaker ASo now I jump straight into a pretty good answer about statistical bias.
Speaker ASo I know this is a little bit tough to follow on audio, but hopefully you'll go out and try this for yourself.
Speaker ASo all you need to do, go into NotebookLM, upload some resources, create the mind map and start playing around.
Speaker ARob, you were talking about kind of using AI for your class.
Speaker AIf you could find some Good source documents, whether it's an open textbook, whether it's a bunch of PDF files.
Speaker A: NIST. I'm sorry, NIST, the National Institute of... help me with that, what are the last two?
Speaker A: Standards and Technology. It has a bunch of reports for geeks like us.
Speaker AWhatever it might be for your field, upload those, create the mind map and it gives you a nice organization for an entire course or for a module for a course, whatever it might be.
Speaker ABut if you haven't played around with it, I really encourage you to do so.
Speaker A: I will put a link in the show notes to an article I wrote about it for the AI Goes to College newsletter that has some screenshots and that sort of thing, which may make it a little bit easier to follow.
Speaker ASo, Rob, did I make a complete mess of that or did you understand what I'm talking about?
Speaker BNo, I followed what you talked about and I would encourage people to read your newsletter that you wrote about this because I think it complements it nicely and it lets you see exactly what you were talking about.
Speaker BAnd I love the connection of this to what I was talking about, what I'm doing with my class, because I could totally see with the right material that this could be what drives, whether it's a week or two of class lecture or an entire semester, depending on what body of knowledge you're able to develop.
Speaker B: And again, Craig, it took you how long to create this? Maybe...?
Speaker AOh, if that.
Speaker AI mean, if you have the knowledge resources already there.
Speaker ASo let me back up.
Speaker ASo what I really like doing is doing a deep research report, or maybe several.
Speaker A: Like I did one earlier today, where I had deep research reports from Claude, which now has research, and from ChatGPT and Gemini.
Speaker A: I put all of those into NotebookLM and then started creating some resources from that, including using the mind map.
Speaker AAnd it's really, really great to have that sort of workflow to get a good handle on a topic quickly.
Speaker ASo I would encourage listeners to play around.
Speaker AI'd love to hear what you're doing with it.
Speaker A: Email me at craig@aigoestocollege.com; feel free to let me know what you are doing.
Speaker AAll right.
Speaker AAnything else, Rob?
Speaker BNothing else, Craig.
Speaker BI think this was a good episode and again, I encourage you to take some risks and be willing to fail.
Speaker ANow I'm wondering what it means when you don't say, I think this is a good episode.
Speaker ASo I don't know.
Speaker BEvery episode's a good episode, Craig.
Speaker BI mean, it's you, me, what do you want?
Speaker AAnd again, please fill out the survey about the repository Even if it's not something you're interested in, we'd still like to kind of hear what you're thinking.
Speaker A: aigostocollege.com/survey2025. I think I mentioned that before.
Speaker AAll right, that's it for this time.
Speaker A: Again, all things AI Goes to College are available at aigostocollege.com, and thank you very much.
Speaker AWe will talk to you next time.