Building Resilience in the AI Era: What Faculty Need to Know (Live from ICISER)
Episode Description
Join Craig and Rob for the very first live stream of AI Goes to College, recorded at the International Conference on Information Systems Education Research Workshop. In this special episode, we explore how generative AI is fundamentally changing knowledge work, starting with our own field of Information Systems as the "canary in the coal mine."
Craig shares his surprising experience with vibe coding—creating deployable web applications and productivity tools in hours rather than days—and explains why this signals a massive shift coming for all knowledge workers. We also tackle the troubling trend of students using AI to avoid productive learning friction, discuss the dark side of AI monetization and data privacy, and wrestle with difficult questions about AI companionship in an increasingly lonely society.
This conversation moves beyond the hype to examine both the genuine opportunities and serious concerns that educators and technologists need to grapple with as AI becomes embedded in every aspect of work and learning.
Key Topics & Timestamps
- Welcome and introduction to the live format
- Rob's surprising AI use case: Students creating machine-voiced presentations to avoid public speaking
- Craig introduces vibe coding and creating deployable apps in minutes
- Information Systems as the "canary in the coal mine" for knowledge work disruption
- When vibe coding works (and when it doesn't): Simple vs. enterprise applications
- The 50% principle: "50% is greater than 100%"
- How AI changes systems analysis and prototyping
- The job market reality: Entry-level positions disappearing
- What should we be teaching students now?
- Privacy concerns and institutional AI tools
- The monetization problem: When AI platforms need to make money
- AI companionship and mental health concerns
- Using AI for 24/7 policy questions and course support
- Should we accept AI as a solution to technology-created loneliness?
Key Insights
The 50% Principle: Stop trying to get AI to do 100% of a task. Instead, focus on tools that save you half the effort—that's where the real value lies.
Vibe Coding Reality: It's not for enterprise-scale applications, but it's revolutionary for rapid prototyping and creating simple, personal productivity tools without needing current coding skills.
Productive Friction: Students are increasingly using AI to avoid uncomfortable but necessary learning experiences, like public speaking, removing the "friction points" that actually drive growth.
The IS Canary: Information Systems professionals are experiencing AI disruption first, but similar transformations are coming for accounting, finance, law, and virtually all knowledge work.
Privacy Warning: As AI companies struggle to monetize, expect increased data harvesting and advertising. Consider running local models for sensitive work.
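To make the "run local models" advice concrete: tools like LM Studio serve a locally downloaded model over an OpenAI-compatible HTTP API, so sensitive prompts (grading notes, draft manuscripts) never leave your machine. The sketch below is illustrative only; the port, endpoint path, and model name are assumptions that depend on your particular local setup.

```python
import json
from urllib import request

# Assumed default for a local server such as LM Studio's; the port,
# path, and model name are placeholders -- check your own install.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"


def build_chat_payload(prompt, model="local-model"):
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask_local_model(prompt):
    """POST the prompt to the locally hosted model and return its reply.

    Because the endpoint is localhost, nothing is sent to a third party.
    """
    data = json.dumps(build_chat_payload(prompt)).encode("utf-8")
    req = request.Request(
        LOCAL_ENDPOINT,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return the reply under choices[0].message
    return body["choices"][0]["message"]["content"]
```

The same pattern works with any tool that exposes an OpenAI-compatible endpoint locally; only the endpoint URL and model name change.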
Resources Mentioned
- AI Goes to College website: aigoestocollege.com
- LM Studio: Tool for running large language models locally
- Claude Code, Codex, Antigravity: Professional coding environments mentioned
- Meta's Llama: Open-source AI model (though future releases are uncertain)
Credits
Hosts: Craig Van Slyke and Rob Crossler
Audio: Hazel Crossler
Sponsored by: Association for Information Systems Special Interest Group on Education (SIG ED) https://ais-siged.org/
Event: International Conference on Information Systems Education Research Workshop
Special thanks to: Conference organizers Tanya McGill and Rosetta Romano
Companies mentioned in this episode:
- Washington State University
- AIS SIG ED
- OpenAI
- Claude Code
- Gemini
- Copilot
- Prospect Press of Vermont
Mentioned in this episode:
AI Goes to College Newsletter
00:41 - Introduction to AI Goes to College
01:06 - The Impact of Generative AI on Knowledge Work
16:01 - Addressing AI Resilience in Education
22:31 - Scaling Higher Education: Challenges and Solutions
28:31 - Encouraging Faculty to Embrace Change
30:47 - Understanding AI in Education
42:49 - The Impact of AI on Companionship and Loneliness
Welcome to the very first live stream of AI Goes to
Speaker:College, the podcast that helps you figure out just what in the world is going
Speaker:on with generative AI and higher ed.
Speaker:So we want to thank the organizers of the International Conference on
Speaker:Information Systems Education Research Workshop, the
Speaker:conference committee. Tanya over there, who... you can't see. Well, it's
Speaker:a 360 camera. You might see her back there in the back. And Rosetta
Speaker:Romano, who is online. She was the one who came up with this idea.
Speaker:All we did was say, okay. And this is sponsored
Speaker:by the Association for Information Systems Special Interest Group on Education,
Speaker:better known as SIG ED, because I'm not saying that again. So this is
Speaker:sponsored by SIG ED, and we want to thank the SIG ED board for helping
Speaker:us. And we also want to thank Hazel Crossler for
Speaker:handling the sound. That's a really clever way
Speaker:of saying, if there's any problem with a sound, blame it on
Speaker:Hazel Crossler, who is, yes,
Speaker:the child of Rob Crossler and a
Speaker:master's student in music at Middle Tennessee State University, which makes it all the
Speaker:more embarrassing if the sound really stinks.
Speaker:We want to make this very loose. As you can tell, we've got some
Speaker:questions, but we sincerely welcome questions from the audience,
Speaker:either online or those of you who, for some reason just
Speaker:have to sit down instead of all these tables that we set up our entire
Speaker:arrangement around. Well, let's get started. So, Rob,
Speaker:what are some surprising generative AI
Speaker:uses that you've run into? Surprising. None of the normal stuff.
Speaker:Surprising. The surprising one that made faculty
Speaker:squirmish was students learning how
Speaker:to create transcripts for their presentations
Speaker:and then feed those transcripts for their presentations into an AI
Speaker:tool that automatically created voices for
Speaker:their presentations, which stayed under the five minute limits and then were the most
Speaker:boring presentation you ever saw, perfectly dictated
Speaker:by a machine with a transcript created by a
Speaker:machine. Squirmish is a word now,
Speaker:is that okay? Okay. We've coined a word
Speaker:that is surprising. What's surprising is that had to be
Speaker:at least as much work as just recording the stupid video. It
Speaker:was, but the students were comfortable because they didn't have to stand up in front
Speaker:of people and talk themselves. They were able to push off
Speaker:what I would call that friction point of professionally presenting and push that
Speaker:to the machine. Yeah, that. That's a problem.
Speaker:That's a problem. We're going to get into that sort of problem a little bit
Speaker:more later. I've been doing vibe coding.
Speaker:We'll talk. Thank you. Yes. By the way, all things AI Goes to
Speaker:College are available at aigoestocollege.com. So, who here has
Speaker:vibe coded? It is the strangest
Speaker:thing. You tell the
Speaker:AI interface what you want and it creates it.
Speaker:So do you all know what a landing page is? So you just have this
Speaker:landing page. It's a website, it's not a big deal to create. But if you
Speaker:want one that really looks good and that drives the
Speaker:kind of action you want it to drive, it takes a while to
Speaker:do. I created one in about 15
Speaker:minutes. Deployable out there on the web,
Speaker:ready to go in 15 minutes. And I did several other things. I did these
Speaker:micro apps, which I think is a made-up word,
Speaker:like squirmish. These little personal...
Speaker:In the world of new technologies, Craig, we're allowed to make up new words. That's right. That's
Speaker:what we do. And so these, these are little productivity apps that I don't
Speaker:want to pay a subscription for. A time blocking app, one that
Speaker:helps me keep track of my projects with like a Kanban
Speaker:board. These are things that I was paying 20 or 30 bucks a month for.
Speaker:Over Thanksgiving weekend, I
Speaker:created these apps. And there's a payoff to this.
Speaker:I'm not bragging because anybody could have done this. I think
Speaker:this fundamentally changes the game for knowledge work.
Speaker:That's what's the most surprising thing to me. I'm convinced
Speaker:that information systems, our field, is
Speaker:what I call the canary in the coal mine. That's the
Speaker:metaphor that I'm using. It's the early signal for
Speaker:what's going to happen with knowledge work. This thing that used to take
Speaker:somebody with deep expertise days to do,
Speaker:I did in a matter of a few hours. And I quit
Speaker:coding 30 years ago. So I see this
Speaker:coming for knowledge work of all sorts. So let me ask a follow up question
Speaker:to that because I've seen that argument made. But the counter argument
Speaker:is it's easy for the simple things, but when you take that to
Speaker:the level of the enterprise and do
Speaker:complicated enterprise apps, it falls down. And people spend more time
Speaker:debugging and getting it to work than if they would have just done it themselves.
Speaker:How do you see those two views coming together? So vibe coding is not for
Speaker:that. That's not what it's for. What vibe coding is for
Speaker:is for doing little stuff quickly. There are other
Speaker:systems like Claude Code and Codex from
Speaker:OpenAI, and Google just released theirs, which has a very
Speaker:weird name, Antigravity. Those
Speaker:are environments where professional coders can make themselves
Speaker:more efficient. But that's not what vibe
Speaker:coding is. But One of the things we do, those of you who may not
Speaker:be in information systems, one of the things we've been trying to do
Speaker:for a long time is to not try to spend a lot of
Speaker:time figuring out what the requirements for a system are by talking to users,
Speaker:because they never know. You build these prototypes and let the
Speaker:users use the prototype and say, wait, it doesn't do this, or I don't like
Speaker:the way it does that. That's what you can do with vibe coding, because that
Speaker:never gets rolled out at scale with all the security controls
Speaker:and that kind of thing. But it takes hours and hours and hours to do,
Speaker:and now it doesn't. So what I hear you saying is that the way we
Speaker:do systems analysis and design, that process is
Speaker:completely changing and is able to do things
Speaker:perhaps differently than we've ever done before. I think that's the case.
Speaker:But I think that we're going to see this with other types of knowledge work.
Speaker:You know, right now it's us because, you know, we're gearheads and we were the
Speaker:early adopters for IS, I mean for AI, and
Speaker:it just kind of makes sense for us. But you're going to see this go
Speaker:into accounting, into finance, into law,
Speaker:and it's not going to be. This does everything it's going to be.
Speaker:It does this one thing really well,
Speaker:small scale, or it makes other things easier. One of the
Speaker:biggest mistakes people make with AI is they want it to do 100% of something.
Speaker:That's the wrong way to think about it. When I give talks, I have a
Speaker:slide that says 50% is greater than 100%. Try to get
Speaker:AI to save you half of the effort. We have
Speaker:faculty in the audience. What would you pay for something that cuts
Speaker:your grading time down in half? If I
Speaker:had kids, I don't know if it'd be my firstborn, but my second born would
Speaker:be in trouble. I have to say the same thing right now. My
Speaker:firstborn is standing right next to me. So you're, you're cool, but, you know,
Speaker:I don't know, but, but that, that's what vibe coding does. It
Speaker:may not do the whole scalability thing,
Speaker:and it shouldn't, but it can do a lot. And I
Speaker:really think that's what's coming for virtually
Speaker:everybody that gets a degree to get a job. Yeah, I think
Speaker:we're going to see some real changes. The prototyping, I
Speaker:think, is going to be the first easy one. And, and where I see that
Speaker:being great for information systems students in some ways is it
Speaker:doesn't require them to necessarily work with the engineers to
Speaker:create the prototype solutions. No
Speaker:offense to you engineers out there, it's not that we don't love you. But you
Speaker:think about then where does the engineer's time get spent? If we
Speaker:aren't quite yet at that point where we are able to
Speaker:completely redo the building of those enterprise systems, the focus can
Speaker:be on there. Another thing that I think is really important to point out as
Speaker:we think about these things as large language models
Speaker:which are part of the generative AI systems is there's a lot of pieces
Speaker:to the generative AI systems that need the technical know how to make
Speaker:sure that they work. And a lot of engineering thinking as
Speaker:businesses start to customize and make their systems work just for
Speaker:them, are going to be around that engine and that system that helps to
Speaker:automate this in ways that are specific
Speaker:to each company and to each industry. All right,
Speaker:Anything else on that one? No, that was good. All right. Well,
Speaker:we have questions from listeners. First one came from
Speaker:a student who I'm going to refer to as AM because
Speaker:that's their initials. So that's how I'm going to refer to them. So AM
Speaker:sent us an email a couple of weeks ago and he
Speaker:talked about desirable difficulty, which is a term that
Speaker:was introduced by Robert A. Bjork. And it's something
Speaker:that we've been calling useful friction. So you don't learn
Speaker:if you don't do any work. So
Speaker:one of the problems with AI used inappropriately is it removes friction.
Speaker:That's useful friction. And so he was commenting on that
Speaker:episode where we talked about that. But here's
Speaker:the question. In your experience as educators, what is necessary
Speaker:for desirable difficulty and how might generative AI
Speaker:be leveraged by the teachers and students to create
Speaker:that kind of difficulty? So we've been
Speaker:talking about this at Washington State University where I work a lot.
Speaker:And one of the solutions that we've been leaning towards
Speaker:and trying to push more towards is live projects in
Speaker:more classes where we take something that is ill defined from
Speaker:organizations that are willing to work with our students and give them these
Speaker:real world problems that can't just be fed into
Speaker:the machine to find an answer. And so the machine might be part of getting
Speaker:there, but that process of large scale problem solving
Speaker:is what we're trying to do more of.
Speaker:Okay, is that trying to maintain the friction? Tell me more. I don't
Speaker:get how that maintains the friction. Well, it helps. The students aren't
Speaker:going to be able to come up with a simple solution. They're going to have
Speaker:to say, here's the process we need to go through to solve this problem. Here's
Speaker:the inputs we have. They're going to start down a road and then they're going
Speaker:to be given more information that is going to change their thinking. They're going
Speaker:to have to pivot, they're going to have to adjust. And it's not going to
Speaker:be the simple creation of a thing, but it is going to be
Speaker:solving a problem that really doesn't have a known solution. So it really
Speaker:is their process of problem solving that becomes more
Speaker:what you're looking at as opposed to what is created. So it
Speaker:shifts the friction. Correct. And also raises the bar
Speaker:a little bit. This actually came up during the workshop today where we
Speaker:talked about. And again, I hate this term, soft skills
Speaker:and problem solving skills and that sort of thing. Isn't that soft skills?
Speaker:Doesn't that make it sound like they don't matter? I got a new term that
Speaker:I like to use, and I've heard this in a couple places, social
Speaker:skills. It has more to do with their social interaction
Speaker:as opposed to soft skills, which kind of does have a different
Speaker:nomenclature, a different perspective. Yeah. So I like social skills.
Speaker:Humanistic skills. That's way too geeky, but I like that one too.
Speaker:People skills. Yeah, it could be. Could be. But
Speaker:the. Well, I don't know. I'm gonna have to think about
Speaker:that. So social is going to extend to AI here soon. So I don't know.
Speaker:We're gonna have to think about that. The, the thing that what you were talking
Speaker:about, Rob, triggered in my mind is, is this
Speaker:idea of complex thinking and problem solving. You know,
Speaker:you're not talking about just doing some little thing. You're having to think through
Speaker:and kind of push the envelope and come up with a
Speaker:solution to some sort of a problem. Where I see
Speaker:this going with generative AI is what I've been
Speaker:calling co-cognition, or
Speaker:co-created cognition, with generative AI. So if you
Speaker:use the AI to help you think
Speaker:in ways that you might not be able to do on your own, but
Speaker:also in ways that AI couldn't do on its own, putting "thinks"
Speaker:in quotes, then you've created something with the tool
Speaker:that couldn't be created without the tool, and the tool could not create on its
Speaker:own. And I think that's where we need to be pushing. And that's what you're
Speaker:talking about, I think. Yeah. And I think this is really coming up
Speaker:with creative ideas, and that's what I want to empower people to do, encourage
Speaker:people to do, is to find ways to
Speaker:expect more, because I think we can expect more in this world of
Speaker:AI. But to begin looking at walking
Speaker:alongside the students in their processing to be able to
Speaker:evaluate their critical thinking skills, their problem solving skills, all
Speaker:of those things that we've had a hard time doing before, I think the door
Speaker:is open to that. And I think
Speaker:if we were to think about an experimental world where every faculty member gets
Speaker:to run their own experiments and how they're doing things in their classroom, we're
Speaker:going to come up with a lot of things that don't work, but in the
Speaker:process, we're going to come up with a lot of things that work. You're an administrator
Speaker:now. That's a horrifying thought. Every faculty just going off and doing what they want
Speaker:to do. Let's take a tangent on this.
Speaker:So we've been having some email conversations with our colleague France
Speaker:Bélanger, who's here in the audience and is also
Speaker:our co-author on our book Information Systems for Business: An
Speaker:Experiential Approach, which is in
Speaker:Edition 5.1 and is published by Prospect Press of
Speaker:Vermont. And the publishers are here with us,
Speaker:Beth and Andy Golub, and we appreciate that. That's our
Speaker:first commercial. That's our first commercial on AI Goes to College. And
Speaker:I was talking about something. Oh, that plug, Craig. I was talking.
Speaker:Yeah, they do pay us for that plug. Yeah, they do. They do quite well.
Speaker:So we were communicating about how
Speaker:you can, not
Speaker:AI-proof, but how you can either enhance
Speaker:or... We didn't use the word resistant. Resilient. Resilient. AI
Speaker:resilient. I like that term. And France, would you like to tell us about what
Speaker:you've been doing? She would not. So you have to come up to the microphone.
Speaker:I'm going to pay for this later. This is something that we're about to talk
Speaker:about that every professor should be thinking about. And that is the AI
Speaker:resilience of every class they teach. Because if you can cheat your way through higher
Speaker:education, what we offer is worthless. So this is one of the most important
Speaker:things that we're dealing with right now. Well, they are
Speaker:able to cheat their way through, and they're really smart students. They
Speaker:will still be able to beat what I've come up with this
Speaker:semester. So I teach an online only
Speaker:class, asynchronous. I'm not allowed to have them
Speaker:be there at a specific time, so I can't do any of the live
Speaker:exams. So I have a
Speaker:semester Project. It's real life in organizations, as you were saying. So this
Speaker:works really well. But I also want to know that they're actually reading the
Speaker:materials for each module and that
Speaker:they actually understand it. How do you do that? Well,
Speaker:can't do tests, can't do essays. I've fed every single
Speaker:essay into AI and it gets perfect grades.
Speaker:You do quizzes? Well, they download the
Speaker:PowerPoints, they get the AI to write a
Speaker:script, they run the script on the questions and they
Speaker:get full grades. So what do I do? So what
Speaker:I decided to do this semester is I have four cases
Speaker:that they discussed during the semester and
Speaker:I created with the help of the AI, I
Speaker:created knowledge questions. And so the
Speaker:questions make them use the content of
Speaker:the modules and apply it to the case,
Speaker:but really not based on case questions that you would normally have.
Speaker:And so they have to be able to use the content of each of the
Speaker:modules. For every three or four modules, I put a question
Speaker:in. They have to demonstrate the ability to think
Speaker:and then to. Now, can they cheat this? Of course they can.
Speaker:But hopefully most of
Speaker:them will demonstrate some learning. I have a question for
Speaker:you. This just occurred to me. So
Speaker:we say they can cheat with this and they can cheat. You know,
Speaker:cheating was not invented by generative AI. Ask me how
Speaker:I know. But maybe if they
Speaker:cheat with an assignment like that, they're actually learning something.
Speaker:Yeah, I was thinking about that too. So if they go to the
Speaker:trouble that every step that they need to do
Speaker:to beat my new thing, they're going to learn something.
Speaker:I use AI all the time and I know
Speaker:that I even get to the point of challenging the AI. Why
Speaker:are you telling me this? And it's quite interesting.
Speaker:If they get to that point, that's what they're going to do in their lives
Speaker:later. My problem is in an online
Speaker:class, I don't want the majority of them to just
Speaker:download the PowerPoint lectures and send that
Speaker:to me. As I've given the course.
Speaker:I think what you said is something important. If we're all using
Speaker:AI to help make our jobs more efficient and
Speaker:better at them, why shouldn't we expect our students to? So in many
Speaker:ways, guiding them in proper uses of these tools and
Speaker:how to do that is the other part of what we need to be thinking
Speaker:about. Why wouldn't we want them to use it? I want them to use
Speaker:it. I taught doctoral seminars last year, all
Speaker:year, and I encourage the students to use AI and show them how
Speaker:to use it and told them how I was using it. It's a tool that's
Speaker:going to be out there. It's something they're going
Speaker:to use and if they don't, they're going to fall behind. So I
Speaker:think, I don't think anybody in this audience is just outright banning
Speaker:AI, but if you know people that are, I'd
Speaker:push back on them a little bit or just stay out of it because it's
Speaker:not your business, but that's your choice. One thing I would say though is as
Speaker:you're encouraging students to use AI or whether it's colleagues,
Speaker:transparency I think is crucially important because if we
Speaker:do it in the darkness, we don't know what's going on, we don't know how
Speaker:it's being used and we can't talk about what is
Speaker:the viability. How good is that information that was received
Speaker:if we don't know how it was created? So talking about that
Speaker:openly is important. And I think the first way we get to that level of
Speaker:transparency is to say, yeah, we allow it, but we're going to talk about
Speaker:it. Yeah, I mean that's, that's absolutely
Speaker:the case. If we try to outright ban it, they're going to hide it and
Speaker:we're not going to be able to help them adjust the way they're using it.
Speaker:So I did require disclosures. Tell me how you're using it. And
Speaker:these are also very motivated students, because
Speaker:if they didn't get what I was putting out, they're going to
Speaker:have a lot of trouble when it comes up to their comprehensive exams. And so
Speaker:I didn't have, I don't have to give them a bunch of assignments and stay
Speaker:on top of them. I don't have quizzes. They're motivated students. And this is something
Speaker:we've talked about in the past. That's our big job is
Speaker:to change higher ed from this point based
Speaker:transactional view to something that taps into
Speaker:their motivation to learn. We don't have to give them points
Speaker:to learn. Cheats on Grand Theft Auto. Is that, is that still a cool game?
Speaker:I don't know. I don't do games well. We got, we got business professors
Speaker:making jokes about Grand Theft Auto before GTA 6. There you go. Okay. That's right. Oh
Speaker:yeah, that's right. I forgot about that. Delayed. Delayed again, by the way.
Speaker:Delayed again, by the way. But you know, we don't, we don't have to motivate.
Speaker:You don't motivate little kids to learn. They learn because they want to know stuff
Speaker:and we have to. It's A heavy lift. It's going to take a long time
Speaker:to do, but that's what we need to be doing. I'm. I think that's our
Speaker:number one job. Well, and it, you know, one of the things that is
Speaker:essential for students after they enter the business world is
Speaker:to continue learning. Because three or four years ago nobody was
Speaker:using generative AI in their jobs and now it's
Speaker:expected and you've had to figure out how to do it. You've had to learn
Speaker:how to do it. And that's just one example. New technologies come around
Speaker:all the time that change how we do things. And if you can't continue to
Speaker:learn, which really is what I hope you're learning in college more than
Speaker:anything else, you're going to be stuck. One
Speaker:thing I want to ask you about, Craig, is we were talking last
Speaker:night a little bit. Did you see that? He backed away from me. Yeah, I
Speaker:know. When he said it's going to be good. Yeah. Because you kind of asked
Speaker:me and you didn't love my answer last night, so. But
Speaker:one of the things we've seen a lot of is the importance of oral exams
Speaker:and how in an oral exam, a student sitting in front of you, you can
Speaker:get a good sense. Did they learn the things, did they know the things? And
Speaker:we're discussing that and the idea of scale came up
Speaker:and how do you scale? We need a lot of
Speaker:higher education to make money, to be able to pay the bills. We have classes
Speaker:that are larger than 10. Ten students, this is easy.
Speaker:100 students, not so easy. How does this scale to be
Speaker:able to do oral exams or to do these creative things that maybe take
Speaker:more one on one? Getting to know your student, send students. Isn'T
Speaker:even the first row of the auditorium. I teach undergrads and this is the
Speaker:other part of the conversation we've been having online about scalability.
Speaker:I don't know the answer to that. I'm really concerned about trying to
Speaker:find a way to scale, especially if you've got a highly
Speaker:constrained set of circumstances like France does. I mean,
Speaker:what I'm going to do is just base the big chunk of the grade on
Speaker:two exams they have to come in and take, and then everything
Speaker:else drives them to prepare for the exams. You
Speaker:do this stuff, you're going to do better on the examination. You
Speaker:don't want to do it. You're adults with agency. Don't do it. I don't care.
Speaker:But you don't get to whine at me afterwards if you don't like your grade.
Speaker:And I think we have. God, I'm going to
Speaker:sound so old right now. We've
Speaker:gotten these darn kids to where
Speaker:they expect hand holding and they expect us to make the
Speaker:decisions for them. You want to do the homework, do the homework. You don't want
Speaker:to do the homework, I don't care. It's one fewer that I
Speaker:have to grade. Now I'm exaggerating a little bit, but only a little bit. I
Speaker:think if we treat them like adults, they respond more like adults. And that's not
Speaker:my thought. I'm not the first one to say that. So I'm going to follow
Speaker:up with that because I'm going to put my. Thank God, because I had no
Speaker:idea where I was going. I'm an assistant professor and student
Speaker:evaluations are super important to me. I'm not getting tenure without good student
Speaker:evaluations. And it's easy for the full professor to say, who cares
Speaker:if I fail a bunch of students because they won't rise to the challenge
Speaker:when if that's me, I may not have my job in three or four years.
Speaker:You know, I had the same attitude as an assistant
Speaker:professor. I think they respond, well, as long as you're matter of fact, you don't
Speaker:have to be a jerk about it. It's like, look, if you do what you,
Speaker:what I'm asking you to do, you're going to do well in this class. If
Speaker:you don't, that's kind of on you. I mean,
Speaker:I can't do anything about that. I don't want to. I'm tired of being the
Speaker:homework police, you know, that's not why I got a doctorate.
Speaker:So, I mean, I've been doing this for a long time and I've never
Speaker:really had students push back. But. But you cannot be a jerk.
Speaker:You've got to be matter of fact, you can't be lording it over them,
Speaker:bossing them around and just kind of, look, do it, don't
Speaker:do it. I think you're crazy if you don't do it.
Speaker:And I think a key part of what you say in that is the
Speaker:importance of students understanding the why.
Speaker:If we ask students to do something and we say this is what you need
Speaker:to know to do well in this class, that's one thing. But if we can
Speaker:connect that why a little bit further and help them to understand that knowing this
Speaker:is going to help you with this other thing or
Speaker:it's going to help you to succeed in this aspect of a career path
Speaker:that takes you this way. When students understand the why, I've found
Speaker:that they are more willing to actually
Speaker:do the hard thing, because they see the payoff at the end. Not just busy
Speaker:work. My professor asked me. Right. And that's another thing I think we need to
Speaker:work on. We need to lean out a lot of our classes. Things get added
Speaker:and added and added and added, and it's not
Speaker:surprising that students don't see how it all fits together. And
Speaker:so maybe we need to cut back on the number of topics or.
Speaker:Okay, you need to really understand this stuff. Well,
Speaker:these four things, when I used to teach database, it was like, you need
Speaker:to understand conceptual modeling, logical modeling, and SQL.
Speaker:We're going to cover a bunch of other stuff. You need to know what it
Speaker:is so you can go look it up or ask somebody if you hear the
Speaker:term. And that kind of thing works well because it's like, look,
Speaker:when you go out, this is what you're going to do. And so you need
Speaker:to know this stuff. But we can't say that about every one of
Speaker:the, I don't know, 50 or 100 topics we have in a class, because it's
Speaker:kind of not true. And this is really hard in the Intro to Information
Speaker:Systems class, because in is we could go
Speaker:all the way back to teaching about how a mouse works. At what point do
Speaker:we no longer need to talk about how does a mouse work? What are the
Speaker:inputs and the outputs? And I think that's true even beyond those topics. But in
Speaker:that intro class, it can get really, really big if we're not purposeful about saying,
Speaker:what do students absolutely need? What do they really need to
Speaker:know? An exercise I learned about that one university
Speaker:was pushing their faculty to do. I'm not sure they did it across the board
Speaker:or just in a subset, but they had their professors
Speaker:throw away their syllabus and create brand new
Speaker:ones, so that way they weren't anchoring on.
Speaker:This is how we have always done it.
Speaker:How do I tweak it to make it work? But it was truly building a
Speaker:new set of scaffolding, if you will, or topical areas
Speaker:based on what they really wanted to do in that course.
Speaker:That's a great idea. Although I think we still need to teach the mouse
Speaker:because I just spent like 45 seconds wondering why the touchscreen
Speaker:on my Mac wasn't working well. But then you use the touchpad, Craig, and not
Speaker:a mouse. Yes, I know, but it's close enough. Close enough. So
Speaker:how do you pull this off? You're an administrator now.
Speaker:What do you do for faculty that encourages them to
Speaker:experiment? That encourages them to rethink
Speaker:their entire course? That's a lot of work. Well,
Speaker:a couple of things that I've done.
Speaker:Have I convinced people to redevelop their class? I don't think I've succeeded with that.
Speaker:But one of the things I think is important
Speaker:is I challenged my faculty to do one thing that
Speaker:was purposeful, stepping into AI, and to feel like they
Speaker:could do it without the risk of poor teaching
Speaker:evaluations, assuming they were purposeful in
Speaker:learning from what it is that they changed. So creating a
Speaker:safe space for people to implement change,
Speaker:I think is important, because if you punish
Speaker:failed experiments, you won't get any change. So we have a question
Speaker:from the audience. I'm going to answer the other part of that question, and then
Speaker:we'll get our question. Was there another part of that question? It's a real quick
Speaker:one. The other thing I would say is we need to encourage
Speaker:faculty not to be in silos and to get people talking
Speaker:to each other. So we're not all reinventing the same wheel. Yeah,
Speaker:absolutely. For both of you. First
Speaker:of all, I want to congratulate you for jumping onto the
Speaker:AI train really fast.
Speaker:I was on camera and you weren't. I don't want to be on camera. Okay,
Speaker:so congratulations for jumping on the AI train really
Speaker:early. Now, what about other faculty?
Speaker:So I know a lot of faculty who are not necessarily as keen
Speaker:on keeping up with the AI
Speaker:everything. How much of an effort is it? How much should they get involved
Speaker:if they're not? And is AI just going to become
Speaker:another tool? So are we ready for a
Speaker:fishing metaphor from somebody who doesn't fish? So have you ever
Speaker:watched them fish for tuna? They slam
Speaker:the hook into a big, big old school of tuna, and they try
Speaker:to yank a giant tuna fish out. It's
Speaker:a very violent thing. It's forced. It's really
Speaker:kind of makes me not want to eat tuna. And then there's
Speaker:fly fishing. Does anybody fly fish? Fly
Speaker:fishing? You put the fly out there, and if you have the right fly, it
Speaker:looks attractive, and you kind of slowly bring it in and
Speaker:twitch it around, and it's like, oh, that's a bug. I
Speaker:like bugs. And eventually your trout (I think you fly fish
Speaker:for trout, I don't know) comes and hits the
Speaker:lure. If you do it right, and then you reel it in. I think
Speaker:that's the approach. This is fly fishing. This is not Tuna fishing.
Speaker:So if you have colleagues like that, show them one or two things
Speaker:they can do. Creating exam questions is great
Speaker:because we all hate doing it. AI is pretty good at it.
Speaker:Especially if you tell them you don't have to take every question AI comes up
Speaker:with. It'll come up with 10 and two of them are good. But now you've
Speaker:got two questions you didn't have before. But pick that low-hanging,
Speaker:pain-in-the-butt fruit: okay,
Speaker:just do this one thing. That's it. Just do this one
Speaker:thing. Don't try to drag them kicking and screaming, because (a) it won't
Speaker:work and (b) it's way too much work for any of us to do. So
Speaker:just show them, show them. Do a little bit more
Speaker:and then they'll find something else and they'll do a little bit more and they'll
Speaker:discover these uses for AI because that's what we did,
Speaker:you know. Have you heard this question before? I'm going to ask
Speaker:Hazel. What do you think my very first chat with
Speaker:ChatGPT was?
Speaker:Get your mind out of the gutter.
Speaker:Okay?
Speaker:Yeah, yeah, sure. So it was to write a
Speaker:poem about my cat Taz. My cat
Speaker:Taz is this little gray fuzzball. Everything she does is funny. You've probably known
Speaker:cats like that. She lays down on something and
Speaker:it's just funny. And it's not just me, it's not just her father. Everybody
Speaker:thinks she's funny. So I wrote this. I said, write a poem about
Speaker:this cat Taz. And it wrote a poem about the cat Taz. But it was
Speaker:pretty generic, you know, it was kind of. Yeah, it was about a cat, but
Speaker:it wasn't about Taz. So then I said, oh, she's a gray and white
Speaker:cat. She's small, she squeaks instead of meows. She likes
Speaker:to do this. And all of a sudden I got this poem about
Speaker:Taz. Not about some random cat, about Taz.
Speaker:That told me you have to have some context and some
Speaker:specificity when you prompt. I started learning
Speaker:and then it just went from there. And I think that's what you do. Get
Speaker:them to do something that's either fun or
Speaker:that low-hanging fruit, something that solves a problem they
Speaker:just hate dealing with, and let them come along.
Speaker:And if they don't, they don't. That's really. That's their
Speaker:loss. Let them be like that.
Speaker:So Craig, I think that works for 50,
Speaker:maybe 75 percent of people. By
Speaker:not staying in our silos, we can get people to come along. I think at
Speaker:some level, administrators need to
Speaker:at least ensure that every class is AI resilient.
Speaker:Because if students can cheat their way through someone's class today, that's a problem.
Speaker:If we knew a faculty member just let people copy off everybody's exams,
Speaker:we would have those hard conversations and say, your class
Speaker:is not doing this thing right. We need to
Speaker:work on fixing that. We need to be looking at every class and making sure
Speaker:the same holds there. As for where they learn the
Speaker:AI tools, I'm not convinced it has to happen in every single
Speaker:class. I think it probably needs to be happening in every single major because
Speaker:there is some knowledge that is important to learn in order
Speaker:to be able to use those tools later on, perhaps
Speaker:in another class. So I think part of it is: where are
Speaker:the tools being learned in the major? But the bigger part,
Speaker:whether it's through the annual evaluation process or something else,
Speaker:is some sort of curriculum mapping that helps us understand
Speaker:that every class
Speaker:does not let you cheat your way through. And I think that's a different
Speaker:thing than getting them to use AI. But I
Speaker:agree with you 100%. They're really cheating the entire
Speaker:institution if they don't at least make their assignments AI
Speaker:resilient. If you have a question, please come ask us because
Speaker:otherwise you have to deal with the things that Craig and I think we want
Speaker:to talk about. Or what are you saying? So we do have another question.
Speaker:This one is from Rosetta Romano, our president of AIS SIGED.
Speaker:She sent us an email a few days ago
Speaker:that basically asks who's paying for all this AI?
Speaker:How many of you are personally paying for one or more AI tools?
Speaker:Right. Only 41 people raised their hand this time.
Speaker:Yeah. So a lot of us are. And now some of us have funding
Speaker:where we can get the tools, but a lot of faculty
Speaker:don't. And she's concerned about
Speaker:what happens in a cost constrained institution
Speaker:around these AI tools because the numbers start to get big pretty quickly.
Speaker:Rob, we talked about this last night. You had an interesting
Speaker:story. So I think this is a really good question. And there's
Speaker:two sides of this that I want to talk about. So one is
Speaker:a university that I won't name because I'm going to get some of the details
Speaker:wrong. I don't need my friends there coming up to me and saying something. But
Speaker:they started a pilot program that required interested
Speaker:faculty and it probably included staff to report
Speaker:monthly about how they were using it so they could learn about use cases along
Speaker:the way. And so they could see if they were
Speaker:actually using what was being paid for. Because the fear was, we're going to pay
Speaker:for these licenses, and then half of them aren't going to be
Speaker:used anyway, so why are we paying for them? Right? And so then they could
Speaker:deploy them to somebody else who would use them. And so over the course of
Speaker:an academic year, they went from about 70, if my recollection of the
Speaker:story is correct, people in this program to 700. And
Speaker:with the, you know, reports being they were learning use cases, because that's one
Speaker:of the big struggles we had. Microsoft came to Washington State University to
Speaker:teach us how to use Copilot, which I was like, great. They wanted everyone to
Speaker:go to this training. They invested a lot of money into training. And when I
Speaker:went and asked them, once we were done, about use cases,
Speaker:their answer to me was, you have to figure those out on
Speaker:your own. Which I'm like, if we're going to bring in products and
Speaker:encourage people to use them, and we don't actually give a
Speaker:handful of use cases that are going to increase productivity, then
Speaker:what sort of a tool are we really giving people?
Speaker:I think that's a really clever way to do it. Although the
Speaker:numbers get big, you figured out it's probably $160,000 or
Speaker:so per year, which a school of
Speaker:the size of the school that you are not naming can afford,
Speaker:but a lot of schools can't. That's a chunk of money. You
Speaker:know, on the other hand, especially for business
Speaker:faculty, you can probably afford
Speaker:20 bucks a month if it makes your life easier. There are other
Speaker:faculty that don't get paid so well, and maybe at smaller
Speaker:institutions, community colleges, but I don't know. I'm willing to pay
Speaker:a little bit, but I know that's a privileged position. You can also go
Speaker:pretty far with the free tools. You can. But here's the caution and here's
Speaker:why, you know, you call me Mr. Copilot when we talk, because I talk about
Speaker:it all the time. Copilot has been reviewed by Washington State
Speaker:University's attorneys and has been put on the acceptable use
Speaker:matrix. They have verified that it is FERPA
Speaker:compliant, HIPAA compliant. All of these compliances that we
Speaker:need to care about. None of the other tools are. Yeah, you
Speaker:absolutely don't want to share anything personally identifiable about your
Speaker:students. I wouldn't do it about employees either, and
Speaker:that's a bad idea. But that still leaves a lot of use cases.
Speaker:I'm sorry, but every time I've tried to use
Speaker:Copilot, it has failed miserably.
Speaker:So I don't know. It's not my favorite tool. But I actually
Speaker:canceled my OpenAI subscription this last month
Speaker:because they're starting to talk about monetizing
Speaker:by advertising to me. And it scares the heck out of me with
Speaker:Gemini, because they're tied into the entire Google ecosystem
Speaker:that already knows everything about me. I really, really, really think this
Speaker:whole making money thing is going to be an
Speaker:interesting conversation for the next year, because nobody
Speaker:is making money with AI right now. And if we think
Speaker:back to social media and all these sorts of things and how they make money,
Speaker:it's based on the data that we give them. So, no, absolutely.
Speaker:It is going to be scary because I was reading about the mental health of
Speaker:teenagers the other day, and a lot of teenagers are picking up these
Speaker:apps and they are using it as their counselor, and they are sharing so much
Speaker:information with them that, I mean, it's actually led to
Speaker:encouraging some to do some terrible, terrible things to themselves. But
Speaker:more troubling to me is the machine
Speaker:now knows even more about these children. We don't
Speaker:have safeguards and safety rails in place, and that's a problem. There's no doubt about
Speaker:that. I do want to plant a seed for some of you. It
Speaker:is not all that difficult to run some
Speaker:large language models locally. You get something like LM
Speaker:Studio, you need at least an okay computer, but you
Speaker:don't need some $10,000 workstation to run
Speaker:it. It's something that you can think about. They're not the frontier
Speaker:models, but take the one that OpenAI released:
Speaker:their open-weight OSS model was roughly
Speaker:GPT-4o in terms of its capabilities. Llama
Speaker:has a model that'll handle a lot of context. They're not as good
Speaker:as the frontier models, but they're not bad. Facebook just announced they're going
Speaker:to quit releasing Llama because they want AI that makes them money. I'm
Speaker:shocked. Well, I've already got it, so. All right.
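For anyone who wants to try Craig's suggestion of running models locally, LM Studio's built-in server speaks an OpenAI-compatible API on your own machine, by default at http://localhost:1234/v1. A minimal sketch, with the port and the model identifier as assumptions to check against your own install's Server tab:

```python
# Sketch: querying a model served locally by LM Studio over its
# OpenAI-compatible endpoint. The port (1234) and model name are
# assumptions; nothing here leaves your machine.
import json
import urllib.request

LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def build_request(question: str, model: str = "local-model") -> urllib.request.Request:
    """Build the HTTP request; nothing is sent until urlopen() is called."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(question: str) -> str:
    """Send the request to the local server and return the reply text."""
    with urllib.request.urlopen(build_request(question)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires LM Studio's server running with a model loaded):
#   print(ask("Draft three exam questions on conceptual data modeling."))
```

Because the model runs locally, this sidesteps the data-sharing and monetization worries the hosts raise about hosted tools.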
Speaker:Any last questions from the audience? Yes, sir.
Speaker:One question, because you mentioned it, so it came to
Speaker:my mind. There's a huge discussion coming
Speaker:about companionship, where the AI is
Speaker:pitched as something like a trainer or, I don't know, a
Speaker:personal mentor or something like that. And
Speaker:you see it also in the academic world. There are
Speaker:professors training an
Speaker:LLM with a RAG system on all the information about
Speaker:administration, education and so on.
Speaker:And they just auto-reply to students; they
Speaker:just say, okay, I have no time to email every student
Speaker:asking where's the deadline, so it doesn't escalate. So it's
Speaker:also a kind of companionship, the other way around. So
Speaker:what do you think about that? So there are two pieces
Speaker:to that, right? And let me comment first, if you don't mind.
Speaker:The first piece is the companionship piece. The true
Speaker:trying to find a substitute for human companionship. And I'm going to put that one
Speaker:aside for a second. But the second one with
Speaker:kind of loading up AI with all of the policies and that
Speaker:kind of thing, I am 100% for that. And
Speaker:the cranky old man in me would say they're not reading the syllabus, why should I read their
Speaker:emails? But really I think it's a very efficient way
Speaker:for them to get a 24/7 response, because
Speaker:I go to bed ridiculously early and I get cranky students, because they
Speaker:email me at 10 o'clock at night and they didn't hear back until 4
Speaker:o'clock the next morning. And it's some question that was
Speaker:actually answered in the learning management system. I think it's great
Speaker:now, you know, you need to be careful because even with these
Speaker:retrieval augmented generation systems, they can make mistakes and that
Speaker:sort of thing. So you have to be a little bit careful around disclaimers and
Speaker:all of that. But I don't see any problem with that. The
Speaker:companionship piece, the human
Speaker:companionship substitution piece.
Speaker:No, I've not been drinking. Maybe I should have been.
Speaker:That's a tougher question. So Rob, what do you have to say about that? So
Speaker:I think one, there's a really interesting research question in there about
Speaker:the dark side of IS. And that is, in this world we live in with,
Speaker:especially since COVID, increased amounts of loneliness, the pandemic of loneliness, you
Speaker:hear people are looking at this as a solution to that, with
Speaker:unintended consequences. So I think there's a lot of really interesting things to be
Speaker:understood because we don't know. And I think that's the whole thing with a
Speaker:lot of these AI things. I will say it's no different than any other new
Speaker:technology. When technology comes out, we're very quick to jump
Speaker:on the benefits of what we can receive from them without fully
Speaker:understanding and knowing what the downside is or those unintended
Speaker:consequences. And I do think companionship is one of those places
Speaker:because we put them right into the midst of human life, meeting people where
Speaker:they are. It could be good for a certain number of
Speaker:people, and that might be a good thing. But how bad is it for
Speaker:the people it ends up being really bad for? I don't know. I know
Speaker:that's a really tough and interesting question. What I would like to see happen
Speaker:is some group create
Speaker:properly guardrailed AI companions.
Speaker:Because if you had to tell me, should
Speaker:we have AI companions or should we not? I would come down on the side
Speaker:of we should because I think there's so many lonely people out there
Speaker:that they're probably being helped more than the number of people
Speaker:that are being hurt. But I don't have data on that.
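Back on the policy-assistant side of that audience question: a toy sketch of the retrieval step behind loading an LLM with course policies. The word-overlap scoring here is a stand-in for the embedding search a real retrieval-augmented generation (RAG) system would use, and the policies are invented for illustration:

```python
# Toy RAG sketch: retrieve the most relevant policy snippet for a student
# question, then build a grounded prompt around it. Word overlap stands in
# for real embedding-based retrieval; the policies are made up.
import re

POLICIES = [
    "Late work: assignments lose 10% per day, up to three days late.",
    "Exams: two midterms and a final; the lowest midterm score is dropped.",
    "AI use: generative AI is allowed on homework if you cite the tool.",
]

def _words(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q = _words(question)
    return max(docs, key=lambda d: len(q & _words(d)))

def answer_prompt(question: str) -> str:
    """Build a grounded prompt, including the disclaimer the hosts suggest."""
    snippet = retrieve(question, POLICIES)
    return (
        f"Answer using only this course policy:\n{snippet}\n\n"
        f"Question: {question}\n"
        "If the policy does not cover it, say so and refer the student to "
        "the instructor. Note: answers may contain errors; the syllabus "
        "and the learning management system are authoritative."
    )
```

A real deployment would pass the output of answer_prompt() to a model and keep the disclaimer, since, as noted above, even RAG systems can make mistakes.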
Speaker:We have a question from the audience in
Speaker:response to that:
Speaker:if technology is the thing that affected civilization, the thing
Speaker:that drove people apart, why
Speaker:should we accept AI as a solution to that problem?
Speaker:Why not accept it? I mean,
Speaker:you know, that's a really deep and interesting question.
Speaker:No, it does. And I think it would be much better if we addressed
Speaker:the underlying cause, but we're not. So,
Speaker:you know, is it better to at least have a band-aid? I don't know.
Speaker:I mean, I'm asking the question. I don't really know. And I think
Speaker:we're getting the high sign. Yeah, we're getting the signal.
Speaker:It's time for us to wrap up.
Speaker:Rob, any last thoughts? I will say again, we need to
Speaker:understand these technologies, the impact they're having in the classroom,
Speaker:the impact they're having on our students as they use them, the impact they're having
Speaker:on society. And I think there's a lot of great opportunities to understand that
Speaker:and doctoral students who are listening. I think there are some great dissertations
Speaker:that could be written right now, and I'd be willing to bet that if you
Speaker:did a good job, MIS Quarterly, ISR, the top journals, would listen
Speaker:to what you're doing. Yep. All right, that's it. Thank you
Speaker:for joining us on the first ever live stream of AI Goes to College. And
Speaker:thanks again to the AIS SIGED folks for
Speaker:inviting us to do this. Bye.