The Ethical Use of AI in Academia: A Conversation with Carlos I. Torres


Imagine walking into a classroom where AI isn't the elephant in the room but a welcome partner in learning. That's exactly what's happening in Carlos I. Torres's information security classes at Baylor University. Instead of joining the chorus of educators crying "Ban AI!", Torres is asking a more intriguing question: what if we taught students to dance with artificial intelligence rather than fight against it?
In this fascinating discussion, Torres pulls back the curtain on his groundbreaking approach. He's not just teaching information security; he's reimagining how students learn in an AI-powered world. His students don't hide their use of AI - they showcase it, document it, and most importantly, learn to think critically about it.
But here's what makes this conversation truly compelling: Torres isn't just preparing students for exams; he's equipping them for a future where AI will be as common as smartphones are today. As we explore the ethical tightropes and practical challenges of this approach, one thing becomes crystal clear: the future of education isn't about fighting AI - it's about learning to harness its power while keeping our human wisdom firmly in the driver's seat.
Takeaways:
- The integration of AI within higher education necessitates a nuanced understanding of its capabilities and limitations.
- Carlos I. Torres emphasizes the importance of guiding students on effective AI usage to enhance their learning experience.
- Engaging students with AI prompts fosters critical thinking and deeper engagement in research assignments.
- The assessment of student work should encompass both the final product and the process of interacting with AI tools.
- Ethical considerations surrounding AI usage in academia are paramount, necessitating discussions around transparency and integrity.
- The future workforce must be equipped with skills to supervise AI agents, ensuring their outputs are trustworthy and effective.
Companies mentioned in this episode:
- Washington State University
- Baylor University
Mentioned in this episode:
AI Goes to College Newsletter
Welcome to another episode of AI Goes to College, the podcast that helps you figure out what in the world is going on with generative AI and how it's going to affect higher education.
Speaker A: I'm joined once again by my friend, colleague, and co-host, Robert E. Crossler, Ph.D., from Washington State University. Rob, we have a guest today.
Speaker B: Thank you, Craig. I am super excited to welcome Carlos I. Torres, who is not only a friend of mine, a former doctoral student of mine, and a co-author of mine, but also an assistant professor in the Information Systems and Business Analytics department at the Hankamer School of Business at Baylor University. He received his Ph.D. in Information Systems at Washington State University a few years back. His research focuses on the intersection of humans and technology, with a particular focus on information security, and he is very creative. I really do look forward to talking to him about how he is using AI to help him be a better instructor in the classroom.
Speaker A: Carlos, welcome.
Speaker C: Thank you for having me. Craig and Rob, it's a pleasure to be here with you today and talk about what I do, or what I haven't done yet, with AI.
Speaker A: All right, so give us the big picture, Carlos. How are you using AI in your classes?
Speaker C: I teach information security, right? That's the class I teach, Introduction to Information Security. In that class, there are a few things you can do with AI and things you cannot do with AI. Like the labs: you really can't, you have to do them on your own, right? Because there is no way to have the AI actually do those for you. But for written assignments, for research assignments, for the questions we ask, there are things the students can and do use AI for, and that's where I'm actually using AI with them. I tell them, these are the types of questions I want you to ask the AI; I want you to check what prompts you are using. So I tell them: this is the question I want you to answer, but I want you to provide what the AI gave you and how you actually prompted it to provide the answer. You provide the first answer and the last answer, and all the prompts you produced in the middle. And that's how I'm doing things with AI with my students.
Speaker B: So, Carlos, let me interrupt you for a second. When you do that, how do you assess what they turn in? Are you assessing how they prompt? Are you assessing how their prompts change? Or are you assessing the final document they ultimately create?
Speaker C: I assess the final document and how they came to that final document through the prompts they used and the type of answers they asked it to produce. And what I have found is that I get really interesting responses from the AI, too, some similar across students. Some students do more research; others produce really shallow answers. It really depends on how they actually prompt the AI to produce the answer. Sometimes we get very shallow responses, and you see the prompts they asked: basically one or two prompts, almost copying and pasting what I told them they should do. Sometimes they go really deep into the questions, and that produces a much richer response on the assessment they do.
Speaker A: Carlos, let me play a little bit of devil's advocate here. Why not just ban AI use? Why not just tell them, don't use AI at all? Wouldn't that be easier for you?
Speaker C: I don't think we should ban AI; that's personally my position. I understand why some classes may need to ban AI, because students need to learn something like coding, and you may say, okay, for this one, you don't want them using AI. But when it comes to my class, I don't think it's needed. To give you an example: for my labs, they have to go into the lab and do the work on a virtual machine, so there is no way they can have the AI doing things for them in those labs. And on assessments, I can basically give an exam in class; the AI won't be able to answer the exam for them. So I think I can combine that in my class because of its content. When I want them to do some research, when I want them to write an essay, they can use AI. And I think they will use AI no matter what, so why should I just ban the use of AI?
Speaker A: Right. As you were talking through the differences in the range of answers you get, it seems like one of the things you're doing is teaching them how to use AI effectively. So it's not just, okay, they're going to use it anyway. It's, hey, you're going to be using this when you get out into the workforce, so let's make sure you know how to use it the right way. Is that one of your goals?
Speaker C: That's one of the things I try to do, yes. I try to make them understand that they can use it efficiently, but also that they have to validate the answers they are getting from the AI, that they cannot just believe everything because the AI said so. They have to really know what it is and what it is actually referring to. We talk in class about any hallucinations they may have found while producing the things we were discussing. So, yeah, I mean, I think it's part of what we should do: taking advantage of the technology.
Speaker A: Yeah, absolutely.
Speaker B: So, Carlos, as you've been doing this and teaching the information security class, what are you doing to discuss governance? As we look at students going into the workforce and the deployment of AI, is that coming up in any of your conversations in class? And how do you talk about the importance of getting it right when AI is being deployed in the workplace?
Speaker C: So that's a great point. I haven't gotten to that point with the students yet, of getting deployment right. But that question is on my mind, and those are the things I'm getting into in my research: how you can, from a security perspective, get this deployed right. What are the elements you really need to take into consideration?
Speaker A: That's a big question. I'm not sure anybody's getting that right just now.
Speaker C: And that's why it's a good research question, right? I mean, what are the best ways to actually implement them?
Speaker B: Well, and I would say that becomes a very interesting conversation with students, even if we don't know the right answer. Because as we challenge students, as we help them begin to say, you know what, industry doesn't have this figured out yet, and you're about ready to jump into a job and a role where it's a moving target: how do you participate in a conversation when there currently is no perfectly right solution? I think it's an exciting opportunity for our students to enter this brave new world.
Speaker A: I agree. Well, employers are absolutely looking to our students, especially in our field but more broadly too, to help them figure out generative AI. I've had advisory board members say that over and over again: we're looking to the new graduates because we don't really know this stuff. I want to go back and see if we can't dig into the philosophy behind your approach, Carlos, and if I'm wrong here, tell me; I'm going to make some assumptions and see if I can make this relevant beyond security. What I hear you saying is that you're separating the learning from the assessment. You've got activities where you want them to learn about generative AI, learn about whatever concepts they're researching, whatever it might be. But then you've got ways of doing assessment that still might involve learning, like the labs. We often bundle learning and assessment together, and that gets a little tough to do with generative AI if you don't do it in a very skillful way. So I like your general approach: it's all learning at some level, but, you know, when I'm going to test you, we're going to be face to face and you're not going to be able to use AI. If I'm going to assess you in the lab, you're not going to have AI to do the configuration or whatever it is you need to do in the lab. But when you're in the process of trying to learn this stuff, you can use generative AI appropriately. Now, am I anywhere close to what you're actually thinking, or am I totally making that up?
Speaker C: I think you are close. For the assessments they produce with AI, I also have parts of the assessment for the class that I know they won't be able to use AI for. Those assessments are also part of the grade they get; I call them engagement, right? Those are things they have to do beyond what they do in class or beyond the exam; those are engagement points they earn with those assessments. Now, I can tell you a story about something very interesting I did last semester with my class. I asked them one of those questions where, okay, you can use AI. And then I told them: you're going to ask the AI to grade your assessment and report the grade that the AI gave you.
Speaker A: How did that work?
Speaker C: So what's your guess?
Speaker A: AI is really famous for giving B's and B-pluses regardless of the level of the work.
Speaker C: Yeah, they were all B's and B-pluses.
Speaker A: Yeah, AI doesn't want to quite go overboard, but, you know, this is solid B work right here.
Speaker B: Well, here's a question. Two questions, actually. One, how did your students respond? And two, if the AI is right, are we as faculty members too generous with how many A's we give out?
Speaker C: So this is what is interesting. I told them, okay, how would you grade your work if I asked you to produce it yourself, right? I mean, without using AI. And of course, they all say, I would give myself an A. And if they are very honest and say, I didn't work that much on this, they would probably say, I'm going to give myself an A-minus. But then they come back and say, it is interesting: the AI is producing the work, the final output was actually produced by the AI, and yet when I asked it to grade against the same prompt, the original prompt, this is the topic you have to research, the AI graded its own work very harshly, right? So for them, it was interesting. They were kind of stunned: what's going on here? I mean, why can't I get an A out of something I thought was an A? So that was instructive, in that they were like, okay, probably what the AI says is not entirely right. I have to be very careful in evaluating the output of the AI and try to introduce my own thoughts, my own understanding of the topic, my own creative thinking, my own human element, to actually make this really valid.
Speaker A: Right. Interesting, interesting. Did the students feel like it wasn't fair, or was it more, maybe I didn't do quite as good a job as I thought I did?
Speaker C: So in the conversation I had with them, first they thought it was interesting that the AI wouldn't give an A to that work; it was produced by the AI, it was complete. So that was the first question. They weren't really asking whether it was fair. I graded the assignments based on what I wanted them to write, on the level and type of prompts they used, and on how deep they wanted to go in the research, using AI to produce the answer; I didn't use the grade the AI provided. I was trying to tell them, let's compare these, right? And the idea they took away was, okay, I have to be more careful about checking the output I'm getting from the AI, rather than just copying and pasting and saying, okay, this is done.
Speaker A: That's a great outcome. AI is really good at critiquing things. You know, it's kind of a joke that it always gives a B, but if you ask it to critique certain aspects of a paper or whatever it is you're doing, it's pretty good about giving some reasonable feedback. I put a full paper in and asked it to act as a senior editor at a top journal, and I named the journal, and to give me a review of the paper and make a decision: would you reject it, revise and resubmit, minor revision, major revision, whatever. And it was about 75 or 80 percent of what I would have expected. When you've been doing this as long as we have, and been rejected at top journals as many times as I have, you kind of know what's going to get criticized. But there were a few things that hadn't occurred to me, and, you know, that can save a fair amount of time.
Speaker B: Yeah, this comes up time and time again, Craig, and I think it's an important takeaway about generative AI: how you use it and what you ask it to do is where you get the value. And one of those places is in critique. Even for something I think we're going to talk about later in this episode, preparing for class and preparing your syllabus, there are ways it can be super valuable and just cut down on some of the busywork of critically evaluating anything from a research paper to a homework assignment that students may or may not understand. It can be very, very valuable in those places.
Speaker A: Well, that's a good transition into the next question. Carlos, how do you use AI to help you prepare for and structure your classes?
Speaker C: So I use AI in two different ways when it comes to preparing my classes. The first is that I talk to the AI about topics or questions I want to ask the class, about the content I'm going to cover, about things that may be happening. The other way I use AI, which is interesting, is in producing my exams. I ask for potential questions I can ask about a topic, depending on what type of exam I'm going to give. And I get some very good material; sometimes I get very tough questions that would probably make people fail. But in general, those are the things I do with these types of tools.
Speaker A: Well, you know what's great is you can crank out so many questions that it doesn't matter if some of them are terrible. So what, you need 20 questions? You ask it for 30 or 40, and, you know, if you throw away a bunch of them, it's no big deal. I've also asked it to critique my questions: tell me where these might be unintentionally confusing, or might have some redundant answers, or whatever it might be. I'm curious, do you use ChatGPT, or do you use Claude, or what do you use?
Speaker C: I use ChatGPT. I also use Copilot.
Speaker B: Okay, so do you have a favorite for different things, Carlos?
Speaker C: I use Copilot because that's kind of the authorized one we have here at Baylor. So I use Copilot for those things, and I also use ChatGPT for my own things.
Speaker A: Yeah, my thinking on that has changed a little bit. I used to recommend Poe to everybody.
Speaker C: Okay.
Speaker A: Because you can use all the different models, like thirty-something different models. But boy, I just use ChatGPT almost all the time, and then Gemini for some quick stuff. Poor Claude has kind of gotten left behind for me, which I used to love.
Speaker C: Claude. What about Perplexity? Have you used it?
Speaker A: Yeah, I was a huge fan of Perplexity at first, and then I've had some bad results with it. It hallucinated on a deep research report, so I'm trying it again, trying to go back to it. But with deep research in Gemini and ChatGPT, you know, they took over a lot of what I used to use Perplexity for. Perplexity would cite its sources; well, now you can get ChatGPT and Gemini to do the same thing. So I don't know, it'll be interesting to see how it all plays out.
Speaker C: Have you tried the GPT Scholar one?
Speaker A: No, I haven't used that one.
Speaker C: I want to hear from somebody who has used that one.
Speaker A: Rob, have you used it?
Speaker B: I haven't played with that one yet, so I need to put it on my list.
Speaker A: The ever-growing list. So let's switch gears a little bit. Rob, unless you had any follow-ups?
Speaker B: Exactly what Carlos is talking about. As I've had hallway conversations and talked to different people, this is where a lot of professors are right now. They're trying a few things, there's some stuff they're getting comfortable with that's working, and they're trying to find creative ways to get students engaging with AI in the classroom in a way that prepares them for the changing marketplace. Carlos is right in line with a lot of the conversations I'm having.
Speaker A: Rob, I don't know if you did that on purpose or not, but it is a perfect setup for the next question. Carlos, what do you think students need to know about AI as they enter the workforce? What skills, what capabilities do you think they need?
Speaker C: So generally, I think they need to know how to use it, and that they have to use their own rational judgment and reading to complement it. But also, coming from a security mindset, the ethical use of AI is an important thing to learn: what is allowed, what is not allowed, what is right and what isn't right. And that ethical use of AI is a big question; I mean, what may be ethical for someone may not be ethical for somebody else. So it is a huge challenge to try to define what the ethical use of AI is. I mean, it's not about telling them what is right and what is good, but making sure they understand, from their own values, principles, and education, what the ethical use of AI is.
Speaker A: That's a great point. Have you heard about this new concept called post-plagiarism, where plagiarism isn't really a thing the way it used to be?
Speaker B: I had not heard of this. How is that defined? And what are people saying is the path forward in that world?
Speaker A: Well, I literally just ran across this this morning, so I have not had a chance to dig into it. But it got me thinking: say I take some idea and work with AI to produce it, and AI writes the final product, let's say a short report. Along the lines of what Carlos has his students do, I prompt the AI, get it to do some research, look at what it comes up with, and tailor my next prompt based on a combination of what it's saying and what I'm thinking. Then at the end of the session I say, write this up as an executive summary, and I copy and paste that executive summary and put my name on it. I'm not so sure that's really plagiarism, because I helped create that document, just as if I had a conversation with the two of you, you helped me out a little bit, and then I gave it to some ghostwriter to write. I mean, I'm not saying it is no different; I'm saying I'm wondering whether or not that's really plagiarism. This came up yesterday; I was in a SIM (Society for Information Management) meeting, and the same thing came up: is it really unethical to do that?
Speaker A: So what do you all think?
Speaker B: Where I still come down on this, Craig, and I'll let Carlos jump in if he's got a different thought, is that transparency is still important. If I used generative AI technologies to create that document, I can still take credit for it and put my name on it, but somewhere it should be communicated that I did it with the assistance of generative AI, so that the person consuming it knows how it was written. And I guess one of the things I communicate to my students is that transparency is important: let people know how this happened. So when you give a document to your boss and he takes it into a meeting, he knows, when he gets put on the spot, that it was not completely created by his interns or his junior employees. You cannot put a false front on something that is going forward as a talking point.
Speaker A: Well, and that's a smart move anyway. You know, you cover yourself a bit.
Speaker C: I was going to add that I think plagiarism is something different in that sense; it's more related to the ideas. And please take this from the perspective of an English-as-a-second-language speaker, right? Every time I have a paper that I know is going to be an important paper, I want to get it copyedited by somebody who gets in there and fixes basically the English part of it. Now, was that written by the copy editor or by me?
Speaker A: Right, especially when it helps you with wording.
Speaker C: Right, especially when it helps me with those types of things, or even if it's helping me give context to the story, like connecting one paragraph to the next. It's help, I mean, but the content, the idea, is still yours. If you are using AI as a tool, whether it's to write or to research, of course, you know, I can thank the copy editor in a note at the end, and I can also disclose: hey, I did this, and I did it with AI in this way or that way; that's the transparency part. But in the end, what really matters is how you created this thing, the idea you had, and how you produced it.
Speaker A: Right. Well, a lot of journal editors have been at least softly requiring authors to use copy editors for a while. I know some of them have a little bit of a money machine going with it as well. But no, I think that's a great point. The reason I brought this up is that I think we really need to reconsider what some of these concepts we've had in our brains for a long time really mean in this new world. I'm trying to get my head around what plagiarism might be. You copy and paste: you do a one-shot prompt, copy, paste, or you get something off the Internet, copy, paste. Okay, that's plagiarism. But there's a lot of gray area there that I think we're going to need to struggle with and have some pretty honest conversations about. Because, Carlos, you kind of said this early on in a different way: it's not black and white. It's not just ban it, or have carte blanche to use it however you want. We've got to teach students that moral decision making you brought up earlier. So I think that's a great point.
Speaker B: And as we talk about this, Craig, I've got a term I like better than plagiarism, and that's academic integrity. So if there are certain expectations for communicating transparency, or tell me your prompts, or whatever that is, it becomes academic integrity. Whether or not it's plagiarism doesn't really matter. These are the boundary conditions we should be playing in, and defining that, and then playing inside that sandbox, is what I think the way forward is. It's just: what does defining that look like right now, from an academic integrity perspective?
Speaker A: And academic integrity is much easier to spell than plagiarism, 100%. So I think that's reason enough to just change the term.
Speaker B: So.
Speaker A: All right. Well, Carlos, any last things you want to share about what you're doing before we kind of go into a more general discussion?
Speaker C: So I have one question for you. I read one of your pieces about deep research. So what do you think? Have you used those, whether ChatGPT or even the specialized AI tools for literature reviews, to search the literature and then bring up ideas about something?
Speaker A: Yeah, absolutely. So I was so impressed with deep research that I now pay the $200 a month for ChatGPT Pro, which gives me some insane number of uses. It's weird: I get like 120 or 130 with the really good model.
Speaker C: Okay.
Speaker A: And then I get another 120 or so with the lighter, somewhat watered-down model. It is fantastic for getting up to speed in an area quickly. Now, it is not any substitute for doing a literature review and really thinking through the literature. But if you're just getting started in an area, in 20 or 30 minutes you can have a report that will cover all of the main points in whatever area you're looking at. Like, I'm working on a paper that's using service-dominant logic, a marketing theory that basically says services, not products, are the core of economic exchange, and I'm using it as an overarching theory for the paper. Well, I didn't know anything about it, so I needed to come up to speed on it quickly, in a pretty short period of time. I have the basics now. I still need to go read the seminal papers, read the recent articles in the good journals, and do all of that kind of thing. But now I have some context about it instead of going in absolutely without any kind of prior knowledge. It's almost like taking a really diligent graduate assistant who can compile things really well and asking them to put together a report in an area: they can do that, and it will help you, but you had better not trust it all the way. And it's not a substitute for doing your own reading and your own research. That's been my take on it: it's phenomenal at what it'll do.
Speaker B: So, Craig, how accurate have you found it? Have you found it getting things wrong in some places, or has it been pretty spot on?
Speaker A: It's been pretty spot on, from what I can tell. Now, I have not done a highly detailed analysis and checked every reference and that sort of thing, but it gives you links to the papers and all of that. So it could hallucinate, and I'm sure it does in some instances, but I've found it to be quite useful. But, at least for now, you can't take that and put it in as your literature review.
Speaker C: Yeah, I have been trying to do that, and I struggle with something, right? It's that what we value as researchers is actually hard to get the AI to value equally. So basically you ask, hey, what are the 10 most important papers on this topic, right? And then you get a bunch of references, and among those references you get a conference paper, a journal paper; and then you look at the journal paper and it has 10 citations, and things like that. So it actually makes me wonder whether the value we give to things is wrong, or whether the AI doesn't consider the elements we think are very powerful. Like, okay, what journal was this paper published in? Or how many people are actually building on this work, compared to a conference paper that may be closer to the idea I'm asking it to evaluate but was never fully developed into a journal paper?
Speaker AWell, I think there's a reason, a technical reason that happens and that's the paywalls.
Speaker ASo most of our top journals, and I know it's true in our field, but I think it's true in most fields those top journals are not open access, they're behind a paywall, which means AI is going to be less likely to find them.
Speaker AThey might be able to put some things together from an abstract or what other papers have said about Rob's paper with Franz Melanger, the big privacy paper.
Speaker AA lot of people have said things about that, so we can start to put pieces together.
Speaker ABut I think that's a huge problem and a big reason not to over rely on things like deep research.
Speaker AThey're really great for background, but I think that at least for now, that's where it stops for me.
Speaker ARob, do you want to weigh in on this?
Speaker BI think you're right.
Speaker BThe fear I have is you're taking a very reasoned, ethical approach to how you're doing things.
Speaker BAnd I'm sure there are others in the academic community that aren't.
Speaker BAnd as reviewers, as the people who are deciding what gets to be published and so forth, how is that going to be determined?
Speaker BAnd that's a real challenge that I don't know that there is a great answer to.
Speaker BAnd there will be people who start publishing way more frequently because they're letting AI do the literature review and are adding minimal human value above and beyond what the prompts do.
Speaker BTo me, this is a place where, when it comes to ethics, I think this is a huge ethical dilemma that each individual person can evaluate for themselves.
Speaker BBut how do you know what a person's ethical values were when you're the judge and gatekeeper of whether that gets to be published in our academic journals?
Speaker AThat's a good point.
Speaker AI think it's going to be much more of a problem for lower level journals.
Speaker AYou know, the top journals have people that know reviewers and editors that know these areas inside and out and can spot sloppy work a mile away.
Speaker AI don't think it's going to be a problem for those top 50 journals, top 30 journals, whatever, pick a number.
Speaker ABut when you get way down the scale, we're going to see all kinds of generative AI garbage.
Speaker AAnd we already are, right?
Speaker BAnd people begin citing the garbage, citing the, you know, stuff that AI created.
Speaker BMaybe it's good, maybe it's bad.
Speaker BYou know, that starts to make an influence as well.
Speaker BIt's a really interesting time to be considering just how that all plays out.
Speaker ASo can we switch gears?
Speaker ASo I want to talk about something that I've been pondering for a while now and wrote a little AI Goes to College newsletter article about it.
Speaker AAnd that's this idea of AI supervision.
Speaker ALet me give you the high level view.
Speaker AI think what we're going to see in the next five years or so is lots and lots of AI agents, more or less autonomous agents, working together.
Speaker AIt'll almost be like this AI Fiverr, where I need somebody to copy edit a paper.
Speaker ASo I'm going to have that agent copy editing the paper.
Speaker ANow, that agent may call an agent that knows the style of the journal or the formatting agent, and that agent may call some other agent.
Speaker AAnd we'll have all these different agents working together.
Speaker AI think what we might see with entry level jobs is workers will be supervising those agents.
Speaker ASo they'll be hiring those agents, they'll be evaluating them, they'll be firing them, they'll be coordinating them.
Speaker AIt really is all of those things that managers do, but with these autonomous agents.
Speaker AAnd I know that's a little bit out there, but I'm pretty convinced this is where the world is going to go and, you know, five years, 10 years, whatever it is.
Speaker ASo I don't know.
Speaker AWhat do you think?
Speaker BYeah, I think we are.
Speaker BAnd I think the challenge is going to be at what point you start trusting, at what point you develop the trust that your agent is truly representing what the person wants it to.
Speaker BAnd so if I'm going to actually have it make decisions on my behalf, at what point do I develop that trust where I let it make those decisions for me?
Speaker BAnd so that's basically what a manager does, right?
Speaker BIs they try to get their employees to that place where they're making the decisions consistent with how the organization would or how the manager would.
Speaker BBut developing that level of trust, especially as different agents work with each other, I'm honestly, as a cybersecurity guy, concerned that the well could be poisoned and you may not know it.
Speaker BAnd I think that's something we have to seriously consider.
Speaker BBecause when you're taking the human element out of those different pieces, you're less likely to have a person step in and say, oof, that looks like a phishing attack, or whatever that thing is.
Speaker BSo I think that's where it's going.
Speaker BBut I also think it's a place where we need to tread carefully and not just jump in and say, just because it can do it, we should. Correct?
Speaker CI think my concern, and probably what you mean by supervision is the autonomy part.
Speaker CI mean, what is really being autonomous?
Speaker CWhat is really having an autonomous AI?
Speaker CHow we set the boundaries in which they can actually be autonomous or can't be autonomous?
Speaker CRight, right.
Speaker CI personally don't see it yet.
Speaker CI mean, we have come a long way with AI from what we had even three or four years ago, but still, I think autonomy is a big, big, big word to actually be using.
Speaker CI mean, similar to what we were just chatting about, would you actually let the AI do this for you, trust completely that it did the job, and then just supervise the results?
Speaker COr do you really want to have an AI assisting the work and let a person do the work?
Speaker AI mean, we've been trusting algorithms for decades.
Speaker AYou know, that's what most of automation is, right?
Speaker AWe just have everything from a factory floor to payroll processing to automated grading.
Speaker AI think this is an extension of that.
Speaker AAlthough it's a lot harder to audit and to know what it's going to do, because it's non-deterministic and generative, which I think is part of what you guys were getting at.
Speaker ADo you think we ought to start teaching this?
Speaker AThat's where I was going with it.
Speaker AI think we need to start considering having AI supervision courses where they learn.
Speaker AHow do you determine the trustworthiness of the agent?
Speaker AHow do you put the guardrails in place?
Speaker AHow do you evaluate their performance?
Speaker AHow do you make sure they're communicating effectively and that they're coordinated correctly?
Speaker AI think we need to start really working on that.
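The supervision skills listed here, judging an agent's trustworthiness, evaluating its performance, and deciding whether to keep "hiring" it, can be sketched in a few lines. This is a minimal illustration with hypothetical names (`AgentRecord`, `review_roster`), not any real agent framework:

```python
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """Track an agent's reviewed outputs so a human supervisor can judge trust."""
    name: str
    outcomes: list = field(default_factory=list)  # True = output passed human review

    def record(self, passed: bool) -> None:
        self.outcomes.append(passed)

    def trust_score(self) -> float:
        """Fraction of reviewed outputs that passed; 0.0 if nothing reviewed yet."""
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0


def review_roster(agents, threshold=0.8):
    """Partition agents into 'keep' and 'fire' based on reviewed performance."""
    keep, fire = [], []
    for agent in agents:
        (keep if agent.trust_score() >= threshold else fire).append(agent)
    return keep, fire
```

The point is not the code but the managerial loop: record outcomes, compute a trust measure, and act on it, exactly what a manager does with human employees.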
Speaker BWell, I think about from the information systems world, the systems development lifecycle class almost is what you described, right?
Speaker BThose are tasks, those are steps that we were very much doing with human-made systems.
Speaker BBut as we begin piecing together these off the shelf systems, these AI agents working together, it fits within the same framework of a lot of what I believe is already being taught.
Speaker BIt's just pivoting from "we're developing a new ERP system" to "we're utilizing these AI chatbots to do different things."
Speaker BSo it really is a pivot.
Speaker BIt may change kind of the framework we might follow, but it's similar thinking to what we've thought about for years.
Speaker ASo, Rob, for the listeners who are not in IS: ERP is an enterprise system. Ours, for example, is Workday.
Speaker AI'm sure you guys have your own systems that kind of do lots of stuff around the university.
Speaker ASo give us the 30-second version of what the SDLC is.
Speaker BSo the Systems Development Lifecycle is a process that you go through from initial conceptualization, to figuring out what the system should do, to how you implement it, to how you evaluate and test it to see if it works, to ultimately how you deploy it, and then how you monitor it.
Speaker BThe initial version was seven steps.
Speaker BThere are various variations on that, but it really is: how do we take what we want to do, figure out how we're going to do it, do it, and then evaluate whether it's doing what we thought it would, through a very systematic process.
Speaker ARight.
Speaker AIt's kind of a step-by-step process.
Speaker AYou start with figuring out what it needs to do and go all the way through maintaining the system.
Speaker AOkay.
Speaker ASo just want to make sure we kind of leveled that out a little bit.
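The seven-step lifecycle Rob describes can be jotted down as an ordered checklist. The phase names below are one common variant (there are many), not a canonical list:

```python
# One common seven-phase variant of the Systems Development Lifecycle (SDLC).
SDLC_PHASES = [
    "Planning",               # initial conceptualization
    "Requirements analysis",  # figure out what the system should do
    "Design",                 # figure out how it will do it
    "Implementation",         # build it
    "Testing",                # evaluate whether it works as intended
    "Deployment",             # put it in users' hands
    "Maintenance",            # monitor and maintain it over time
]


def next_phase(current: str):
    """Return the phase that follows `current`, or None after the last one."""
    i = SDLC_PHASES.index(current)
    return SDLC_PHASES[i + 1] if i + 1 < len(SDLC_PHASES) else None
```

The ordering is the point: each phase's output feeds the next, which is why the same framework can wrap around assembling AI agents as easily as building an ERP system.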
Speaker CI want to give you an example that is really related to what you said, one that I try to use in my class.
Speaker CRight.
Speaker CAnd we actually have SIEM systems trying to monitor thousands of logs and events, things that are happening on every machine.
Speaker CRight.
Speaker CAnd then, of course, trying to identify traffic abnormalities, and trying to identify, okay, there may be an attack happening here or there.
Speaker CSo that's a great task that you can handle with something like AI, right?
Speaker CBut again, you need supervision, right?
Speaker CYou don't want to have false positives there, or worse, false negatives.
Speaker CRight.
Speaker CSo you need that supervision for those agents that you can have.
Speaker CSo you can have a lot of help, because they process information more efficiently.
Speaker CInstead of having 20 people just looking at every single log file, trying to identify if we have something which is abnormal there, we can actually use those.
Speaker CBut supervision means that we still need to be careful about what type of decisions we make based on the information that we are getting.
Speaker AAbsolutely.
Speaker CAnd then who makes the decision?
Speaker CRight.
Speaker CI mean, like, let's say that we decide, okay, we are definitely under attack, and then we need to actually do this, this, or that with those routers.
Speaker CThat supervision also requires the same or an even higher level of understanding of what's going on to be able to make that decision at that moment.
Speaker CIt can't be that I don't understand how traffic behaves in a network, and just because the AI is telling me this, I decide what to do with that.
Speaker CI have to understand how traffic goes, what type of protocols I'm using, what is a normal pattern that I can use, what are the protocols that I usually see.
Speaker CAll those elements need to be understood in order to actually be able to supervise that AI.
Speaker CBut yeah, I agree that, you know, they are helpful.
Speaker CAnd having those will be extremely, extremely helpful.
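Carlos's scenario, letting a monitoring agent process the bulk of log events while humans review the ambiguous ones, is essentially threshold-based routing. A minimal sketch, where the `anomaly_score` field and both cutoffs are made-up illustrations and we assume some upstream model has already scored each event:

```python
def triage_events(events, auto_ok=0.2, auto_alert=0.9):
    """Route scored log events three ways: clearly benign ones are ignored,
    clear attacks are alerted on, and ambiguous ones are escalated to a
    human analyst, the 'supervision' step."""
    ignore, escalate, alert = [], [], []
    for event in events:
        score = event["anomaly_score"]  # assumed to come from an upstream model
        if score < auto_ok:
            ignore.append(event)
        elif score >= auto_alert:
            alert.append(event)
        else:
            escalate.append(event)  # human supervision for uncertain cases
    return ignore, escalate, alert
```

Where exactly the two cutoffs sit is the supervision decision: moving them trades analyst workload against the false positives and false negatives Carlos warns about.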
Speaker AThat's a kind of important point.
Speaker AThat AI may not do 100% of the work, but what it might do is 80% of the work, let's say.
Speaker AAnd that lets the human expert really focus on that 20%, instead of spending a lot of time filtering through to figure out which are the edge cases, the really difficult decisions or assessments.
Speaker AI mean, we see this with grading.
Speaker AThis is why I'd love to have grading agents.
Speaker AIf I'm lucky enough to have one helping me, I'd tell it: I want you to do the easy ones.
Speaker ALike, if I give an Excel project, you take care of the easy ones.
Speaker AIf you have to spend more than a couple of minutes trying to figure out how to grade part of an assignment, kick it over to me.
Speaker ASo now instead of 90 to look at, I've got 10 or 12 where I can really dig in and add value.
Speaker ARight?
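The grading policy described here, where the agent handles what it can grade quickly and kicks the hard cases to the instructor, is confidence-based routing. A minimal sketch, assuming a hypothetical `grade_fn` that returns a grade plus a confidence for each submission:

```python
def split_grading(submissions, grade_fn, confidence_cutoff=0.9):
    """Let an AI grader keep submissions it is confident about and
    kick everything else over to the instructor."""
    auto_graded, for_instructor = [], []
    for sub in submissions:
        grade, confidence = grade_fn(sub)  # hypothetical grading-agent call
        if confidence >= confidence_cutoff:
            auto_graded.append((sub, grade))
        else:
            for_instructor.append(sub)  # the human adds value here
    return auto_graded, for_instructor
```

The human stays the final authority on the ambiguous cases; the agent only clears the routine ones.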
Speaker ASo we are coming up on our close.
Speaker ACarlos, any last comments?
Speaker AAny anything you want to leave our listeners with?
Speaker CI just want to thank you.
Speaker CIt was fantastic.
Speaker CThat was a fun conversation to have.
Speaker AWe're delighted you could join us.
Speaker ARob, any last thoughts?
Speaker BNope.
Speaker BI just want to express my thanks to Carlos for coming and hanging out with us today.
Speaker BIt was a pleasure to chat with you and I look forward to future conversations where you can share with us further things you're doing with generative AI and other AI tools.
Speaker AAll right, well, this has been another episode of AI Goes to College.
Speaker AYou can find it wherever you find podcasts, or you can go to aigoestocollege.com and follow; we've got all kinds of one-click links that'll set you right up.
Speaker AAll right, thanks everybody.
Speaker CThank you.