May 13, 2025

The Ethical Use of AI in Academia: A Conversation with Carlos I. Torres

Imagine walking into a classroom where AI isn't the elephant in the room - it's a welcome partner in learning. That's exactly what's happening in Carlos I. Torres's information security classes at Baylor University. Instead of joining the chorus of educators crying "Ban AI!", Torres is asking a more intriguing question: What if we taught students to dance with artificial intelligence rather than fight against it?

In this fascinating discussion, Torres pulls back the curtain on his groundbreaking approach. He's not just teaching information security; he's reimagining how students learn in an AI-powered world. His students don't hide their use of AI - they showcase it, document it, and most importantly, learn to think critically about it.

But here's what makes this conversation truly compelling: Torres isn't just preparing students for exams; he's equipping them for a future where AI will be as common as smartphones are today. As we explore the ethical tightropes and practical challenges of this approach, one thing becomes crystal clear: the future of education isn't about fighting AI - it's about learning to harness its power while keeping our human wisdom firmly in the driver's seat.

Takeaways:

  • The integration of AI within higher education necessitates a nuanced understanding of its capabilities and limitations.
  • Carlos I. Torres emphasizes the importance of guiding students on effective AI usage to enhance their learning experience.
  • Engaging students with AI prompts fosters critical thinking and deeper engagement in research assignments.
  • The assessment of student work should encompass both the final product and the process of interacting with AI tools.
  • Ethical considerations surrounding AI usage in academia are paramount, necessitating discussions around transparency and integrity.
  • The future workforce must be equipped with skills to supervise AI agents, ensuring their outputs are trustworthy and effective.

Companies mentioned in this episode:

  • Washington State University
  • Baylor University

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

01:02 - Introducing a Guest to the Podcast

04:24 - Integrating AI in Education

18:26 - The Ethics of AI in Education

23:53 - Navigating the Gray Areas of Academic Integrity and AI

32:34 - The Future of AI Supervision

Transcript
Speaker A

Welcome to another episode of AI Goes to College, the podcast that helps you figure out what in the world is going on with generative AI and how it's going to affect higher education.

Speaker A

I'm joined once again by my friend, colleague, and co-host, Robert E. Crossler, Ph.D., from Washington State University.

Speaker A

Rob, we have a guest today.

Speaker B

Thank you, Craig.

Speaker B

I am super excited to welcome Carlos I. Torres, who is not only a friend of mine, a former doctoral student of mine, and a co-author of mine, but also an assistant professor in the Information Systems and Business Analytics department at the Hankamer School of Business at Baylor University. He received his Ph.D. at Washington State University in Information Systems a few years back.

Speaker B

And his research focuses on the intersection of humans and technology, really with a focus on information security. He is very creative, and I really do look forward to talking to him about how he is using AI to help him be a better instructor in the classroom.

Speaker A

Carlos, welcome.

Speaker C

Thank you for having me.

Speaker C

Craig and Rob, it's a pleasure to be here with you today and talk about what I do or what I haven't done yet with AI.

Speaker A

All right, so give us a big picture, Carlos.

Speaker A

How are you using AI in your classes?

Speaker C

I teach information security, right?

Speaker C

That's the class that I teach, the Introduction to Information Security.

Speaker C

And with that class, there are a few things that you can do with AI and there are things that you cannot do with AI, like the labs, you really can't.

Speaker C

You have to do them on your own, right?

Speaker C

Because there is no way that you have AI to actually do those for you.

Speaker C

But for written assignments, for research assignments, for questions that we ask, there are things that the students can and can't do with AI, and that's where I'm actually using AI with them.

Speaker C

I tell them those are the type of questions that I want you to ask AI.

Speaker C

I want you to check what you are, what prompts you are using.

Speaker C

So I tell them, this is the question that I want you to answer, but I want you to provide what it gave you and how you actually prompted the AI to provide the answer.

Speaker C

So you provide the first answer and the last answer and all the prompts that you produced in the middle.

Speaker C

And that's how I'm doing things with AI with my students.
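
To make Carlos's submission format concrete, here is a minimal sketch of how a student's prompt log might be packaged for grading. The structure, field names, and example content are illustrative assumptions, not his actual rubric.

```python
# Illustrative sketch only: one way a student could package the "first
# answer, last answer, and every prompt in between" that Carlos asks for.
# All field names and example content are hypothetical.
import json

submission = {
    "question": "Explain how SQL injection works and how to prevent it.",
    "prompts": [
        "What is SQL injection?",
        "Give a concrete example against a login form.",
        "Rewrite this as a short research memo with sources.",
    ],
    "first_answer": "SQL injection is ...",  # the AI's first response
    "final_answer": "SQL injection is an attack in which ...",  # the AI's last response
    "reflection": "The first answer was shallow, so I asked for ...",
}

# Serializing to JSON gives the instructor one reviewable artifact that
# shows both the final product and the process behind it.
print(json.dumps(submission, indent=2))
```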

Speaker B

So, Carlos, let me interrupt you for a second.

Speaker B

And how, when you do that, do you assess what they turn in?

Speaker B

Are you assessing how they prompt?

Speaker B

Are you assessing how their prompts change, or are you assessing the final document that they ultimately create?

Speaker C

I assess the final document and how they came to that final document, through the prompts that they used and the type of answers that they asked the AI to produce.

Speaker C

And what I have found is that I have really interesting responses also from the AI, some similar to the students'.

Speaker C

Right.

Speaker C

Some students would do more research; others would give really shallow answers.

Speaker C

And it really depends on how they actually prompt the AI to produce the answer.

Speaker C

Sometimes we have very shallow responses.

Speaker C

And you see the prompts that they ask, basically one or two prompts, almost copying and pasting what I told them they should actually do.

Speaker C

Sometimes they go really deep into the questions, and that really produces that richer response on the assessment that they do.

Speaker A

Carlos, let me play a little bit of devil's advocate here.

Speaker A

Why not just ban AI use?

Speaker A

Why not just tell them, don't use AI at all?

Speaker A

Wouldn't that be easier for you?

Speaker C

I don't think we should ban AI.

Speaker C

I mean, that's personally my position.

Speaker C

I understand why some classes may need to ban AI because they need to learn something like coding.

Speaker C

You may want to have, okay, this one.

Speaker C

You don't want to use that with AI.

Speaker C

But when it comes to my class, I don't think it's needed.

Speaker C

To give you an example: for my labs, they have to go into the lab and they have to do them on a virtual machine.

Speaker C

So there is no way that they can have the AI doing things for them in those labs. And on assessments, I can basically do an exam in class.

Speaker C

The AI won't be able to answer the exam for them.

Speaker C

So I think I can combine that in my class because of the content of my class. When I want them to do some research, when I want them to write an essay, they can use AI.

Speaker C

And I think they will use AI no matter what.

Speaker C

So why should I just ban the use of AI?

Speaker A

Right.

Speaker A

As you were talking through the differences and the range of answers that you get, it seems like one of the things you're doing is teaching them how to use AI effectively.

Speaker A

And so it's not just, okay, they're going to use it anyway.

Speaker A

It's like, hey, you're going to be using this when you get out in the workforce.

Speaker A

So let's make sure you know how to use it in the right way.

Speaker A

Is that one of your goals?

Speaker C

That's one of the things that I try to do, yes.

Speaker C

I try to actually make them understand that they can use it efficiently.

Speaker C

And also that they have to validate the answers that they are getting from the AI, that they cannot just believe everything because the AI said so.

Speaker C

And they have to really know exactly what it is and what they are actually referring to.

Speaker C

We talk in class about any type of hallucinations that they may have found when they were actually producing any of the things that we were discussing.

Speaker C

So, yeah, I mean, I think it's part of what we should do and taking advantage of the technology.

Speaker A

Yeah, absolutely.

Speaker B

So, Carlos, as you've been doing this and teaching information security class, what are you doing to discuss, like, governance and some of the things that, when we look at going into the workforce and the deployment of AI, is that coming up in any of your conversations in the class?

Speaker B

And how do you guys talk about the importance of getting it right when it's being deployed in the workplace?

Speaker C

So that's a great point.

Speaker C

I haven't gotten to that point with the students yet, of getting it deployed right.

Speaker C

But that's a question that is in my mind, and those are the things that I'm actually getting into my research.

Speaker C

How you can actually, from a security perspective, get this deployed right.

Speaker C

What are the elements that you really need to take into consideration?

Speaker A

That's a big question.

Speaker A

I'm not sure anybody's getting that right just now.

Speaker C

And that's why it's a good research question.

Speaker C

Right.

Speaker C

What is the.

Speaker C

I mean, what are the best ways to actually implement them?

Speaker B

Well, and I would say that becomes a very interesting conversation with students, even if we don't know the right answer.

Speaker B

Because as we challenge students, as we help them to begin to say, you know, what, industry doesn't have this figured out yet, and you're about ready to jump into a job and into a role where it's a moving target, and how do you participate in a conversation when there currently is no perfectly right solution?

Speaker B

I think it's an exciting opportunity for our students to enter into this brave new world.

Speaker A

I agree.

Speaker A

Well, employers are absolutely looking to our students, especially in our field, but more broadly, too, to help them figure out generative AI.

Speaker A

I've had advisory board members say that over and over and over again, where we're looking to the new graduates because we don't really know this stuff.

Speaker A

I want to go back and see if we can't dig into kind of the philosophy behind your approach, Carlos. And if I'm wrong here, tell me. I'm going to make some assumptions and see if I can make this relevant beyond security.

Speaker A

So what I hear you saying is that you're separating out the learning from the assessment.

Speaker A

So you've got activities where you want them to learn: learn about generative AI, learn about whatever the concepts are they're researching, whatever it might be.

Speaker A

But then you've got ways that you're doing assessment that still might involve learning, like the labs.

Speaker A

Yeah, but we often bundle learning and assessment together, and that gets a little tough to do with generative AI if you don't do it in a very skillful way.

Speaker A

So I like your general approach that, yeah, it's all learning at some level.

Speaker A

But, you know, when I'm going to test you, we're going to be face to face.

Speaker A

You're not going to be able to use AI. If I'm going to assess you in the lab, you're not going to have AI to be able to, you know, do the configuration or whatever it is you need to do in the lab.

Speaker A

But when you're in the process of trying to learn this stuff, you can use generative AI appropriately.

Speaker A

Now, am I anywhere close to what you're actually thinking or am I totally making that up?

Speaker C

I think you are close.

Speaker C

I do. Beyond the assessments that they actually produce with AI, I have parts of the assessment for the class that I know they won't be able to use AI for.

Speaker C

But those assessments, they are also part of the grade that they get.

Speaker C

I call them engagement.

Speaker C

Right.

Speaker C

So that's part of things that they have to do beyond what they actually do in class or beyond the exam.

Speaker C

Those are engagement points that they are getting with those assessments.

Speaker C

Now, I can tell you a story of something very interesting that I did last semester with my class.

Speaker C

I asked them one of those questions, okay, you can have AI.

Speaker C

And then I told them, okay, you're going to ask the AI to actually grade your assessment and report the grade that the AI gave you.

Speaker A

How did that work?

Speaker C

So what's your guess?

Speaker A

AI is really famous for giving B's and B pluses regardless of the level of the work.

Speaker C

Yeah, they were all B's and B pluses.

Speaker A

Yeah, AI doesn't want to quite go overboard, but, you know, this is solid B work right here.

Speaker B

Well, here's a question.

Speaker B

Two questions.

Speaker B

One, how did your students respond?

Speaker B

And two, if AI is right, are we, as faculty members, too generous with how many A's we give out?

Speaker C

So this is, this is what is interesting.

Speaker C

So I told them, okay, I mean, how would you grade your work if I asked you to produce your work?

Speaker C

Right.

Speaker C

I mean, you are not using AI.

Speaker C

And of course, they all say, I would give myself an A.

Speaker C

And if they are very honest and they say, I didn't work that much on this, they would probably say, I'm going to give myself an A minus.

Speaker C

But then they come and they say, it is interesting that the AI is producing the work.

Speaker C

The final product was actually produced by the AI.

Speaker C

And yet when I asked them to grade according to the same prompt, the original prompt, this is the topic that you have to research.

Speaker C

The AI graded its own work very harshly, right?

Speaker C

So for them, it was interesting.

Speaker C

They were kind of, boom, what's going on here?

Speaker C

I mean, why?

Speaker C

Why can I not get an A out of something that I thought was an A?

Speaker C

So that was creative, in the way that they were like: okay, probably, you know, what the AI says is not entirely right.

Speaker C

I have to be very careful on evaluating the output of the AI and trying to introduce my own thoughts, my own understanding of the topic, and my own creative thinking, my own human element, to actually make this really valid.

Speaker A

Right.

Speaker A

Interesting, Interesting.

Speaker A

Did the students not feel like it was fair, or were they kind of, maybe I didn't do quite as good a job as I thought I did?

Speaker C

So, in the conversation that I had with them, at first they thought that it was interesting that the AI wouldn't give an A to that work.

Speaker C

It was produced by the AI.

Speaker C

It was complete.

Speaker C

So that was the first question.

Speaker C

Then they weren't really asking us whether it was fair.

Speaker C

I graded the assignments based on what I wanted them to write, on the level and type of prompts that they asked, and on how deep they wanted to go in the research, using AI to produce the answer.

Speaker C

I didn't use the grade that they provided.

Speaker C

I was trying to tell them, let's compare this, right?

Speaker C

And the idea they had was, okay, I have to be more careful about checking the output that I'm getting from the AI, rather than just copying and pasting and saying, okay, this is done.

Speaker A

That's a great outcome.

Speaker A

AI is really good at critiquing things.

Speaker A

You know, it's kind of a joke that it always gives a B, but if you ask it to critique certain aspects of a paper or whatever it is you're doing, it's pretty good about giving some reasonable feedback.

Speaker A

I put a full paper in and asked it to act as a senior editor at a top journal, and I named the journal, and asked it to give me a review of the paper and make a decision.

Speaker A

Would you reject it?

Speaker A

Revise and resubmit, minor revision, major revision, whatever.

Speaker A

And it was about 3/4 or 80% of what I would have expected, you could say. When you've been doing this as long as we have, and been rejected at top journals as many times as I have, you kind of know what's going to get criticized.

Speaker A

But there were a few things that hadn't occurred to me, and, you know, that can save a fair amount of time.

Speaker B

Yeah, this comes up time and time again, Craig, and I think it's an important takeaway of generative AI: how you use it and what you ask it to do is where you get the value.

Speaker B

And one of those places is in the critique.

Speaker B

Even with something I think we're going to talk about later in this episode, preparing for class and preparing your syllabus.

Speaker B

You know, there are ways that it can be super valuable and just cut down on some of the busywork time of critically evaluating anything from a research paper to a homework assignment where students may or may not understand what you're asking.

Speaker B

It can be very, very valuable in those places.

Speaker A

Well, that's a good transition into the next question.

Speaker A

Carlos, how do you use AI to help you kind of prepare for and structure your classes?

Speaker C

So I do use AI in two different ways.

Speaker C

When it comes to preparing my classes, the first thing is that I talk to the AI about topics or questions that I want to ask the class, about the content that I'm going to talk about, about things that may be happening.

Speaker C

And the other way that I actually use AI, which is interesting, is trying to produce my exams.

Speaker C

I ask for potential questions that I can ask about this topic, depending on what type of exam I'm going to give.

Speaker C

And I have some very good elements.

Speaker C

Sometimes I get very tough questions that would probably make people fail.

Speaker C

But in general, those are things that I do with this type of tools.

Speaker A

Well, you know what's great is you can crank out so many questions that it doesn't matter if some of them are terrible. So what? You need 20 questions, you ask it for 30 or 40, and, you know, if you throw away a bunch of them, it's no big deal.

Speaker A

I've also asked it to critique my questions.

Speaker A

You know, tell me where these might be unintentionally confusing or it might have some redundant answers or whatever it might be.
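
The over-generate-then-prune workflow described here can be sketched roughly as follows. The `ask_llm` helper is a hypothetical stand-in for whatever chat tool you use, not a real API.

```python
# Hedged sketch of the exam-question workflow: over-generate, critique,
# keep the best. ask_llm is a hypothetical placeholder, not a real API call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to whichever assistant you use")

topic = "introductory information security"
generate = (
    f"Write 40 multiple-choice questions on {topic}, four options each, "
    "with the correct answer marked and difficulty varied from recall to application."
)
critique = (
    "Critique the questions below. Flag any that are ambiguous, have "
    "redundant answer options, or can be answered without understanding the topic."
)

# Typical use: ask for roughly double what you need (40 for 20), prune the
# weak ones, then run the survivors back through a critique pass.
# questions = ask_llm(generate)
# review = ask_llm(f"{critique}\n\n{questions}")
```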

Speaker A

I'm curious, do you use ChatGPT or do you use Claude or what do you use?

Speaker C

I use ChatGPT also.

Speaker C

I use Copilot.

Speaker B

Okay, so do you have a favorite for various different things, Carlos?

Speaker C

I use Copilot because that's kind of the authorized one that we have here at Baylor.

Speaker C

So I use Copilot for those things and I also use ChatGPT for my own things.

Speaker A

Yeah, my thinking on that has changed a little bit.

Speaker A

I used to recommend Poe to everybody.

Speaker C

Okay.

Speaker A

Because you can use all the different models, like 30 something different models.

Speaker A

But boy, I just use ChatGPT almost all the time.

Speaker A

And then Gemini for some quick stuff.

Speaker A

Poor Claude has kind of gotten left behind for me, and I used to love Claude.

Speaker C

What about Perplexity?

Speaker C

Have you used it?

Speaker A

Yeah, I was a huge fan of Perplexity at first, and then I've had some bad results with it.

Speaker A

It hallucinated on a deep research report, and so I'm trying it again, trying to go back to it. But with deep research in Gemini and ChatGPT, you know, they took over a lot of what I used to use Perplexity for.

Speaker A

Perplexity would cite its sources.

Speaker A

Well, now you can get ChatGPT and Gemini to do the same thing.

Speaker A

So I don't know, it'll be interesting to see how it all plays out.

Speaker C

Have you tried the GPT Scholar one?

Speaker A

No, I haven't used that one.

Speaker C

I want to hear from somebody who has used that one.

Speaker A

Rob, have you used it?

Speaker B

Haven't played with that one yet, so I need to put it on my list.

Speaker A

The ever growing list.

Speaker A

So let's switch gears a little bit.

Speaker A

Rob, unless you had any follow ups.

Speaker B

Exactly what Carlos is talking about.

Speaker B

As I've had hallway conversations and talk to different people, this is the nature of where a lot of professors are right now.

Speaker B

They're trying a few things.

Speaker B

There's some stuff that they're getting comfortable with that's working and they're trying to find those creative ways to get students engaging with AI in the classroom in a way that prepares them for the changing marketplace.

Speaker B

Carlos is right in line with a lot of the conversations I'm having.

Speaker A

Rob, I don't know if you did it on purpose or not, but that is a perfect setup for the next question.

Speaker A

And Carlos, what do you think students need to know about AI as they enter the workforce?

Speaker A

What skills, what capabilities do you think they need?

Speaker C

So generally, I think they need to know how to use it.

Speaker C

They have to use their own rational element, their own reading, to complement that. But also, coming from a security mindset, the ethical use of AI is an important thing: to actually learn what is allowed, what is not allowed, what is right and what isn't right.

Speaker C

And that ethical use of AI is a big question, you know. I mean, what may be ethical for someone may not be ethical for somebody else.

Speaker C

So that is a huge challenge to actually try to define what is the ethical use of AI.

Speaker C

So I think that's a challenge.

Speaker C

I mean, it's not telling them what is right or what is good, but actually having them understand it from their own values, principles, and education.

Speaker C

What is the ethical use of AI?

Speaker A

That's a great point.

Speaker A

Have you heard about this new concept called post-plagiarism, where plagiarism isn't really a thing the way it used to be?

Speaker B

I had not heard this.

Speaker B

How is that defined?

Speaker B

And what are people saying is the path forward in that world?

Speaker A

Well, I literally just ran across this this morning, so I have not had a chance to dig into it.

Speaker A

But it got me thinking, you know, if I take some idea, some whatever, and work with AI to produce it, and AI writes the final whatever, let's say it's a short report.

Speaker A

And, kind of along the lines of what Carlos has his students do, I prompt the AI and get it to do some research.

Speaker A

I look at what it comes up with.

Speaker A

I tailor my next prompt based on a combination of what it's saying and what I'm thinking.

Speaker A

And then at the end of the session, I say, well, just write this up in an executive summary.

Speaker A

And then I copy and paste that executive summary and put my name on it.

Speaker A

I'm not so sure that's really plagiarism because I helped create that document just as if I had a conversation with the two of you, and you helped me out a little bit, and then I gave it to some ghostwriter to write.

Speaker A

I mean, I'm not saying it is not different.

Speaker A

I'm saying I'm wondering whether or not that's really plagiarism.

Speaker A

This came up.

Speaker A

I was in a SIM (Society for Information Management) meeting yesterday, and the same thing came up.

Speaker A

Is that really unethical to do that?

Speaker A

So what do you all think?

Speaker B

Here's where I still come down on this, Craig, and I'll let Carlos jump in if he's got a different thought.

Speaker B

Transparency is still important in that.

Speaker B

And so if I used generative AI technologies to create that, I can still take credit for it and put my name on it, but somewhere it should be communicated.

Speaker B

That I did it with the assistance of generative AI.

Speaker B

So that way the person who is consuming it knows how that was written.

Speaker B

And I guess one of the things I communicate to my students is transparency is important.

Speaker B

Let people know how this happened.

Speaker B

So when you give a document to your boss and they take it into a meeting, they know, when they get put on the spot, that it was not completely created by their interns or their junior employees.

Speaker B

You cannot put a false front on something that is going forward as a talking point.

Speaker A

Well, and that's a smart move anyway.

Speaker A

You know, you cover yourself a bit.

Speaker C

I was going to add on to that. I think plagiarism is something different in that sense.

Speaker C

It's more related to the ideas.

Speaker C

Like, I mean, and please take this from the perspective of an English-as-a-second-language speaker.

Speaker C

Right?

Speaker C

So every time I have a paper that you know is going to be an important paper, I want to get that copy edited by somebody who actually gets and fixes basically the English part of it.

Speaker C

Right.

Speaker C

Now, was that written by the copy editor or by me?

Speaker A

Right, especially when it helps you with wording.

Speaker C

Right, especially when it helps me with that type of thing, or even if it's helping me to give context to the story, like connecting one paragraph to the next.

Speaker C

It's help. I mean, I'm not saying it isn't help. But the content, the idea, is still yours.

Speaker C

If you are using AI as a tool, whether it's to write, whether it's to research, of course, you know, I can thank the copy editor.

Speaker C

You know, I have a note at the end, I can also cite, hey, I did this, and that's the transparency part.

Speaker C

I did this and I did this with AI in this way or the other way.

Speaker C

But at the end, really what matters is how you actually created this thing, the idea that you had, and how you produced the thing.

Speaker A

Right.

Speaker A

Well, a lot of journal editors have been at least softly requiring authors to use copy editors for a while.

Speaker A

I know some of them have a little bit of a money machine going with it as well, but.

Speaker A

No, I think that's a great point.

Speaker A

The reason I brought this up is I think we really need to reconsider what some of these concepts we've had in our brains for a long time really mean in this new world.

Speaker A

I'm trying to get my head around what plagiarism might be.

Speaker A

You copy and paste it.

Speaker A

You do a one-shot prompt, copy, paste; or get something off the Internet, copy, paste.

Speaker A

Okay, that's plagiarism.

Speaker A

But there's a lot of gray area there that I think we're going to need to struggle with and have some pretty honest conversations about.

Speaker A

Because, Carlos, you kind of said this early on in a different way.

Speaker A

It's not black and white.

Speaker A

It's not just ban it or have carte blanche to use it however you want.

Speaker A

We've got to teach students that moral decision making that you brought up earlier.

Speaker A

So I think that's a great point.

Speaker B

And we talk about this.

Speaker B

Craig, I've got a term that I like better than plagiarism, and that's academic integrity.

Speaker B

So if there's certain expectations for communication of transparency or tell me your prompts or whatever that is, it becomes academic integrity.

Speaker B

Whether or not it's plagiarism doesn't really matter.

Speaker B

These are the boundary conditions we should be playing in, and we should be defining them.

Speaker B

And then, you know, playing inside that sandbox is what I think the way forward is.

Speaker B

It's just what does defining that look like right now from an academic integrity perspective?

Speaker A

And academic integrity is much easier to spell than plagiarism, 100%.

Speaker A

So I think that's reason enough to just change the term.

Speaker B

So.

Speaker A

All right.

Speaker A

Well, Carlos, any last things you want to share about what you're doing before we kind of go into a more general discussion?

Speaker C

So I have one question for you.

Speaker C

I read one of your topics about deep research.

Speaker C

So what do you think?

Speaker C

Have you used those, whether ChatGPT or even those specialized AI tools for literature reviews, to search the literature and then bring ideas about something?

Speaker A

Yeah, absolutely.

Speaker A

So I was so impressed with deep research that I now pay the $200 a month for ChatGPT Pro, which gives me some insane number.

Speaker A

It's weird.

Speaker A

So I get like 120 or 130 with a really good model.

Speaker C

Okay.

Speaker A

And then I get another 120 or so with a little bit watered-down, lighter model.

Speaker A

It is fantastic for getting up to speed in an area quickly.

Speaker A

Now, it is not any substitute for doing a literature review and really thinking through the literature.

Speaker A

But if you kind of are just getting started in an area, 20 minutes, 30 minutes, you can have a report that will cover all of the main points in whatever area you're looking at.

Speaker A

Like I'm working on a paper that's using service dominant logic.

Speaker A

So it's kind of a marketing thing that basically says services are the core of economic exchange, not products.

Speaker A

And I'm using this as an overarching theory for a paper.

Speaker A

Well, I didn't know anything about it, and so I needed to come up to speed on it in a pretty short period of time.

Speaker A

I have the basics now.

Speaker A

I still need to go read the seminal papers, read the recent articles in the good journals and do all of that kind of thing.

Speaker A

But now I have some context about it, instead of just going in absolutely without any kind of prior knowledge.

Speaker A

It's almost like if you took a really diligent graduate assistant who can compile things really well and asked them to put together a report in an area: they can do that, and it will help you, but you better not trust it all the way.

Speaker A

And it's not a substitute for doing your own reading and your own research.

Speaker A

That's been my take with it.

Speaker A

It's phenomenal, what it'll do.

Speaker B

So Craig, how accurate have you found it?

Speaker B

Have you found it getting things wrong in some places, or have you found it's been pretty spot on?

Speaker A

No, it's been pretty spot on from what I can tell.

Speaker A

Now, I have not done a highly detailed analysis and checked every reference and that sort of thing, but it gives you links to the papers and you know, all of that.

Speaker A

So it could hallucinate.

Speaker A

I'm sure it does in some instances, but I found it to be quite useful.

Speaker A

But it's not ever.

Speaker A

I mean, at least not now.

Speaker A

You can't take that and put it in as your literature review.

Speaker C

Yeah, I have been trying to do that and I struggle with something.

Speaker C

Right.

Speaker C

And it's with what we value as researchers that it is actually hard to ask the AI to value that equally.

Speaker C

Right.

Speaker C

So basically you ask, hey, what are the 10 most important papers about this topic?

Speaker C

Right.

Speaker C

And then you get a bunch of references, and in those references, you get a conference paper, a journal paper, and then you look at the journal paper and it has 10 citations and things like that.

Speaker C

So actually it makes me wonder whether the value that we give to things is wrong, or the AI doesn't consider the elements that we think are very powerful.

Speaker C

Like, okay, what was the journal that this paper was published in?

Speaker C

Or how many people are actually building on this thing, compared to a conference paper that may be closer to the idea I'm asking it to evaluate but was never fully developed into a journal paper.

Speaker A

Well, I think there's a reason, a technical reason that happens and that's the paywalls.

Speaker A

So most of our top journals, and I know it's true in our field, but I think it's true in most fields, those top journals are not open access; they're behind a paywall, which means AI is going to be less likely to find them.

Speaker A

They might be able to put some things together from an abstract or what other papers have said about Rob's paper with France Bélanger, the big privacy paper.

Speaker A

A lot of people have said things about that, so we can start to put pieces together.

Speaker A

But I think that's a huge problem and a big reason not to over-rely on things like deep research.

Speaker A

They're really great for background, but I think that at least for now, that's where it stops for me.

Speaker A

Rob, where do you weigh in on this?

Speaker B

I think you're right.

Speaker B

The fear I have is you're taking a very reasoned, ethical approach to how you're doing things.

Speaker B

And I'm sure there are others in the academic community that aren't.

Speaker B

And how, as reviewers, as, you know, the people who are deciding what gets to be published and so forth, how is that going to be determined?

Speaker B

And that's a real challenge that I don't know that there is a great answer to.

Speaker B

And there will be people who start publishing way more frequently because they're letting AI do the literature review and are adding minimal human value above and beyond what the prompts do.

Speaker B

To me, this is a place where, when it comes to ethics, I think this is a huge ethical dilemma that each individual person can evaluate for themselves.

Speaker B

But how do you know what a person's ethical values were when you're the judge and gatekeeper of whether that gets to be published in our academic journals?

Speaker A

That's a good point.

Speaker A

I think it's going to be much more of a problem for lower level journals.

Speaker A

You know, the top journals have reviewers and editors that know these areas inside and out and can spot sloppy work a mile away.

Speaker A

I don't think it's going to be a problem for those top 50 journals, top 30 journals, whatever, pick a number.

Speaker A

But when you get way down the scale, we're going to see all kinds of generative AI garbage.

Speaker A

And we already are, right?

Speaker B

And people begin citing the garbage, and citing the, you know, stuff that AI created.

Speaker B

Maybe it's good, maybe it's bad.

Speaker B

You know, that starts to make an influence as well.

Speaker B

It's a really interesting time to be considering just how that all plays out.

Speaker A

So can we switch gears?

Speaker A

So I want to talk about something that I've been pondering for a while now and wrote a little AI Goes to College newsletter article about it.

Speaker A

And that's this idea of AI supervision.

Speaker A

Let me give you the high level view.

Speaker A

I think what we're going to see in the next five years or so is lots and lots of AI agents, more or less autonomous agents, working together.

Speaker A

It'll almost be like this AI Fiverr, where I need somebody to copy edit a paper.

Speaker A

So I'm going to have that agent copy editing the paper.

Speaker A

Now, that agent may call an agent that knows the style of the journal or the formatting agent, and that agent may call some other agent.

Speaker A

And we'll have all these different agents working together.

Speaker A

I think what we might see with entry-level jobs is workers will be supervising those agents.

Speaker A

So they'll be hiring those agents, they'll be evaluating them, they'll be firing them, they'll be coordinating them.

Speaker A

It really is all of those things that managers do, but with these autonomous agents.

Speaker A

And I know that's a little bit out there, but I'm pretty convinced this is where the world is going to go and, you know, five years, 10 years, whatever it is.

Speaker A

So I don't know.

Speaker A

What do you think?

Speaker B

Yeah, I think we are.

Speaker B

And I think the challenge is going to be at what point you stop trusting, or you develop the trust, that your agent is truly representing what the person wants it to.

Speaker B

And so if I'm going to actually have it make decisions on my behalf, at what point do I develop that trust where I let it make those decisions for me?

Speaker B

And so that's basically what a manager does, right?

Speaker B

Is they try to get their employees to that place where they're making the decisions consistent with how the organization would or how the manager would.

Speaker B

But developing that level of trust, especially as different agents work with each other, I'm honestly, as a cybersecurity guy, concerned that the well could be poisoned and you may not know it.

Speaker B

And I think that's something we have to seriously consider.

Speaker B

Because when you're taking the human out of the element of those different pieces, you're less likely to have that person step in and say, oof, that looks like a phishing attack or whatever that thing is.

Speaker B

So I think that's where it's going.

Speaker B

But I also think it's a place where we need to tread carefully and not just jump in and say, just because it can do it, we should, correct?

Speaker C

I think my concern, and probably what you mean by supervision, is the autonomy part.

Speaker C

I mean, what is really being autonomous?

Speaker C

What is really having an autonomous AI?

Speaker C

How do we set the boundaries within which they can actually be autonomous or can't be autonomous?

Speaker C

Right, right.

Speaker C

I personally don't see.

Speaker C

I mean, we have come a long way with AI now from what we had even three or four years ago, but still, I think autonomy is a big, big, big word to actually be used.

Speaker C

I mean, would you actually, similar to what we were just chatting about, would you actually let the AI do this for you, trust completely that it does the job, and then just supervise the results?

Speaker C

Or do you really want to have an AI assisting the work and let somebody do the work?

Speaker A

I mean, we've been trusting algorithms for decades.

Speaker A

You know, that's what most of automation is, right?

Speaker A

We just have everything from a factory floor to payroll processing to automated grading.

Speaker A

I think this is an extension of that.

Speaker A

Although it's a lot harder to audit and to know what it's going to do, because it's non-deterministic and generative, which I think is part of what you guys were getting at.

Speaker A

Do you think we ought to start teaching this?

Speaker A

That's where I was going with it.

Speaker A

I think we need to start considering having AI supervision courses where they learn these things: How do you determine the trustworthiness of the agent?

Speaker A

How do you put the guardrails in place?

Speaker A

How do you evaluate their performance?

Speaker A

How do you make sure they're communicating effectively and that they're coordinated correctly?

Speaker A

I think we need to start really working on that.
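
As a thought experiment, a supervision loop along the lines Craig describes might look like the sketch below. The trust scores, guardrail check, and escalation rule are all hypothetical illustrations of the idea, not any existing agent framework.

```python
# Minimal, hypothetical sketch of "AI supervision": route a task to the
# most-trusted agent, score its output against a guardrail, update trust,
# and escalate to a human when nothing passes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[str], str]
    trust: float = 0.5  # updated as the supervisor evaluates outputs

def supervise(task: str, agents: list[Agent],
              check: Callable[[str], float], threshold: float = 0.8) -> str:
    """Try agents in order of trust; hand the task to a human if none pass."""
    for agent in sorted(agents, key=lambda a: a.trust, reverse=True):
        output = agent.run(task)
        score = check(output)                          # the guardrail step
        agent.trust = 0.9 * agent.trust + 0.1 * score  # crude trust update
        if score >= threshold:
            return output
    return f"ESCALATED TO HUMAN: {task!r}"             # the supervisor's real job

# Toy usage: a "copy editor" agent whose output is checked by a trivial rule.
editor = Agent("copy-editor", run=lambda text: text.strip().capitalize())
print(supervise("fix this sentence", [editor],
                check=lambda out: 1.0 if out else 0.0))
```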

Speaker B

Well, I think, coming from the information systems world, the systems development lifecycle class is almost what you described, right?

Speaker B

Those are tasks, those are steps that we were very much doing with man-made, human-built systems.

Speaker B

But as we begin piecing together these off-the-shelf systems, these AI agents working together, it fits within the same framework as a lot of what I believe is already being taught.

Speaker B

It's just pivoting from we're developing a new ERP system to we're utilizing these AI chatbots to do different things.

Speaker B

So it really is a pivot.

Speaker B

It may change kind of the framework we might follow, but it's similar thinking to what we've thought about for years.

Speaker A

So, Rob, for the listeners who are not IS people, an ERP is an enterprise system, kind of like ours, which is Workday.

Speaker A

I'm sure you guys have your own systems that kind of do lots of stuff around the university.

Speaker A

So give us the 30-second version of what the SDLC is.

Speaker B

So the Systems Development Lifecycle is a process that you go through, from initial conceptualization, to figuring out what the system should do, to how you implement it, to how you evaluate and test it to see if it works, to ultimately how you deploy it and then how you monitor it.

Speaker B

The initial version was seven steps, and there are various variations of that, but it really is: how do we take what we want to do, figure out how we're going to do it, do it, and then evaluate whether it's doing what we thought it would, through a very systematic process.

Speaker A

Right.

Speaker A

It's kind of step by step.

Speaker A

You start with figuring out what it needs to do and go all the way through maintaining the system.

Speaker A

Okay.

Speaker A

So just want to make sure we kind of leveled that out a little bit.

Speaker C

I want to give you an example of something that is really related to what you said.

Speaker C

But there is an example that I can use, and I try to use this in my class.

Speaker C

Right.

Speaker C

And we actually have SIEM systems trying to monitor thousands of logs, events, or things that are happening on every machine.

Speaker C

Right.

Speaker C

And then, of course, trying to identify traffic abnormalities and trying to identify, okay, there may be a reason, we may be having an attack here or there.

Speaker C

So that's a great task that you can develop with something like AI, right?

Speaker C

But again, you need supervision, right?

Speaker C

You don't want to have false positives there or, worse, false negatives.

Speaker C

Right.

Speaker C

So you need that supervision for those agents that you can have.

Speaker C

So you can have a lot of help, because they process information more efficiently.

Speaker C

Instead of having 20 people just looking at every single log file, trying to identify if we have something which is abnormal there, we can actually use those.

Speaker C

But supervision means that we still need to be careful on what type of decisions we need to make based on the information that we are getting.
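
The kind of check being described can be as simple as baseline-plus-threshold anomaly flagging, with anything flagged routed to an analyst rather than acted on automatically. A toy sketch with made-up numbers:

```python
# Toy anomaly flag for hourly log volumes: learn a baseline, then flag
# hours that sit far above it. Real SIEM tooling is far more sophisticated;
# this only illustrates why flagged events still need a human analyst.
from statistics import mean, stdev

def flag_abnormal(counts: list[int], window: int = 24, z: float = 3.0) -> list[int]:
    """Return indices of hours whose count exceeds baseline mean + z*stdev."""
    baseline = counts[:window]
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, n in enumerate(counts[window:], start=window)
            if n > mu + z * sigma]

# Hypothetical data: a day of normal traffic, then a spike in the last hour.
counts = [100, 102, 98, 101, 99, 103, 97, 100] * 3 + [100, 450]
print(flag_abnormal(counts))  # -> [25]: the 450-event hour goes to an analyst
```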

Speaker A

Absolutely.

Speaker C

And then who makes the decision?

Speaker C

Right.

Speaker C

I mean, like, let's say that we are going to say, okay, we are definitely under attack.

Speaker C

And then we need to actually do this, this or that with those routers, or we're gonna see this or this or that.

Speaker C

That supervision also needs the same or an even higher level of understanding of what's going on, to be able to make that decision at that moment.

Speaker C

It can't be that I don't understand how traffic behaves in a network, and just because the AI is telling me this, I will decide what I do with that.

Speaker C

I have to understand how traffic goes, what type of protocols I'm using, what is a normal pattern that I can use, what are the protocols that I usually see.

Speaker C

All those elements need to be understood in order to actually be able to supervise that AI.

Speaker C

But yeah, I agree that, you know, they are helpful.

Speaker C

And having those will be extremely, extremely helpful.

Speaker A

That's kind of an important point: AI may not do 100% of the work, but what it might do is 80% of the work, let's say.

Speaker A

And that lets the human expert really focus on that 20% instead of spending a lot of time trying to filter through to figure out what the really kind of edge cases or really difficult decisions or assessments are.

Speaker A

I mean, we see this with grading.

Speaker A

This is why I'd love to have grading agents.

Speaker A

If I'm lucky enough to have one helping me, I'd tell it, I want you to do the easy ones.

Speaker A

Like if I give an Excel project, the easy ones you take care of.

Speaker A

If you have to spend more than a couple of minutes trying to figure out how to grade part of an assignment, kick it over to me.

Speaker A

So now instead of 90 to look at, I've got 10 or 12 where I can really dig in and add value.

Speaker A

Right?
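
That triage pattern, auto-grading the confident cases and routing the rest to the instructor, fits in a few lines. The grading function and its confidence score here are placeholders, not a real grading agent.

```python
# Sketch of grading triage: the agent keeps the easy submissions and kicks
# anything it is unsure about to the instructor. auto_grade is a stand-in.
def auto_grade(submission: str) -> tuple[float, float]:
    """Return (grade, confidence). Placeholder for a real grading agent."""
    # Toy signal: long answers are "easy" here; real signals would be
    # rubric matches, test results, and so on.
    confidence = 0.95 if len(submission) > 50 else 0.4
    return 90.0, confidence

def triage(submissions: list[str], cutoff: float = 0.9):
    auto, human = [], []
    for s in submissions:
        grade, conf = auto_grade(s)
        (auto if conf >= cutoff else human).append((s, grade))
    return auto, human  # the instructor only reviews the `human` pile

auto, human = triage(["short one", "a much longer, clearly worked-out answer " * 3])
print(len(auto), "auto-graded;", len(human), "for the instructor")
```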

Speaker A

So we are coming up on our close.

Speaker A

Carlos, any last comments?

Speaker A

Any anything you want to leave our listeners with?

Speaker C

I just want to thank you.

Speaker C

It was fantastic.

Speaker C

That was a fun conversation to have.

Speaker A

We're delighted you could join us.

Speaker A

Rob, any last thoughts?

Speaker B

Nope.

Speaker B

I just want to express my thanks to Carlos for coming and hanging out with us today.

Speaker B

It was a pleasure to chat with you and I look forward to future conversations where you can share with us further things you're doing with generative AI and other AI tools.

Speaker A

All right, well, this has been another episode of AI Goes to College.

Speaker A

You can find it wherever you find podcasts, or you can go to aigoestocollege.com/follow, and we've got all kinds of one-click links that'll set you right up.

Speaker A

All right, thanks everybody.

Speaker C

Thank you.