Aug. 13, 2025

AI Confessions, Energy Costs and Vibe Coding in Academia

Episode Overview: In this episode, hosts Craig and Rob discuss the evolving landscape of AI in academia, research ethics, and the surprising environmental impact of AI technologies. They also test-drive AI vibe coding, discuss agentic AI, and share practical advice for instructors, researchers, and students navigating a fast-changing technological world.

As a bonus, listen to how a border collie would explain epistemic injustice!

Key Topics & Takeaways

1. Academic Honesty & AI ("AI Confessions" in Publishing)

  • Honesty is the Best Policy: When using AI tools like Elicit or Grammarly for research, be transparent in your academic declarations. Share enough detail to feel honest, but don’t stress if you can’t recall every interaction.
  • Journals & AI Use: Journal policies on AI differ dramatically—some even ban AI use altogether. Question whether those venues align with your publishing values.
  • Editors & Transparency: Journals demand transparency from authors, but rarely provide clear guidelines or disclosure on how your AI usage will be handled.
  • Takeaway: Aim for high-level honesty in your disclosures. If in doubt, err on the side of transparency, but don’t feel compelled to provide exhaustive step-by-steps.

2. The Environmental Cost of AI

  • AI & Resource Consumption: Training large language models consumes massive electricity and water resources. Data centers may bring economic benefits but create significant energy and environmental tensions.
  • Transparency Needed: AI companies and governments should be more open about environmental impacts and strategies for sustainability.
  • User Responsibility: Everyday users contribute to AI’s environmental footprint—using AI efficiently and mindfully is everyone’s responsibility.
  • Takeaway: Educate yourself on the energy/water cost of AI and advocate for sustainable practices in tech.

3. Vibe Coding & AI-Assisted Programming

  • What is Vibe Coding? It’s prompting AI (like ChatGPT) to write software for you—sometimes even without traditional coding.
  • Practical Examples: From fun tools that explain complex subjects in dog-speak (‘Colliesplain’) to running advanced text analyses (LDA topic modeling) in Python with minimal programming knowledge (see the code sketch after this list).
  • Limits & Opportunities: Fully relying on AI for complex projects can be risky if you can’t debug or fully understand the code. However, AI-assisted coding dramatically speeds up the process and opens doors for those who wouldn’t have coded otherwise.
  • Takeaway: AI is a powerful coding assistant, especially for prototyping or smaller tasks, but a foundational understanding of the code and analysis involved remains essential.
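
For listeners who want to try the kind of LDA workflow Craig describes in the episode, here is a minimal sketch of such a pipeline. This is not the hosts' actual code: it assumes the gensim library, a plain-text file of comments (one per line), and illustrative parameter values.

```python
# Minimal LDA topic-modeling sketch (illustrative, not the episode's code).
# Assumes gensim is installed, e.g. `pip install gensim` in a Colab cell.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Hypothetical input: one free-text comment per line.
with open("comments.txt", encoding="utf-8") as f:
    docs = [line.lower().split() for line in f if line.strip()]

dictionary = Dictionary(docs)                         # word <-> id mapping
dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop very rare/ubiquitous words
corpus = [dictionary.doc2bow(doc) for doc in docs]    # bag-of-words vectors

# "Buckets" = num_topics; 15 is just a starting point to iterate on.
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=15, passes=10, random_state=42)

for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)
```

A real project would add stop-word removal, lemmatization, and other preprocessing, which is exactly the part the hosts say you still need to understand.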

4. Agentic AI & Task Automation

  • What’s Agentic AI? Tools that not only complete tasks but can string together sequences of tasks or collaborate with other agents.
  • Real-World Use: The hosts discuss planning conference materials and lifelong learning using agentic AI, noting it can handle much of the “grunt work” but still requires human direction for nuanced judgement.
  • Governance Cautions: The delegation of decisions to AI agents (especially in areas like applicant screening) can lead to ethical and legal issues if not managed carefully.
  • Takeaway: Embrace AI agents for efficiency, but institute proper oversight and understand the governance and ethical implications.

5. Navigating AI Tools/Platforms

  • Emerging Tools: New features like ChatGPT’s Agent Mode, Study & Learn, and Gemini’s Guided Learning are making AI more accessible and interactive for learners and educators.
  • Practical Use: Satisficing—choosing a tool that works well enough rather than chasing constant upgrades—can save time and reduce frustration.
  • Institutional Policies: Heed privacy regulations (like FERPA in the US) when using AI with student or confidential data. Many universities approve only specific platforms.
  • Takeaway: Test new offerings, pick what works for you, but remain vigilant about data privacy and security.

For all things AI Goes to College, including the podcast, go to https://www.aigoestocollege.com/.

Email Rob - Rob.Crossler@AIGoesToCollege.com

Email Craig - Craig@AIGoesToCollege.com

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:42 - Introduction to AI Education

08:36 - Ethical Considerations in AI Usage

09:41 - The Impact of Technology on Environmental Concerns

16:30 - Explaining Epistemic Injustice through a Dog's Perspective

28:25 - The Impact of AI on Coding and Data Analysis

34:46 - The Role of Trust in Agentic AI

40:31 - Navigating New Technologies in Education

Transcript
Speaker A

Welcome to another episode of AI Goes to College.

Speaker A

And as always, I am joined by my friend, colleague, and co-host, Dr. Robert E. Crossler from Washington State University, where it is probably much more comfortable than it is here in northeast Louisiana.

Speaker B

Yeah, we're in the 90s today, Craig.

Speaker B

We just don't get the humidity that you do.

Speaker A

Oh, wow.

Speaker A

We're going to do something new today.

Speaker A

We're going to start off with a couple of listener questions.

Speaker A

The first one comes from Rosetta, and I'm going to just read this.

Speaker A

I would like to ask a question that may be relevant to many academics, and if you haven't covered it yet on your podcast, which we haven't, I hope you will consider it.

Speaker A

It's about confession.

Speaker A

I submitted a paper to a journal and was asked about the use of AI.

Speaker A

While I may have used Elicit to point me in the direction of relevant articles, I discounted some of its suggestions and used other sources in my quest for relevance.

Speaker A

How much detail should I provide in confession and how will the journal react to the confession?

Speaker A

Am I taking five more Hail Marys and three Our Fathers?

Speaker A

So, in other words, is she going to have to do some sort of penance because she used AI?

Speaker A

So, Rob, what's your take on this?

Speaker B

I love this question because it brings to the front a challenge a lot of us face in that we're using AI, we're trying to stay within our own voice of who we are, but there are some efficiencies to be gained by using it.

Speaker B

And then journals all seem to have a different approach to what is acceptable and what is not acceptable for the use of AI.

Speaker B

And so I'm faced with this very question: do I go into the details, or do I simply acknowledge that, you know, in my research and in updating grammar and writing, AI was a tool that I used? I come down on erring on the side of being honest, right? That I used it to help me research, to help outline, whatever that might be; that I use Grammarly pretty heavily to help me make sure my writing is tight and those sorts of things; and I let the cards fall where they may.

Speaker B

But when I read LinkedIn from some of the editors and what they talk about for what they expect, some editors come out and say, zero AI.

Speaker B

And so it kind of begins to make me wonder, A, can they detect it?

Speaker B

And B, is that a journal I want to publish in if I can't, with a clean conscience, answer about AI in a way that I know is consistent with what that journal believes?

Speaker A

Yeah, I think unless that's a really top journal, a zero-AI policy would just take it off my list.

Speaker A

It's a frankly ridiculous stance in my view, especially since I'll bet a lot of people don't know if they're using AI.

Speaker A

But if you're using Grammarly, are you using AI?

Speaker A

Well, depends on how you're using Grammarly.

Speaker A

It's kind of ironic.

Speaker A

So journals want transparency from authors, but they're not providing transparency in what they're going to do with the information.

Speaker A

I take a little bit different approach.

Speaker A

I would talk about how I used AI at a high level, but would not try to go into too much detail because I'm not sure I could remember.

Speaker A

You know, I use AI constantly to help me develop ideas, to help me refine arguments, to point out gaps in my logic or in my writing.

Speaker A

It doesn't write anything for me and it might get me started on some searches, but it's certainly not even the majority of what I might do for the background literature.

Speaker A

So I don't know.

Speaker A

Could you even put in detail exactly how you use AI?

Speaker B

I don't think I could.

Speaker B

It would be hard to do it and to do it honestly.

Speaker B

And what I'd really like to see from journals is a list of what they don't want us to use AI for.

Speaker B

Right.

Speaker B

Be much more explicit.

Speaker B

I understand that when generative AI became a tool that was flooding the world, it was hard to really say what you could and couldn't use it for.

Speaker B

But I think there are enough use cases out there and enough information that explicit statements about what it should not be used for would help authors not feel like Rosetta feels, where, you know, it's just an open-ended question and you don't know what the trigger activities are that will get you blacklisted on that particular article.

Speaker A

Yeah.

Speaker A

And I, I mean, I don't even know how you draw lines.

Speaker A

I mean, it's kind of a parallel to what we face as instructors.

Speaker A

You know, exactly what guidance do we give to students?

Speaker A

Although I think we are doing a better job than journals are right now.

Speaker A

My answer would be enough to be honest and no more.

Speaker A

My concern about putting in too much detail is you may leave something out, and that could give the appearance of being dishonest even though it was just an omission, because we co-produce with these tools and we don't really know exactly how we used AI.

Speaker B

Yeah, and I think one of the things too, going back to authors and their roles and responsibilities in this.

Speaker B

I would argue, and I'm going to make up a number here, that 80% of authors are going to have this exact same dilemma that we're having, where it's like, okay, we used it, we did the work, here's how it is.

Speaker B

But there's still 20% of authors, and maybe that's a high percentage, who are gaming the system, who are using the tools in unethical ways.

Speaker B

And in an effort to catch those that are, the rest of the author community gets to suffer a little bit from figuring out what's going on.

Speaker B

How are things inadvertently affecting those people who are trying to be honest authors utilizing the tools in a reasonable way?

Speaker A

Yeah, and I'd put some of this back on the journals.

Speaker A

If they're not doing a good enough job of reviewing and vetting these papers to tell if it's really produced by AI, then they've got other problems going on.

Speaker A

And I know those papers get through, but it's kind of easy to tell if AI is just writing it and it's going to be crappy quality research anyway.

Speaker B

Yeah, yeah, no, it's an interesting world that we're in and I do look forward to how it progresses through there.

Speaker B

But I think a lot of players should be having this conversation.

Speaker B

Whether you're an associate editor, a senior editor, or especially an editor-in-chief, as you get a chance to be in the room having these sorts of conversations, I think bringing more voices to the table is important.

Speaker B

I talked to one of my colleagues who's an AE at a journal, not in information systems, and was asking, what are they doing about generative AI and those sorts of things in their journals.

Speaker B

And the response I got was, I've not heard any conversations.

Speaker B

Right.

Speaker B

So even people who are in the room for the editorial meetings aren't necessarily hearing what's going on.

Speaker B

So it does make you wonder where the conversations are happening.

Speaker A

You know, I think for some of the publishers, it's the overall publisher, the Taylor & Francis kind of folks, that are setting some of the policies.

Speaker A

But it's a good question.

Speaker A

I'm not sure we have a perfect answer, but I will reiterate: put in enough detail to make yourself feel honest and okay about it, but then stop.

Speaker B

So the other thing I would encourage people to think about, and this is an ethical part of it as well, is many of these large publishers are selling the papers back to these companies to seed and to be a part of the database for the large language models.

Speaker B

So when I see us struggle with what do I use it for?

Speaker B

How do I use it?

Speaker B

And then see the papers basically being a revenue source for the publishers through AI.

Speaker B

It almost seems paradoxical.

Speaker B

I think that is something as authors that we need to start looking at and saying, how do we feel about that?

Speaker B

And where is our agency in all of this?

Speaker A

You didn't get your cut, you didn't get your check this month.

Speaker B

I'm still waiting on that, Craig.

Speaker B

Maybe they sent mine to your house.

Speaker A

Could be.

Speaker A

I'll double check.

Speaker A

Yeah, that's a good point.

Speaker A

All right, let's move on to the next one, which is completely different, but also touches on ethics.

Speaker A

And this one is from Gene.

Speaker A

I'd love to hear your thoughts on the energy resources required for AI and how AI companies are addressing them.

Speaker B

That's a great question.

Speaker B

I'll take first swing at that one.

Speaker B

So, no, I've thought about this a lot and I actually was approached to see if I wanted to be interviewed for a news article about this very topic, but I declined because I didn't know enough to feel like I could be an expert on the news about this sort of topic.

Speaker B

But the thing about large language models and especially training them is they require a lot of electricity and they require a lot of water and it hasn't been talked about a whole lot.

Speaker B

I do know communities are turning down permits for data centers because of the environmental concerns.

Speaker B

A lot of times these are being put in places that are currently experiencing droughts.

Speaker B

And, you know, that trade off between the environment and our ready access to data is a significant challenge.

Speaker B

And I don't think we talk about it enough.

Speaker B

I think this needs to come to the forefront and be part of the conversations.

Speaker B

And this is where I'd like to see it change. I don't know if government needs to step up and force these conversations, or if businesses that are looking at using large language models should begin speaking more about how they're balancing green resources in a way that helps them be more efficient with data.

Speaker B

Because one of the things that really has shocked me is how little we've talked about green energy and those sorts of things from businesses.

Speaker B

As soon as generative AI, you know, hit the conversation, it's almost like it got turned off as something that we talked about.

Speaker B

I think pendulums swing one way or the other and go back and forth over time.

Speaker B

But I think we have moved very far away from talking about the impact on our environment as we're going full sail into what technology is doing.

Speaker A

And I agree totally.

Speaker A

It's a significant concern that I don't want to say it's not being addressed, but it probably isn't being addressed as publicly or as expansively as it ought to be.

Speaker A

We're getting a huge data center not far from where I live.

Speaker A

We don't really have the drought problem that a lot of areas have, but there's a tension.

Speaker A

So that's going to be a significant economic boom for the region.

Speaker A

You know, northeast Louisiana is not the most wealthy area in the country.

Speaker A

And so it's a tension, you know, that's going to bring a lot of jobs, not just with the development but with the infrastructure that's going to go into place.

Speaker A

The residents are going to benefit to some extent from the infrastructure.

Speaker A

So there are a lot of positives, but the energy use is going to be massive.

Speaker A

I know one of the companies, Facebook or Amazon or somebody, is trying to spin up Three Mile Island again.

Speaker A

So I think nuclear could be a big part of this.

Speaker A

That's something we shut down in the US a long time ago.

Speaker A

But other parts of the world are relying on it.

Speaker A

Frankly, I'm more concerned about water than I am about the electricity aspect of it.

Speaker A

But I am not an environmental scientist, so I really don't know what I'm talking about here.

Speaker B

Yeah, one thing from water that I've seen that's encouraging and what they're doing is a lot of times when these new data centers are being built, they're using the water to help heat other places, governmental buildings and to help the community with the fact that they're using that water.

Speaker B

And a lot of times they try to make that water self-contained so it doesn't evaporate and they can recycle and reuse it.

Speaker B

So it's not just a waste of water that you see in so many different places.

Speaker B

So I think being purposeful about that matters. And going back to what you said, the fact that you think people are doing something about this but we don't hear about it enough suggests that we just aren't being transparent enough as a society about what we're doing.

Speaker B

And I think a lot of times, in the void of information, we assume the worst.

Speaker B

And so I would encourage organizations, I would encourage government bodies that can force organizations to do things through legislation to encourage and require more transparency about this so we can see the full picture of, you know, what are the GDP gains, if you will, what are the productivity gains that we're getting from using the technologies.

Speaker B

But what are the true costs to other aspects of society?

Speaker A

Right.

Speaker A

Well, and I'll put some of this on the users.

Speaker A

Just like if you order things from Amazon, you get a box.

Speaker A

If you use a lot of generative AI, you're contributing to the overall problem.

Speaker A

So this is going to sound really weird, but I've always been very polite to the chatbots, and I'm starting to rethink that.

Speaker A

You know, I have no idea if saying thanks really has any kind of material impact.

Speaker A

I mean, I'm sure my one use of it doesn't matter, but, you know, in the aggregate, maybe it does.

Speaker A

I don't know.

Speaker A

So, you know, this is going to be a little bit of a stretch, but we have a lot of paper mills here.

Speaker A

Forestry products are one of our biggest agricultural resources in this region.

Speaker A

A lot of them go to Amazon boxes.

Speaker A

A lot of trees, big pine trees, get cut down every year so that you and I can get our Amazon deliveries.

Speaker A

And so, you know, is this just a different version of that same sort of thing?

Speaker A

I don't know.

Speaker B

Yeah, it might be.

Speaker B

I mean, recycling's kind of been one of those big things that, you know, a lot of people are encouraged to recycle those cardboard boxes so they can be made into different cardboard boxes or different paper products.

Speaker B

And so what is the analogy in the world of large language models and data centers? What is the equivalent of recycling, and what does that look like?

Speaker B

And how do we begin to learn about that as appropriate societal behavior?

Speaker A

That's a good point.

Speaker A

But one last point on this is there's a lot of pressure on the AI providers to reduce the computational costs because they're huge.

Speaker A

I mean, forget about the environmental cost for a second.

Speaker A

The financial cost of running these data centers is massive.

Speaker A

And I think that's one of the things that OpenAI and Google and Anthropic and the rest are working on is trying to lower the computational costs.

Speaker A

One thing that I think will happen is some of this will get pushed onto devices in these mini models that a lot of the big companies are working on.

Speaker A

So I don't know, Gene, I'm not sure if we gave you a good answer or not.

Speaker A

You wanted to hear our thoughts and you did.

Speaker A

Maybe you're wishing you hadn't, but that's where we are.

Speaker A

Right.

Speaker A

Anything else on that one, Rob?

Speaker B

Nope, I think we're good.

Speaker A

All right, let's talk about another controversial topic in a little bit of a different way.

Speaker A

Vibe coding.

Speaker A

So, Rob, when I say vibe coding, what do you think about?

Speaker B

I think about just talking to ChatGPT and asking it to create code for you, and you do it without writing any code.

Speaker A

All right.

Speaker A

Yeah.

Speaker A

So that's the basic idea.

Speaker A

You talk to some large language model and it will either just write the software for you or it will give you the code that you can copy and paste into another environment and come up with something.

Speaker A

So I want to give two very different examples that I played with over the last couple of weeks, and the first one I'm very proud of.

Speaker A

It's called Colliesplain.

Speaker A

Rob.

Speaker A

What?

Speaker A

Whoops.

Speaker A

And on cue, one of my collies is barking.

Speaker A

So I know I tried to send this to you and it didn't work out.

Speaker A

So what do you think Colliesplain is?

Speaker B

Well, if I were to take the concept of mansplaining, I would say it's similar to that, but it is a collie explaining to you how the world works.

Speaker A

Yes, that is right.

Speaker A

So I just wanted to play around with Google's AI Studio.

Speaker A

And so I said, write a program that explains any topic as if a border collie is explaining it.

Speaker A

And so I'm going to read one to you, which makes for great audio.

Speaker A

So we'll see how this goes.

Speaker A

I don't know.

Speaker A

So I ask it to explain epistemic injustice.

Speaker A

All right?

Speaker A

Which is basically the idea that someone is disadvantaged and harmed in their capacity as a knower.

Speaker A

And so here's what it says.

Speaker A

Woof, woof.

Speaker A

All right, let's herd these thoughts.

Speaker A

I've got it.

Speaker A

I've got it.

Speaker A

Epistemic injustice.

Speaker A

It's like when my human thinks I'm just barking at a leaf, but I'm actually barking at a squirrel that's plotting to steal my squeaky ball.

Speaker A

They dismiss my truth because they think they know better or because I'm just an excited dog.

Speaker A

Okay, so picture this.

Speaker A

You know something important.

Speaker A

Maybe you just know the best way to get all the sheep into the pen or where the really good treat is hidden.

Speaker A

You have knowledge, but someone doesn't believe you or doesn't even understand what you're trying to say.

Speaker A

Not because your knowledge is bad, but because of who you are.

Speaker A

Maybe you're a young pup or you have a funny accent, or you're just different from the dogs they usually listen to.

Speaker A

So your knowledge gets ignored or your voice isn't heard fairly.

Speaker A

It's like I'm trying to explain with urgent barks and paw gestures that the cat is on the counter, but my human just says, Patches.

Speaker A

You're just being silly.

Speaker A

They're not listening to my valid warning because they have a preset idea about me.

Speaker A

My knowing is dismissed.

Speaker A

It's when your ability to know or understand things or to share that understanding is unfairly put down or ignored because of some prejudice or stereotype about you.

Speaker A

It's like not letting a perfectly good border collie run the agility course just because someone thinks all collies are too obsessed with Frisbees.

Speaker A

It's unfair.

Speaker A

And it means good information, like where the squirrel is gets lost.

Speaker A

Now can we go chase something?

Speaker A

My brain is buzzing.

Speaker A

And then it's got a little note at the bottom that says, brought to you by AI Studio and a good dog.

Speaker A

Now, okay, that's kind of silly, right?

Speaker A

But I want to point out that it's pretty right. That's a pretty good explanation of the basics of epistemic injustice.

Speaker A

So I don't know.

Speaker A

Are you a border collie guy?

Speaker A

I love border collies.

Speaker B

No, we're a golden retriever guy.

Speaker B

I wonder how a golden retriever would explain that.

Speaker A

But with slobbery tennis balls for sure.

Speaker A

Without a doubt.

Speaker B

Yep.

Speaker B

So what I'd be curious about, and this is where I've been looking at vibe coding, is the argument over whether this is going to replace jobs.

Speaker B

You know, are computer scientists going to be replaced by vibe coding?

Speaker B

I was reading an anecdotal story of one expert who spent three to four months playing with vibe coding and ultimately gave up because it would create something nice like you just described, very simple, you know, on that initial prompt.

Speaker B

And whenever he tried to use vibe coding to alter his code, it would break something else.

Speaker B

So it would add the new feature, but it would break something else.

Speaker B

So it would always give him something, but it never gave him something that was completely workable.

Speaker B

And then as a knowledgeable programmer, he would spend more time going back in trying to fix those things that got broken than he would have if he had just added the new feature himself.

Speaker B

And so that's where I see some real challenges with vibe coding.

Speaker B

If you don't know how to be the person who can go in and fix things, or don't know when it's a good idea to use the vibe coding approach and when it's a better idea to go in and actually write the code yourself.

Speaker A

Yeah, I think that's a really good take on it.

Speaker A

Although I would point out that from what I'm reading, maybe not fully vibe coding, but AI-assisted coding makes programmers much more efficient.

Speaker B

100%.

Speaker B

And I think of how I used to code when I had the opportunity to code earlier in my professional career.

Speaker B

And Google was the source of that.

Speaker B

Right.

Speaker B

I wouldn't know how to do something, but I got really good at writing Google queries to go find it and then come in and apply it.

Speaker B

And as I play with some things in generative AI technologies, it gets me that information so much faster and then allows me to spend my cognitive efforts not in Google searching, but in getting that data or that knowledge and then applying it.

Speaker B

And so I think that does change the world of how we learn to code and so on.

Speaker B

But it's going to be important, I think, for the foreseeable future to understand what your code is doing so you can get in there and test it, make sure it's not doing something unintentional, and modify it as needed.

Speaker A

Yeah, I think that's right.

Speaker A

And this kind of full vibe coding is good for small things.

Speaker A

Where I think it's got a lot of potential is if you want a program to do one little thing and you don't really know how to code, there's no good way to get that done, or there was no good way to get that done.

Speaker A

But if you have a little bit of knowledge of what your task is and kind of how things work in general, I think you can vibe code small things. Maybe something that gives you a summary of your week's calendar and goes out and pulls up relevant emails is the kind of thing that you could do.

Speaker A

Are you going to deploy that to 10,000 users?

Speaker A

Maybe not, but it could work for you.

Speaker B

Well, where I see this being super helpful, too, is as an MIS professor: if you understand the business problem, you understand what you need to do to iterate, and you even have an example of what something could do from the data that you've collected from the business, then you can get there way, way quicker, and without having to coordinate with the programming wizard who would otherwise be the person to put those things into place.

Speaker B

So I see a lot of potential for systems analysis, sorts of people that are helping to launch new business solutions and being able to do so very rapidly and very quickly.

Speaker A

Yeah, especially things like prototyping could be huge.

Speaker A

But I want to give you another example that I think is probably much more interesting, if less fun.

Speaker A

So have you ever heard of latent Dirichlet allocation?

Speaker B

No.

Speaker A

Okay, so basically, and sorry if we geek out here for just a second, it's kind of like qualitative factor analysis.

Speaker A

It's a text analysis algorithm that will analyze a big chunk of text and try to put comments or whatever it is you're looking at into groups.

Speaker A

So it'll take this big mess and try to make sense out of it.

Speaker A

We're working on a revision for a fairly important paper for us, and we did a bunch of coding by hand.

Speaker A

So we had some comments and we went through the comments, had hundreds of them, and we tried to classify the comments into different categories.

Speaker A

I can't give too much away here because the paper's under revision, but like, let's say that we were looking at something teaching oriented and you've got a bunch of student comments and you want to put it into buckets.

Speaker A

Like, the assignment was too hard, the assignment was unclear, the assignment was easy.

Speaker A

So you can kind of make sense of the comments.

Speaker A

Is that making sense?

Speaker B

Yep.

Speaker A

Okay, so this is an algorithmic way to do this, and it's really complicated, and there's a lot of preprocessing that has to go on.

Speaker A

I understood the basics of the method, but I had no idea how to code it.

Speaker A

None.

Speaker A

I don't know Python.

Speaker A

I quit coding a long time ago, I mean, decades ago.

Speaker A

So I don't really know the language, but I thought, okay, I'm going to see what I can do with this.

Speaker A

So I worked with a combination of ChatGPT and Claude, mostly ChatGPT, walking through this code, putting it into Google Colab, and basically Google Colab has an interface where you can load your data and then copy and paste or write your code in another window.

Speaker A

ChatGPT would tell me, all right, here's the first thing you need to do: load these packages that you need for Python, the little subprograms.

Speaker A

And so I'd put that in.

Speaker A

Well, I might get an error message.

Speaker A

And then Google Colab would say, hey, you've got this error, do you want me to fix it?

Speaker A

Sure, of course I want you to fix it.

Speaker A

And it would fix it.

Speaker A

And then I'd go back to ChatGPT and do the next thing.

Speaker A

And in the course of, I don't know, a couple of hours, I had this analysis done.

Speaker A

I mean, it was shocking.

Speaker A

And this was thousands of records of data.

Speaker A

I captured the code so I can document what we did.

Speaker A

I understood what the code was doing, but there was no way I could have done that without the AI tools.

Speaker A

And like you said, it's all reproducible, it's all captured, it's all documented.

Speaker B

And just to be clear, and this kind of goes back to what we talked about earlier: what you did is you used ChatGPT to help you write Python code, and then it was that Python code that did the analysis.

Speaker B

So it wasn't generative AI doing the analysis for you, it was generative AI helping you to learn the programming language.

Speaker B

You needed that.

Speaker B

Anyone who was doing this sort of a data analysis would be doing the exact same type of code.

Speaker B

And so, much like we would have done with SPSS or SAS, it just made you much, much more efficient at getting that code written, to be able to use Python as a tool to help you.

Speaker A

That's right, except for one little thing.

Speaker A

It not only made it more efficient, it made it possible, which it wouldn't have been otherwise.

Speaker A

I mean, you know, I could have gone and hired a grad student, that kind of thing, to write the code.

Speaker A

But you raise a good point.

Speaker A

If I had asked ChatGPT to do the analysis, it would not be reproducible.

Speaker A

I would have no idea how it got to where it got.

Speaker A

But now I can document all the decisions, I can document the code, I can document everything.

Speaker A

Just like if you were doing this in SAS, you'd have your SAS code.

Speaker A

SAS, for those of you who don't know, is just a statistical package.

Speaker A

It's pretty widely used, so that's pretty cool.

Speaker A

I was shocked by how well it worked.

Speaker B

Yeah, it's a great solution.

Speaker B

And it goes back to the same example we were talking about before, where it used to be, you might have been able to do some Google searches and get people who say, well, here's how you load your data, here's the code that you need to use.

Speaker B

You'd have to read through it, process it, and say, okay, let me go and give that a try.

Speaker B

And you may have been able to accomplish a similar thing using Google searches, but it probably would have taken you a full week instead of something you were able to do in an afternoon.

Speaker A

Absolutely.

Speaker A

And I would have given up at that point.

Speaker A

I would have gone and hired somebody to try to do this.

Speaker A

A couple of interesting aspects of this I want to make sure we point out.

Speaker A

One is that even though I don't know Python, I know enough about coding to understand what was going on.

Speaker A

So with the Colliesplain thing, I just said, write this code.

Speaker A

I didn't even look at it because it didn't matter.

Speaker A

But with this other analysis, this LDA text analysis, I needed to know what was going on.

Speaker A

I needed to understand it.

Speaker A

And so I was very careful to make sure that I knew what the code was doing, even though I couldn't have written it.

Speaker A

You know, Python's not the toughest language to interpret.

Speaker A

You can kind of tell if you've done any programming at all.

Speaker A

You can kind of tell what some of it's doing.

Speaker A

So that helped a lot.

Speaker A

Even though I don't know that particular language, I did know enough about coding to kind of know what was going on.

Speaker A

The second thing is I understood the underlying analysis.

Speaker A

So even though I couldn't write the algorithm to do this text analysis, I understood the basics of it and could interpret what it was telling me.

Speaker A

Like, you have to make this choice about how many different buckets you want to put the text into.

Speaker A

And I could decide, okay, do we need 15 buckets or do we need 35 buckets?

Speaker A

I knew enough about what was going on to make that decision instead of just having the computer make it for me.
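
[A hedged sketch of the "how many buckets" decision Craig describes: libraries such as gensim can score candidate topic counts with a coherence metric, leaving the final judgment to the researcher. The comments below are placeholders, not data from the paper.]

```python
# Sketch: comparing candidate topic counts ("buckets") by coherence.
# Placeholder data; swap in the real comments before trusting the scores.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

comments = [
    "the assignment was too hard",
    "the assignment instructions were unclear",
    "this assignment was easy and fun",
]  # hypothetical student comments
docs = [c.lower().split() for c in comments]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

for k in (15, 25, 35):
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=k, passes=10, random_state=42)
    # u_mass coherence works from the corpus itself; closer to zero is better.
    score = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                           coherence="u_mass").get_coherence()
    print(f"{k} topics -> coherence {score:.3f}")
```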

Speaker A

So I think this is going to change how we do a lot of analysis.

Speaker A

Once people understand what's going on, it puts a greater array of tools into people's hands, which is a good thing and a bad thing.

Speaker A

You know, it's a good thing for obvious reasons.

Speaker A

It's a bad thing because if you don't know what you're doing, you don't know what you're doing.

Speaker B

Well, the other thing I think might change as this becomes more available is, I know some people who are co-authors on a paper because they knew how to do exactly what you were able to get Python to do for you through these sorts of tools.

Speaker B

Are those same people going to have the same value in what they bring to a paper if that was their contribution in co-authorship?

Speaker B

So it could really change some dynamics on author teams and various different things as certain skill sets are prioritized or unprioritized, if you will.

Speaker A

Yeah, yeah, I spent some time on this, but it was a really interesting exercise.

Speaker A

I'm glad I did it.

Speaker A

So for listeners who have not ever done any coding, check out Google's AI Studio and you can write some fun little stuff pretty easily.

Speaker A

So, all right, we'll see how that goes.

Speaker A

So we're going to mark down Rob as a vibe coding skeptic and me as maybe a little bit more of a believer.

Speaker A

But still, I think I'm skeptical about vibe coding taking over the world, but we'll see.

Speaker B

One note to educators out there.

Speaker B

I think the challenge that we need to face in the midst of vibe coding and other sorts of really cool things like this that are coming is preparing the next generation of students and professionals to have the level of expertise they need in a world where the basic-level stuff is easy to do.

Speaker B

And I honestly don't know what that solution is, but I think the big challenge that lies before us is how we ensure we're preparing future workers not to miss out on those entry-level growth areas, as technology is just incredibly more powerful than it was 5, 10, or 20 years ago.

Speaker A

Yeah, that's a great point.

Speaker A

And I think we need to focus on how to let humans do what humans are good at and technology do what technology is good at.

Speaker A

So I think the phrase we've used before is moving humans up the stack instead of maybe writing all of the code, engineering the solution, understanding how to apply computer code to a problem, which is kind of what I did with my text analysis example.

Speaker A

But, you know, these tools are evolving very rapidly, so I think we're going to be in a pretty uncertain environment for a number of years.

Speaker A

All right, anything else?

Speaker B

Nothing on this one.

Speaker A

Okay, so lots of ChatGPT OpenAI news and a little bit of Gemini news over the last couple of weeks.

Speaker A

Have you played with Agent Mode?

Speaker A

Do you have it yet in ChatGPT?

Speaker B

I've made an agent in ChatGPT, but it was in version 4, and I've done it in teams as well, but I haven't done the latest yet.

Speaker B

I want to see if what I created for class in version 4 works in the latest version.

Speaker A

So I think if you have a paid version, you should have Agent mode now.

Speaker A

So basically what Agent Mode does is you give it a task and it creates its own little desktop and goes out and does that task, and you kind of have to play around with it.

Speaker A

It's not something we can describe very well on audio, but what I did, we're going to have a conference in Flagstaff in May of 2026, and I said, well, let's test this out and see if it can come up with a visitor's guide to Flagstaff.

Speaker A

And Flagstaff is an amazing place to be in the summer, so there's lots to do right around there.

Speaker A

And since I lived there for six years, I kind of know what a lot of those things are.

Speaker A

And within a matter of probably five or ten minutes of my time, and maybe a half an hour of ChatGPT's time, it had this visitor's guide that I literally just downloaded and put into a Google Doc, and I'll go in and edit it when we get a little bit closer to the conference.

Speaker A

So it was pretty remarkable how well it did with that task. It's kind of cool because you can see it working, and if it gets stuck, like if it gets blocked by a paywall, it'll try to find a way around that barrier and just keep going.

Speaker A

It may ask you to take over from time to time, like if you've got to log in to some system.

Speaker A

I don't know that I would deploy this for real tasks that I'm going to have to do over and over and over again.

Speaker A

But for one off tasks, it's pretty good.

Speaker A

I mean, it'll plan trips. Oh, and I had it track down some healthcare resources.

Speaker A

It's pretty interesting how well it works.

Speaker A

So the other thing I did which might be more relevant to our audience is I'm on sabbatical and one of my goals is to learn more about agentic AI.

Speaker A

So AI agents are AI tools that go out and do a task.

Speaker A

Agentic AI is groups of those agents working together and I think that's where the future is going.

Speaker A

We've talked about this before, so I want to know more about it.

Speaker A

So I said, create a learning plan for agentic AI, and I gave it a little bit more background, kind of that I wanted a business focus rather than a technical focus, but that I wanted to understand the technology. In maybe 25 or 30 minutes, I had a pretty nice 24-week learning plan, and I think I shared that with you, Rob.

Speaker A

Did you have a chance to look at it?

Speaker B

Yeah, I looked at it.

Speaker B

It was pretty detailed.

Speaker B

I was impressed with the resources it had and I was like, I'd take this class.

Speaker A

Yeah, yeah, it was.

Speaker A

I mean, the next thing to try is to have it create an entire class, which I think it could probably do.

Speaker B

Yeah.

Speaker B

The one thing I would like to see talked about in this world of agentic AI and having agents actually do tasks for you is the role of trust in all of this.

Speaker B

And so I put my hat on as a department chair.

Speaker B

If I used agentic AI to do a lot of my department chair job of managing people, I would probably create struggle points with people who no longer appreciated my leadership because it was the AI tools that were making decisions and doing various different things.

Speaker B

So finding that balance between keeping the human in the process and just letting the agentic process take place is important, I think, because if you violate trust, that's hard to win back.

Speaker A

But then you wouldn't have to be chair anymore.

Speaker A

So there's that.

Speaker B

There is that.

Speaker A

So yeah, you raise a really good point.

Speaker A

So if I was going to put a number on it, I would say that most of my tasks have gotten me about three quarters of the way there.

Speaker A

But I mean, think about what it would take to go out and find the resources to put together a credible learning plan for anything, agentic AI or whatever it might be, by using Google searches.

Speaker A

I mean, it would take hours and hours and hours.

Speaker A

Now I've got this learning plan.

Speaker A

I'll go in and look at the individual resources, maybe work with ChatGPT or Gemini to replace some of them.

Speaker A

But what would have probably taken me several days to do, I can now probably do in a couple of hours.

Speaker B

Right.

Speaker B

And the thing that I think is important in that is you're still the decision maker.

Speaker B

Right.

Speaker B

Where I struggle is when people say that AI is going to make decisions for us.

Speaker B

I'm someone who does research in the cybersecurity space, and even in the information privacy space.

Speaker B

When you start thinking about turning over decision making and a final decision making process to a machine, that to me gets really, really scary for ethical reasons, for security reasons and all sorts of things.

Speaker B

So that's where I am curious of where this is going to go is are we going to get to that point of decision making or is it just going to speed up the process of researching and gathering information so you can reliably make decisions as the human in the process?

Speaker A

Yeah, I think it's going to be a hybrid where some of the more straightforward decisions get made by AI, where we can get into some of those semi structured decisions that are kind of tough now to do with technology.

Speaker A

And what I'm thinking is that for quite some period of time, the best we're going to get is that some of these agentic AI networks will do some of the tasks and then kick some over for human review.

Speaker B

Right.

Speaker B

So Craig, I want to point out a lawsuit that's happening right now because of agents. Currently, Workday is being sued, and it's turning into something similar to a class action lawsuit, because of discrimination in reviewing people's resumes and the fact that there is potentially age discrimination.

Speaker B

So it's being litigated; nothing's been decided.

Speaker B

But it's the idea that there was bias in those algorithms and people weren't actually having their resumes reviewed.

Speaker B

Qualified people who were getting rejection after rejection, all of whom were over 40 years old.

Speaker B

And I think that's the thing: as we rely on agents to make decisions, to do things.

Speaker B

As the person making that decision, you begin bearing the risk of the decision that it makes.

Speaker B

And so what are the unintended consequences of gaining those efficiencies, of doing something more rapidly, if there's something incorrect in how those algorithms work? I don't think it means don't do it.

Speaker B

But I think it means as you use these tools, you need to go in with your eyes wide open about what you're trusting it to do on your behalf.

Speaker A

Well, that's interesting.

Speaker A

So first, I feel very depressed that somebody substantially younger than me is being discriminated against because of age.

Speaker A

So let's just get that out of the way.

Speaker A

But I think maybe the failure there is in governance.

Speaker A

So it's not so much the technology, it's that there wasn't enough human oversight, there wasn't enough testing.

Speaker A

I mean, it seems like that would have been a pretty easy thing to have a big test bed to see that sort of thing happen.

Speaker A

Yeah, you'd think, but I'm guessing that didn't happen.

Speaker B

And as the court case plays out, we'll find out if this truly was discrimination or not.

Speaker B

And Workday says, wait till we can present our evidence.

Speaker B

And they're denying there was anything going on there, but they're going to spend a lot of money on lawyer fees being able to prove this.

Speaker B

And yeah.

Speaker B

So what does their governance look like?

Speaker B

How did they test it?

Speaker B

How did they do this?

Speaker B

Various different stuff.

Speaker B

It's going to be fascinating to watch.

Speaker B

I know.

Speaker B

I've read other articles about organizations that had entire databases destroyed by generative AI technology that was given permission over their database while doing something agentic, and so it created that security issue where data was lost.

Speaker B

And so again, we're going to learn the hard way as we go through this process of playing with the new technology.

Speaker B

But I'd be remiss not to say, hey, as you play with this, do so with guardrails and do so paying attention to what you're actually allowing technology to make decisions about.

Speaker A

Absolutely, absolutely.

Speaker A

Yeah. Governance is huge.

Speaker A

We're just starting to talk about that sufficiently.

Speaker A

One other quick comment on the workday case.

Speaker A

That probably was agentic AI in some respect, but I'll bet that was not generative AI.

Speaker A

And with generative AI, it's even worse because you don't know exactly how it's doing it.

Speaker A

You know that algorithm that they were using to screen applicants?

Speaker A

My guess is somebody could go in and look at the code and figure out what's going on, which you can't really do with generative AI.

Speaker A

So one last topic really quickly.

Speaker A

We're coming up against our time.

Speaker A

Both Gemini and ChatGPT introduced new learning functionalities.

Speaker A

Have you played with either?

Speaker A

So it's Study and Learn in ChatGPT and Guided Learning in Gemini.

Speaker A

Have you played with either one of them yet?

Speaker B

I've not played with them, but I've read about them.

Speaker B

What they can do is pretty exciting, but there are some concerns in there, too.

Speaker A

Yeah.

Speaker A

I'm a little underwhelmed in my experiences so far, but I need to dig into them more deeply.

Speaker A

I have not played with Geminis at all, although they had similar systems that were kind of standalone, that were pretty interesting.

Speaker A

With ChatGPT, after I got a few prompts in, I didn't see it doing anything differently than what I could have done without Study and Learn.

Speaker A

But that could be me not understanding the tool yet.

Speaker A

The reason I wanted to bring this up briefly is for faculty members and really anybody, any of our listeners who haven't tried these tools yet, you should probably play around with them.

Speaker A

So even if they're not suitable for doing more formal kinds of teaching tasks, they can be great for learning on your own, you know, learning something that you need to learn.

Speaker B

Yeah, and I think that's going to be one of the challenges as we enter into this next academic year: there are going to be a lot of new technologies becoming available.

Speaker B

I just got notified today that Grok 4 is available for free to everybody.

Speaker B

I haven't played a whole lot in Grok, but I'm like, oh, it's free.

Speaker B

I may go see how their Pro version compares to other versions and be able to talk more about that.

Speaker B

My son was given access to Gemini Pro as a college student for the entire year for free.

Speaker B

And so we're going to see different tools come available.

Speaker B

They're going to become available to different groups of people based on what's going on.

Speaker B

And I'm concerned about the paralysis of which one do I even use as we add more and more and more to the process.

Speaker B

And so, you know, I would encourage people to use what's working for you and talk to your colleagues about it, what they're doing.

Speaker B

Because everybody can be their own use case, their own test bed.

Speaker B

And you may be able to learn because somebody else took a different approach than you did.

Speaker B

And that's going to ramp up our speed to adoption, I think.

Speaker A

Yeah, that's a really, really good point.

Speaker A

I'm a big fan of satisficing when it comes to these tools.

Speaker A

If something's working for you, just stick with it.

Speaker A

Maybe every six months or every year, go back and look at maybe what some alternative tools are.

Speaker A

But unless you're like us and you just enjoy figuring out what the different tools are good at, I think for most users, any of the major tools are going to be okay.

Speaker A

So you can waste a lot of time trying to get that little bit better, and it probably doesn't make a material difference.

Speaker B

My last public service announcement about this, Craig, and I've been talking to my faculty here about this, is as academics, we're required to follow FERPA as well as other sorts of regulations.

Speaker B

And as you're putting data into these different tools, understand how they're using them and what they're using them for.

Speaker B

Because at the end of the day, if I do something incorrect with data that's protected, say, by FERPA, I'm responsible for it.

Speaker B

So pay attention to it.

Speaker B

And I don't say that to put fear into your life.

Speaker B

But it is a thing that, at WSU, they give us a grid of allowable technologies that their lawyers have looked at.

Speaker B

And for us here, it's Copilot.

Speaker B

It's the only one that they've signed off on as being FERPA compliant.

Speaker B

And anything outside of that, we're on our own as far as what we use it for.

Speaker B

So pay attention to that because it's important.

Speaker A

Yep, absolutely.

Speaker A

Yeah.

Speaker A

And a couple of quick rules of thumb on that.

Speaker A

First of all, don't include any data that could possibly identify a student.

Speaker A

And sometimes that's not a single piece of data, but the combination of fields can identify somebody.
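
[A small, hypothetical illustration of that point, assuming pandas and made-up column names: even with names and IDs dropped, you can count how many students share each combination of the remaining fields, since small groups are re-identification risks. This is illustrative only, not a FERPA compliance tool.]

```python
# Hypothetical quasi-identifier check (illustrative only).
import pandas as pd

df = pd.read_csv("course_feedback.csv")        # hypothetical export
df = df.drop(columns=["name", "student_id"])   # remove direct identifiers

# Fields that look harmless alone but can identify someone in combination.
quasi_identifiers = ["major", "year", "section"]
group_sizes = df.groupby(quasi_identifiers).size()

# Any combination shared by only a few students could single someone out.
risky = group_sizes[group_sizes < 5]
print(f"{len(risky)} field combinations describe fewer than 5 students")
```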

Speaker A

And the second piece of advice is if you have to really think about whether or not you should do it from a FERPA perspective for more than about 30 seconds, don't do it.

Speaker A

And by the way, for those of you who are outside the United States: FERPA is a pretty big piece of legislation, but for the context here, it basically covers what you're allowed to share about students.

Speaker A

Who's allowed to see what data about students.

Speaker A

All right, so we would love to hear your questions or comments.

Speaker A

So you can get in touch with me at Craig, that's C-R-A-I-G, @AIGoesToCollege.com, and Rob, you're Rob.Crossler@AIGoesToCollege.com, right?

Speaker A

Correct.

Speaker A

C-R-O-S-S-L-E-R. I don't think we have to spell out Rob, so.

Speaker A

All right.

Speaker A

Any last thoughts before we close?

Speaker B

As we enter into a new academic year, keep us looped into the challenges and the things you're facing, or your wins, your use cases.

Speaker B

Again, reach out to us, share these stories with us.

Speaker B

We'd love to get other people's thoughts and opinions into the show and see if we can grow our knowledge base by working together.

Speaker A

Absolutely.

Speaker A

And if any of you are attending the upcoming AMCIS conference in Montreal, look for me and say hi; I will be there.

Speaker A

All right, thanks.

Speaker A

That's it.

Speaker A

We will talk to you next time.