AI Confessions, Energy Costs and Vibe Coding in Academia


Episode Overview: In this episode, hosts Craig and Rob discuss the evolving landscape of AI in academia, research ethics, and the surprising environmental impact of AI technologies. They also test-drive AI vibe coding, discuss agentic AI, and share practical advice for instructors, researchers, and students navigating a fast-changing technological world.
As a bonus, listen to how a border collie would explain epistemic injustice!
Key Topics & Takeaways
1. Academic Honesty & AI ("AI Confessions" in Publishing)
- Honesty is the Best Policy: When using AI tools like Elicit or Grammarly for research, be transparent in your academic declarations. Share enough detail to feel honest, but don’t stress if you can’t recall every interaction.
- Journals & AI Use: Journal policies on AI differ dramatically—some even ban AI use altogether. Question whether those venues align with your publishing values.
- Editors & Transparency: Journals demand transparency from authors, but rarely provide clear guidelines or disclosure on how your AI usage will be handled.
- Takeaway: Aim for high-level honesty in your disclosures. If in doubt, err on the side of transparency, but don’t feel compelled to provide exhaustive step-by-steps.
2. The Environmental Cost of AI
- AI & Resource Consumption: Training large language models consumes massive electricity and water resources. Data centers may bring economic benefits but create significant energy and environmental tensions.
- Transparency Needed: AI companies and governments should be more open about environmental impacts and strategies for sustainability.
- User Responsibility: Everyday users contribute to AI’s environmental footprint—using AI efficiently and mindfully is everyone’s responsibility.
- Takeaway: Educate yourself on the energy/water cost of AI and advocate for sustainable practices in tech.
3. Vibe Coding & AI-Assisted Programming
- What is Vibe Coding? It’s prompting AI (like ChatGPT) to write software for you—sometimes even without traditional coding.
- Practical Examples: From fun tools that explain complex subjects in dog-speak (‘Colliesplain’), to running advanced text analyses (LDA topic modeling) in Python with minimal programming knowledge.
- Limits & Opportunities: Fully relying on AI for complex projects can be risky if you can’t debug or fully understand the code. However, AI-assisted coding dramatically speeds up the process and opens doors for those who wouldn’t have coded otherwise.
- Takeaway: AI is a powerful coding assistant, especially for prototyping or smaller tasks, but a foundational understanding of the code and analysis involved remains essential.
4. Agentic AI & Task Automation
- What’s Agentic AI? Tools that not only complete tasks but can string together sequences of tasks or collaborate with other agents.
- Real-World Use: The hosts discuss planning conference materials and lifelong learning using agentic AI, noting it can handle much of the "grunt work" but still requires human direction for nuanced judgment.
- Governance Cautions: The delegation of decisions to AI agents (especially in areas like applicant screening) can lead to ethical and legal issues if not managed carefully.
- Takeaway: Embrace AI agents for efficiency, but institute proper oversight and understand the governance and ethical implications.
5. Navigating AI Tools/Platforms
- Emerging Tools: New features like ChatGPT’s Agent Mode, Study & Learn, and Gemini’s Guided Learning are making AI more accessible and interactive for learners and educators.
- Practical Use: Satisficing—choosing a tool that works well enough rather than chasing constant upgrades—can save time and reduce frustration.
- Institutional Policies: Heed privacy regulations (like FERPA in the US) when using AI with student or confidential data. Many universities approve only specific platforms.
- Takeaway: Test new offerings, pick what works for you, but remain vigilant about data privacy and security.
For all things AI Goes to College, including the podcast, go to https://www.aigoestocollege.com/.
Email Rob - Rob.Crossler@AIGoesToCollege.com
Email Craig - Craig@AIGoesToCollege.com
Mentioned in this episode:
AI Goes to College Newsletter
00:42 - Introduction to AI Education
08:36 - Ethical Considerations in AI Usage
09:41 - The Impact of Technology on Environmental Concerns
16:30 - Explaining Epistemic Injustice through a Dog's Perspective
28:25 - The Impact of AI on Coding and Data Analysis
34:46 - The Role of Trust in Agentic AI
40:31 - Navigating New Technologies in Education
Welcome to another episode of AI Goes to College.
Speaker A: And as always, I am joined by my friend, colleague, and co-host, Dr. Robert E. Crossler from Washington State University, where it is probably much more comfortable than it is here in northeast Louisiana.
Speaker B: Yeah, we're in the 90s today, Craig. We just don't get the humidity that you do.
Speaker A: Oh, wow. We're going to do something new today. We're going to start off with a couple of listener questions. The first one comes from Rosetta, and I'm going to just read this. "I would like to ask a question that may be relevant to many academics, and if you haven't covered it yet on your podcast, which we haven't, I hope you will consider it. It's about confession. I submitted a paper to a journal and was asked about the use of AI. While I may have used Elicit to point me in the direction of relevant articles, I discounted some of its suggestions and used other sources in my quest for relevance. How much detail should I provide in confession, and how will the journal react to the confession? Am I in for five more Hail Marys and three Our Fathers?" So, in other words, is she going to have to do some sort of penance because she used AI? So, Rob, what's your take on this?
Speaker B: I love this question, because it brings to the front a challenge a lot of us face: we're using AI, we're trying to stay within our own voice, but there are efficiencies to be gained by using it. And journals all seem to have a different approach to what is and isn't acceptable use of AI. I've faced this very question. Do I go into the details, or do I simply acknowledge that AI was a tool I used in my research and in updating grammar and writing? I come down on erring on the side of being honest: that I used it to help me research, to help outline, whatever that might be; that I use Grammarly pretty heavily to make sure my writing is tight; and let the cards fall where they may. But when I read LinkedIn posts from some of the editors about what they expect, some editors come out and say zero AI. And it begins to make me wonder: A, can they detect it? And B, is that a journal I want to publish in, if I can't with a clean conscience answer about AI in a way I know is consistent with what that journal believes?
Speaker A: Yeah, I think unless that's a really top journal, zero AI would just take it off my list. It's a frankly ridiculous stance, in my view, especially since I'll bet a lot of people don't know whether they're using AI. If you're using Grammarly, are you using AI? Well, it depends on how you're using Grammarly. It's kind of ironic: journals want transparency from authors, but they're not providing transparency about what they're going to do with the information. I take a little bit different approach. I would talk about how I used AI at a high level, but would not try to go into too much detail, because I'm not sure I could remember. I use AI constantly to help me develop ideas, to help me refine arguments, to find gaps in my logic or in my writing. It doesn't write anything for me, and it might get me started on some searches, but it's certainly not even the majority of what I might do for the background literature. So I don't know. Could you even put in detail exactly how you use AI?
Speaker B: I don't think I could. It would be hard to do it and to do it honestly. What I'd really like to see from journals is a list of what they don't want us to use AI for. Be much more explicit. I understand that when generative AI became a tool flooding the world, it was hard to say what you could and couldn't use it for. But I think there are enough use cases and enough information out there now that explicit statements about what it should not be used for would help authors not feel the way Rosetta feels, where it's just an open-ended question and you don't know what the trigger activities are that will get you blacklisted on that particular article.
Speaker A: Yeah. And I don't even know how you draw lines. It's kind of a parallel to what we face-to-face instructors deal with: exactly what guidance do we give to students? Although I think we are doing a better job than journals are right now. My answer would be: enough to be honest, and no more. My concern about putting in too much detail is that you may leave something out, and that could give the appearance of being dishonest, even though it was just an omission, because we co-produce with these tools and we don't really know exactly how we used AI.
Speaker B: Yeah. And going back to authors and their roles and responsibilities in this, I would argue that, and I'm going to make up a number, 80% of authors are going to have this exact same dilemma we're having: okay, we used it, we did the work, here's how it is. But there's still 20%, maybe that's a high percentage, of authors who are gaming the system, who are using the tools in unethical ways. And in an effort to catch those who are, the rest of the author community gets to suffer a little bit while everyone figures out what's going on. How are things inadvertently affecting the people who are trying to be honest authors, utilizing the tools in a reasonable way?
Speaker A: Yeah, and I'd put some of this back on the journals. If they're not doing a good enough job of reviewing and vetting these papers to tell whether something was really produced by AI, then they've got other problems going on. And I know those papers get through, but it's kind of easy to tell if AI is just writing it, and it's going to be crappy-quality research anyway.
Speaker B: Yeah. It's an interesting world we're in, and I do look forward to how it progresses. But I think a lot of players should be having this conversation, especially if you're an associate editor, a senior editor, or an editor-in-chief. As you get a chance to be in the room having these sorts of conversations, I think bringing more voices to the table is important. I talked to one of my colleagues who's an AE at a journal, not in information systems, and asked what they're doing about generative AI in their journals. And the response I got was, "I've not heard any conversations." So even people who are in the room for the editorial meetings aren't necessarily hearing what's going on. It does make you wonder where the conversations are happening.
Speaker A: You know, I think for some of the publishers, it's the overall publisher, the Taylor and Francis kind of folks, that is setting some of the policies. But it's a good question. I'm not sure we have a perfect answer, but I will reiterate: put in enough detail to feel honest about it, but then stop.
Speaker B: The other thing I would encourage people to think about, and this is an ethical part of it as well, is that many of these large publishers are selling the papers back to these companies to seed the databases for the large language models. So when I see us struggle with what do I use it for, how do I use it, and then see the papers basically being a revenue source for the publishers through AI, it almost seems paradoxical. I think that is something we as authors need to start looking at and asking: how do we feel about that, and where is our agency in all of this?
Speaker A: You didn't get your cut? You didn't get your check this month?
Speaker B: I'm still waiting on that, Craig. Maybe they sent mine to your house.
Speaker A: Could be. I'll double-check. Yeah, that's a good point. All right, let's move on to the next one, which is completely different but also touches on ethics. This one is from Gene: "I'd love to hear your thoughts on the energy resources required for AI and how AI companies are addressing them."
Speaker B: That's a great question. I'll take the first swing at that one. I've thought about this a lot, and I actually was approached to see if I wanted to be interviewed for a news article about this very topic, but I declined, because I didn't know enough to feel like I could be an expert in the news on it. But the thing about large language models, and especially training them, is that they require a lot of electricity and a lot of water, and it hasn't been talked about a whole lot. I do know communities are turning down permits for data centers because of the environmental concerns. A lot of times these are being put in places that are currently experiencing droughts. That trade-off between the environment and our ready access to data is a significant challenge, and I don't think we talk about it enough. I think this needs to come to the forefront and be part of the conversation. This is where I'd like to see, I don't know whether government needs to step up and force these conversations, or whether businesses looking at using large language models should begin speaking more about how they're balancing these green resources in a way that helps them be more efficient with data. One of the things that has really shocked me is how little we've heard from businesses about green energy lately. As soon as generative AI hit the conversation, it's almost like that topic got turned off. I think pendulums swing one way or the other and go back and forth over time, but we have moved very far away from talking about the impact on our environment as we go full sail into what technology is doing.
Speaker A: And I agree totally. It's a significant concern that, I don't want to say it's not being addressed, but it probably isn't being addressed as publicly or as expansively as it ought to be. We're getting a huge data center not far from where I live. We don't really have the drought problem that a lot of areas have, but there's a tension. It's going to be a significant economic boom for the region; northeast Louisiana is not the wealthiest area in the country. It's going to bring a lot of jobs, not just with the development but with the infrastructure that's going to go into place, and the residents are going to benefit to some extent from that infrastructure. So there are a lot of positives, but the energy use is going to be massive. I know one of the companies, Facebook or Amazon or somebody, is trying to spin up Three Mile Island again. So I think nuclear could be a big part of this. That's something we shut down in the US a long time ago, but other parts of the world are relying on it. Frankly, I'm more concerned about water than I am about the electricity aspect of it. But I am not an environmental scientist, so I really don't know what I'm talking about here.
Speaker B: Yeah, one encouraging thing I've seen on the water side is that a lot of times, when these new data centers are being built, they're using the water to help heat other places, governmental buildings and the like, giving something back to the community for the water they're using. And a lot of times they try to make that water self-contained so it doesn't evaporate and they can recycle and reuse it, so it's not just a waste of water like you see in so many places. I think being purposeful about that matters. And going back to what you said, the fact that you think people are doing something about this but we don't hear about it enough suggests we just aren't being transparent enough as a society about what we're doing. A lot of times, in the void of information, we assume the worst. So I would encourage organizations, and government bodies that can compel organizations through legislation, to encourage and require more transparency about this, so we can see the full picture: what are the GDP gains, if you will, what are the productivity gains we're getting from using these technologies, but also what are the true costs to other aspects of society?
Speaker A: Right. Well, and I'll put some of this on the users. Just like if you order things from Amazon, you get a box; if you use a lot of generative AI, you're contributing to the overall problem. This is going to sound really weird, but I've always been very polite to the chatbots, and I'm starting to rethink that. I have no idea if saying thanks really has any kind of material impact. I'm sure my one use of it doesn't matter, but in the aggregate, maybe it does. I don't know. This is going to be a little bit of a stretch, but we have a lot of paper mills here; forestry products are one of our biggest agricultural resources in this region. A lot of them go to Amazon boxes. A lot of big pine trees get cut down every year so that you and I can get our Amazon deliveries. So is this just a different version of that same sort of thing? I don't know.
Speaker B: Yeah, it might be. Recycling has kind of been one of those big things: a lot of people are encouraged to recycle those cardboard boxes so they can be made into different cardboard boxes or other paper products. So what's the analogy in the world of large language models and data centers, what's the equivalent of recycling, and what does that look like? And how do we begin to learn about that as appropriate societal behavior?
Speaker A: That's a good point. One last point on this: there's a lot of pressure on the AI providers to reduce computational costs, because they're huge. Forget about the environmental cost for a second; the financial cost of running these data centers is massive. I think one of the things that OpenAI and Google and Anthropic and the rest are working on is trying to lower those computational costs. One thing I think will happen is that some of this will get pushed onto devices, in these mini models that a lot of the big companies are working on. So I don't know, Gene, I'm not sure if we gave you a good answer or not. You wanted to hear our thoughts, and you did. Maybe you're wishing you hadn't, but that's where we are. Anything else on that one, Rob?
Speaker B: Nope, I think we're good.
Speaker A: All right, let's talk about another controversial topic in a little bit of a different way: vibe coding. So, Rob, when I say vibe coding, what do you think about?
Speaker B: I think about just talking to ChatGPT and asking it to create code for you, and you do it without writing any code.
Speaker A: All right, yeah. So that's the basic idea. You talk to some large language model, and it will either just write the software for you, or it will give you code that you can copy and paste into another environment and come up with something. I want to give two very different examples that I played with over the last couple of weeks, and the first one I'm very proud of. It's called Colliesplain, Rob. What? Whoops, and on cue, one of my collies is barking. I know I tried to send this to you and it didn't work out. So what do you think Colliesplain is?
Speaker B: Well, if I were to take the notion of mansplaining, I would say it's similar to that, but it's a collie explaining to you how the world works.
Speaker A: Yes, that is right. I just wanted to play around with Google's AI Studio, so I said: write a program that explains any topic as if a border collie is explaining it. And I'm going to read one to you, which makes for great audio, so we'll see how this goes. I asked it to explain epistemic injustice, which is basically the idea that someone is disadvantaged and harmed in their capacity as a knower. Here's what it says. "Woof, woof. All right, let's herd these thoughts. I've got it. I've got it. Epistemic injustice. It's like when my human thinks I'm just barking at a leaf, but I'm actually barking at a squirrel that's plotting to steal my squeaky ball. They dismiss my truth because they think they know better, or because I'm just an excited dog. Okay, so picture this. You know something important. Maybe you just know the best way to get all the sheep into the pen, or where the really good treat is hidden. You have knowledge, but someone doesn't believe you or doesn't even understand what you're trying to say. Not because your knowledge is bad, but because of who you are. Maybe you're a young pup, or you have a funny accent, or you're just different from the dogs they usually listen to. So your knowledge gets ignored, or your voice isn't heard fairly. It's like I'm trying to explain with urgent barks and paw gestures that the cat is on the counter, but my human just says, 'Patches, you're just being silly.' They're not listening to my valid warning because they have a preset idea about me. My knowing is dismissed. It's when your ability to know or understand things, or to share that understanding, is unfairly put down or ignored because of some prejudice or stereotype about you. It's like not letting a perfectly good border collie run the agility course just because someone thinks all collies are too obsessed with Frisbees. It's unfair. And it means good information, like where the squirrel is, gets lost. Now can we go chase something? My brain is buzzing." And then it's got a little note at the bottom that says, "Brought to you by AI Studio and a good dog." Now, okay, that's kind of silly, right? But I want to point out that it's pretty much right. That's a pretty good explanation of the basics of epistemic injustice. So I don't know, are you a border collie guy? I love border collies.
Speaker B: No, we're a golden retriever guy. I wonder how a golden retriever would explain that.
Speaker A: But with slobbery tennis balls, for sure. Without a doubt.
Speaker B: Yep. So here's what I'd be curious about, and this is where I've been looking at vibe coding and the argument about whether it's going to replace jobs, you know, are computer scientists going to be replaced by vibe coding? I was reading an anecdotal story of one expert who spent three to four months playing with vibe coding and ultimately gave up, because it would create something nice, like you just described, very simple, on that initial prompt. But whenever he tried to use vibe coding to alter his code, it would break something else. It would add the new feature, but it would break something else. So it would always give him something, but it never gave him something completely workable. And then, as a knowledgeable programmer, he would spend more time going back in to fix the things that got broken than he would have if he had just added the new feature himself. That's where I see some real challenges with vibe coding: if you don't know how to be the person who can go in and fix things, or to know when it's a good idea to use the vibe coding approach and when it's better to go in and actually write the code yourself.
Speaker A: Yeah, I think that's a really good take on it. Although I would point out that, from what I'm reading, maybe not fully vibe coding, but AI-assisted coding makes programmers much more efficient.
Speaker B: A hundred percent. I think of how I used to code, when I had the opportunity earlier in my professional career, and Google was the source. I wouldn't know how to do something, but I got really good at writing Google queries to go find it and then come in and apply it. As I play with generative AI technologies, they get me that information so much faster, and then allow me to spend my cognitive effort not on Google searching but on getting that knowledge and applying it. So I think that does change how we learn to code. But it's going to be important, for the foreseeable future, to understand what your code is doing, so you can get in there and test it, make sure it's not doing something unintentional, and modify it as needed.
Speaker A: Yeah, I think that's right. And fully vibe coding is good for small things. Where I think it's got a lot of potential is when you want a program to do one little thing and you don't really know how to code; there was no good way to get that done before. But if you have a little bit of knowledge of what your task is, and of how things work in general, I think you can vibe code small things, like maybe a tool that gives you a summary of your week's calendar and goes out and pulls up relevant emails. Are you going to deploy that to 10,000 users? Maybe not, but it could work for you.
Speaker B: Well, and here's where I see this being super helpful too, as an MIS professor. If you understand the business problem, you understand what you need to do to iterate, or can even build an example of what something could do from the data you've collected from the business, then you can get there way, way quicker, and without having to coordinate with the programming wizard who would otherwise be the person putting those things into place. So I see a lot of potential for systems analysis sorts of people who are helping to launch new business solutions, being able to do so very rapidly.
Speaker A: Yeah, especially things like prototyping could be huge. But I want to give you another example that I think is probably much more interesting, if less fun. Have you ever heard of latent Dirichlet allocation?
Speaker B: No.
Speaker A: Okay. So basically, and sorry if we geek out here for a second, it's kind of like qualitative factor analysis. It's a text analysis algorithm that will analyze a big chunk of text and try to put comments, or whatever it is you're looking at, into groups. It'll take this big mess and try to make sense out of it. We're working on a revision of a fairly important paper for us, and we did a bunch of coding by hand: we had hundreds of comments, and we went through them and tried to classify them into different categories. I can't give too much away here because the paper's under revision, but let's say we were looking at something teaching-oriented, and you've got a bunch of student comments you want to put into buckets: the assignment was too hard, the assignment was unclear, the assignment was easy. So you can kind of make sense of the comments. Is that making sense?
Speaker B: Yep.
Speaker A: Okay. So this is an algorithmic way to do that, and it's really complicated; there's a lot of preprocessing that has to go on. I understood the basics of the method, but I had no idea how to code it. None. I don't know Python. I quit coding a long time ago, decades ago, so I don't really know the language. But I thought, okay, I'm going to see what I can do with this. So I worked with a combination of ChatGPT and Claude, mostly ChatGPT, walking through this code and putting it into Google Colab. Google Colab has an interface where you can load your data and then copy and paste or write your code in another window. ChatGPT would tell me, all right, here's the first thing you need to do: load these packages, little Python subprograms, that you need. So I'd put that in. Well, I might get an error message, and then Google Colab would say, hey, you've got this error, do you want me to fix it? Sure, of course I want you to fix it. And it would fix it. Then I'd go back to ChatGPT and do the next thing. And in the course of, I don't know, a couple of hours, I had this analysis done. It was shocking. And this was thousands of records of data. I captured the code so I can document what we did. I understood what the code was doing, but there was no way I could have done that without the AI tools. And like you said, it's all reproducible, it's all captured, it's all documented.
Speaker B: And just to be clear, and this goes back to what we talked about earlier: what you did is use ChatGPT to help you write Python code, and then it was that Python code that did the analysis. So it wasn't generative AI doing the analysis for you; it was generative AI helping you with the programming language you needed. Anyone doing this sort of data analysis would be writing the exact same type of code, much like we would have done with SPSS or SAS. It just made you much, much more efficient at getting that code written, so you could use Python as a tool to help you.
Speaker A: That's right, except for one little thing: it not only made it more efficient, it made it possible, which it wouldn't have been otherwise. I mean, I could have gone and hired a grad student, that kind of thing, to write the code. But you raise a good point. If I had asked ChatGPT to do the analysis itself, it would not be reproducible; I would have no idea how it got to where it got. But now I can document all the decisions, I can document the code, I can document everything, just like you'd have your SAS code if you were doing this in SAS. SAS, for those of you who don't know, is a statistical package that's pretty widely used. So that's pretty cool. I was shocked by how well it worked.
Speaker B: Yeah, it's a great solution. And it goes back to the same example we were talking about before. It used to be you might have done some Google searches and found people who say, well, here's how you load your data, here's the code you need to use. You'd have to read through it, process it, and say, okay, let me give that a try. You may have been able to accomplish a similar thing using Google searches, but it probably would have taken you a full week instead of an afternoon.
Speaker A: Absolutely. And I would have given up at that point. I would have gone and hired somebody to try to do this. A couple of interesting aspects of this I want to make sure we point out. One is that even though I don't know Python, I know enough about coding to understand what was going on. So with the earlier playing around, I just said, write this code, and I didn't even look at it, because it didn't matter. But with this other analysis, this LDA text analysis, I needed to know what was going on. I needed to understand it. And so I was very careful to make sure that I knew what the code was doing, even though I couldn't have written it. You know, Python's not the toughest language to interpret. If you've done any programming at all, you can kind of tell what some of it's doing. So that helped a lot. Even though I don't know that particular language, I knew enough about coding to know what was going on. The second thing is I understood the underlying analysis. So even though I couldn't write the algorithm to do this text analysis, I understood the basics of it and could interpret what it was telling me. Like, you have to make a choice about how many different buckets you want to put the text into. And I could decide, okay, do we need 15 buckets or do we need 35 buckets? I knew enough about what was going on to make that decision instead of just having the computer make it for me.
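[The "how many buckets" decision in an LDA text analysis can be sketched with scikit-learn, which is one common way to run this kind of topic model. The documents and topic counts below are made up for illustration; a real analysis would also compare the top words per topic for interpretability.]

```python
# Hedged sketch of comparing topic counts in an LDA analysis.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "students love the new advising portal",
    "advising portal saves students time",
    "parking fees went up again this year",
    "campus parking is expensive and crowded",
]

# Turn the documents into word counts, dropping common English stopwords.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Fit models with different numbers of topics and compare the log-likelihood;
# the human still decides which bucket count makes the topics interpretable.
for n_topics in (2, 3):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    print(n_topics, round(lda.score(counts), 1))
```

The point of the loop is exactly the judgment call described above: the software can fit 15 buckets or 35 buckets equally happily, so someone who understands the analysis has to pick.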
Speaker A: So I think this is going to change how we do a lot of analysis. Once people understand what's going on, it puts a greater array of tools into people's hands, which is a good thing and a bad thing. It's a good thing for obvious reasons. It's a bad thing because if you don't know what you're doing, you don't know what you're doing.
Speaker B: Well, the other thing I think might change as this becomes more available: I know some people who are co-authors on a paper because they knew how to do exactly what you were able to get Python to do through these sorts of tools. Are those same people going to bring the same value to a paper if that was their contribution to co-authorship? So it could really change some dynamics on author teams as certain skill sets are prioritized or deprioritized, if you will.
Speaker A: Yeah, I spent some time on this, but it was a really interesting exercise. I'm glad I did it. So for listeners who have never done any coding, check out Google's AI Studio; you can write some fun little stuff pretty easily. All right, we'll see how that goes. We're going to mark down Rob as a vibe coding skeptic and me as maybe a little bit more of a believer. But still, I think I'm skeptical about vibe coding taking over the world. We'll see.
Speaker B: One note to educators out there. I think the challenge we need to face in the midst of vibe coding and other really cool things like this is preparing the next generation of students and professionals to have the level of expertise they need in a world where the basic-level stuff is easy to do. And I honestly don't know what that solution is, but I think the big challenge that lies before us is how we ensure we're preparing future workers not to miss out on those entry-level growth areas, as technology becomes incredibly more powerful than it was 5, 10, 20 years ago.
Speaker A: Yeah, that's a great point. And I think we need to focus on how to let humans do what humans are good at and technology do what technology is good at. The phrase we've used before is moving humans up the stack: instead of writing all of the code, you're engineering the solution and understanding how to apply computer code to a problem, which is kind of what I did with my text analysis example. But these tools are evolving very rapidly, so I think we're going to be in a pretty uncertain environment for a number of years. All right, anything else?
Speaker B: Nothing on this one.
Speaker A: Okay, so lots of ChatGPT and OpenAI news, and a little bit of Gemini news, over the last couple of weeks. Have you played with Agent Mode? Do you have it yet in ChatGPT?
Speaker B: I've made an agent in ChatGPT, but it was in version 4, and I've done it in teams as well, but I haven't tried the latest yet. I want to see if what I created for class in version 4 still works in the latest version.
Speaker ASo I think if you have a paid version, you should have Agent mode now.
Speaker ASo basically what Agent Mode does is you give it a task and it creates its own little desktop and goes out and does that task, and you kind of have to play around with it.
Speaker AIt's not something we can describe very well on audio, but what I did, we're going to have a conference in Flagstaff in May of 2026, and I said, well, let's test this out and see if it can come up with a visitor's guide to Flagstaff.
Speaker AAnd Flagstaff is an amazing place to be in the summer, so there's lots to do right around there.
Speaker AAnd since I lived there for six years, I kind of know what a lot of those things are.
Speaker AAnd within a matter of probably five or ten minutes of my time and maybe a half an hour of ChatGPT's time, it had this visitor's guide that I literally just downloaded put into a Google Doc, and I'll go in and edit it up when we get a little bit closer to the conference.
Speaker ASo it was pretty remarkable how well it did with that Task, it's kind of cool because you can see it working and if it gets stuck, like if it gets blocked by a paywall, it'll try to find a way around that barrier and we'll just keep going.
Speaker AIt may ask you to take over from time to time, like if you've got to log in to some system.
Speaker AI don't know that I would deploy this for real tasks that I'm going to have to do over and over and over again.
Speaker ABut for one off tasks, it's pretty good.
Speaker AI mean it'll plan trips, it'll, oh, I had it track down some healthcare resources.
Speaker AIt's pretty interesting how well it works.
Speaker A: So the other thing I did, which might be more relevant to our audience: I'm on sabbatical, and one of my goals is to learn more about agentic AI. AI agents are AI tools that go out and do a task; agentic AI is groups of those agents working together, and I think that's where the future is going.
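[The distinction drawn here, a single agent doing one task versus agentic AI chaining agents together, can be sketched with plain functions standing in for LLM-backed tools. Everything below is a toy illustration; the agent names and canned outputs are invented.]

```python
# Toy sketch: each "agent" handles one task; the pipeline chains them.
def research_agent(topic):
    # Would normally search the web or call an LLM; returns canned findings here.
    return [f"{topic}: definition", f"{topic}: key frameworks"]

def writer_agent(findings):
    # Would normally draft prose from the findings.
    return "Summary: " + "; ".join(findings)

def reviewer_agent(draft):
    # Would normally critique or revise the draft; here it just approves.
    return {"draft": draft, "approved": True}

def agentic_pipeline(topic):
    """Orchestrator: hands each agent's output to the next agent."""
    return reviewer_agent(writer_agent(research_agent(topic)))

result = agentic_pipeline("agentic AI")
print(result["approved"])  # True
```

Real agentic frameworks add tool use, memory, and error handling around each step, but the shape, specialized agents passing work along under an orchestrator, is the same.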
Speaker A: We've talked about this before, so I want to know more about it. So I said, create a learning plan for agentic AI, and I gave it a little more background: that I wanted a business focus rather than a technical focus, but I still wanted to understand the technology. In maybe 25 or 30 minutes, I had a pretty nice 24-week learning plan, and I think I shared that with you, Rob. Did you have a chance to look at it?
Speaker B: Yeah, I looked at it. It was pretty detailed. I was impressed with the resources it had, and I was like, I'd take this class.
Speaker A: Yeah, it was. I mean, the next thing to try is to have it create an entire class, which I think it could probably do.
Speaker B: Yeah. The one thing I would like to see talked about in this world of agentic AI, and having agents actually do tasks for you, is the role of trust in all of this. Let me put my hat on as a department chair. If I used agentic AI to do a lot of my department chair job of managing people, I would probably create friction with people who no longer appreciated my leadership, because it was the AI tools that were making decisions and doing various things. So finding that balance between the human in the process and just letting the agentic process take place is important, because if you violate trust, that's hard to win back.
Speaker A: But then you wouldn't have to be chair anymore. So there's that.
Speaker B: There is that.
Speaker A: So, yeah, you raise a really good point. If I was going to put a number on it, I would say that most of my tasks have gotten me about three quarters of the way there. But think about what it would take to go out and find the resources to put together a credible learning plan for anything, agentic AI or whatever it might be, using Google searches. It would take hours and hours. Now I've got this learning plan. I'll go in and look at the individual resources, maybe work with ChatGPT or Gemini to replace some of them. But what would probably have taken me several days, I can now do in a couple of hours.
Speaker B: Right. And the thing I think is important in that is that you're still the decision maker. Where I struggle is when people say that AI is going to make decisions for us. As someone who does research in the cybersecurity space, and even the information privacy space, when you start thinking about turning over decision making, the final decision, to a machine, that to me gets really, really scary, for ethical reasons, for security reasons, and all sorts of things. So I'm curious where this is going to go. Are we going to get to that point of machine decision making, or is it just going to speed up the process of researching and gathering information so you can reliably make decisions as the human in the process?
Speaker A: Yeah, I think it's going to be a hybrid, where some of the more straightforward decisions get made by AI, and we can get into some of those semi-structured decisions that are kind of tough to do with technology now. What I'm thinking is that for quite some period of time, the best we're going to get is that some of these agentic AI networks will do some of the tasks and then kick some over for human review.
Speaker B: Right. So, Craig, I want to point out a lawsuit that's happening right now because of agents. Currently, Workday is being sued, and it's turning into something similar to a class action lawsuit, over discrimination in reviewing people's resumes and the claim that there was potentially age discrimination. It's being litigated; nothing's been decided. But the idea is that there was bias in those algorithms and people weren't actually having their resumes reviewed: qualified people who were getting rejection after rejection, all of whom were over 40 years old. As we rely on agents to make decisions and do things, as the person making that decision, you begin bearing the risk of the decision it makes. So what are the unintended consequences of gaining those efficiencies, of doing something more rapidly, if there's something incorrect in how those algorithms work? I don't think it means don't do it. But I think it means, as you use these tools, you need to go in with your eyes wide open about what you're trusting them to do on your behalf.
Speaker A: Well, that's interesting. So first, I feel very depressed that somebody substantially younger than me is being discriminated against because of age. Let's just get that out of the way. But I think maybe the failure there is in governance. It's not so much the technology; it's that there wasn't enough human oversight, there wasn't enough testing. It seems like it would have been a pretty easy thing to set up a big test bed and catch that sort of thing. Yeah, you'd think, but I'm guessing that didn't happen.
Speaker B: And as the court case plays out, we'll find out whether this truly was discrimination or not. Workday says, wait till we can present our evidence, and they're denying there was anything going on there, but they're going to spend a lot of money on lawyer fees to prove it. So what does their governance look like? How did they test it? How did they do this? It's going to be fascinating to watch. I've read other articles about organizations that had entire databases destroyed by generative AI technology that was given permission to delete their database while doing something agentic, so it created that security issue: data was lost. So again, we're going to learn the hard way as we go through this process of playing with the new technology. But I'd be remiss not to say, hey, as you play with this, do so with guardrails, and pay attention to what you're actually allowing the technology to make decisions about.
Speaker A: Absolutely, absolutely. Governance is huge, and we're only just starting to talk about it sufficiently. One other quick comment on the Workday case: that probably was agentic AI in some respect, but I'll bet it was not generative AI. And with generative AI, it's even worse, because you don't know exactly how it's making decisions. You know that algorithm they were using to screen applicants? My guess is somebody could go in, look at the code, and figure out what's going on, which you can't really do with generative AI.
Speaker A: So one last topic, really quickly; we're coming up against our time. Both Gemini and ChatGPT introduced new learning functionalities. It's Study and Learn in ChatGPT and Guided Learning in Gemini. Have you played with either one of them yet?
Speaker B: I've not played with them, but I've read about them. What they can do is pretty exciting, but there are some concerns in there too.
Speaker A: Yeah. I'm a little underwhelmed by my experiences so far, but I need to dig into them more deeply. I haven't played with Gemini's at all, although they had similar standalone systems that were pretty interesting. With ChatGPT, after I got a few prompts in, I didn't see it doing anything differently than what I could have done without Study and Learn. But that could be me not understanding the tool yet. The reason I wanted to bring this up briefly is for faculty members, and really any of our listeners who haven't tried these tools yet: you should probably play around with them. Even if they're not suitable for more formal kinds of teaching tasks, they can be great for learning on your own, learning something you need to learn.
Speaker B: Yeah, and I think that's going to be one of the challenges as we enter this next academic year: there are going to be a lot of new technologies becoming available. I just got notified today that Grok 4 is available for free to everybody. I haven't played a whole lot with Grok, but I'm like, oh, it's free; I may go see how their Pro version compares to the others and be able to talk more about that. My son was given access to Gemini Pro as a college student for the entire year for free. So we're going to see different tools become available to different groups of people based on what's going on. And I'm concerned about the paralysis of which one do I even use, as we add more and more to the mix. So I would encourage people to use what's working for you and talk to your colleagues about what they're doing, because everybody can be their own use case, their own test bed. You may be able to learn because somebody else took a different approach than you did, and that's going to ramp up our speed to adoption, I think.
Speaker A: Yeah, that's a really good point. I'm a big fan of satisficing when it comes to these tools. If something's working for you, just stick with it. Maybe every six months or every year, go back and look at what some alternative tools are. But unless you're like us and you just enjoy figuring out what the different tools are good at, I think for most users, any of the major tools are going to be okay. You can waste a lot of time chasing that little bit better that probably doesn't make a material difference.
Speaker B: My last public service announcement about this, Craig, and I've been talking to my faculty here about it: as academics, we're required to follow FERPA as well as other regulations. As you're putting data into these different tools, understand how they're using it and what they're using it for, because at the end of the day, if I do something incorrect with data that's protected, say, by FERPA, I'm responsible for it. So pay attention to it. I don't say that to put fear into your life, but it is a thing. At WSU, they give us a grid of allowable technologies that their lawyers have looked at, and for us here, it's Copilot; it's the only one they've signed off on as being FERPA compliant. Anything outside of that, we're on our own as far as what we use it for. So pay attention to that, because it's important.
Speaker A: Yep, absolutely. And a couple of quick rules of thumb on that. First, don't include any data that could possibly identify a student, and remember that sometimes it's not a single piece of data but a combination of fields that can identify somebody. The second piece of advice: if you have to think for more than about 30 seconds about whether you should do it from a FERPA perspective, don't do it.
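[Both rules of thumb, strip direct identifiers and watch for identifying combinations of fields, can be sketched as a minimal k-anonymity-style check. The column names and the threshold are hypothetical; a real compliance review would go much further than this.]

```python
# Sketch: drop direct identifiers, then flag quasi-identifier combinations
# that are rare enough to single out one student.
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "student_id", "email"]  # illustrative list

def deidentify(df, quasi_identifiers, k=2):
    """Drop direct identifiers and flag field combinations rarer than k."""
    safe = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    sizes = safe.groupby(quasi_identifiers).size()
    risky = sizes[sizes < k]  # combos shared by fewer than k students
    return safe, risky

df = pd.DataFrame({
    "name": ["A. Student", "B. Student", "C. Student"],
    "major": ["History", "History", "Physics"],
    "grad_year": [2026, 2026, 2026],
})
safe, risky = deidentify(df, ["major", "grad_year"])
print(list(safe.columns))  # ['major', 'grad_year']
print(len(risky))          # 1: the lone Physics/2026 student is identifiable
```

The second print illustrates the point above: even with names removed, the one Physics major graduating in 2026 is still identifiable from the combination of fields.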
Speaker A: And by the way, for those of you outside the United States, FERPA is a pretty big piece of legislation, but for the context here, it basically governs what you're allowed to share about students, and who's allowed to see which data about students.
Speaker A: All right, so we would love to hear your questions or comments. You can get in touch with me at craig@aigoestocollege.com, that's C-R-A-I-G, and Rob, you're rob.crossler@aigoestocollege.com, right?
Speaker B: Correct.
Speaker A: C-R-O-S-S-L-E-R. I don't think we have to spell out Rob. All right. Any last thoughts before we close?
Speaker BWe enter into a new academic year.
Speaker BKeep us looped into the challenges and the things you're facing or your wins, your use cases.
Speaker BAgain, reach out to us, share these stories with us.
Speaker BWe'd love to get other people's thoughts and opinions into the show and see if we can grow our knowledge base by working together.
Speaker A: Absolutely. And if any of you are attending the upcoming AMCIS conference in Montreal, look for me and say hi; I will be there. All right, thanks. That's it. We will talk to you next time.