Helping higher ed professionals navigate generative AI
Feb. 28, 2024

Detecting fake answers, Zoom meeting magic, and Gemini is pretty awesome


Welcome to AI Goes to College! In this episode, your host, Dr. Craig Van Slyke, invites you to explore the latest developments in generative AI and uncover practical insights to navigate the changing landscape of higher education.


Discover key takeaways from Dr. Van Slyke's firsthand experiences with Google's Gemini and Zoom's AI Companion, as he shares how these innovative tools have enhanced his productivity and efficiency. Gain valuable insights into Google's Gemini, a powerful AI extension with the potential to revolutionize administrative tasks in higher education. I'll delve into the finer aspects of Gemini's performance, extensions, and its implications for the academic community.


But that's not all—explore the fascinating potential of ChatGPT's new memory management features and get a sneak peek into OpenAI's impressive video generator, SORA. Dr. Van Slyke provides a candid overview of these cutting-edge AI advancements and their implications for educational content creation and engagement.


Additionally, you'll receive expert guidance on recognizing AI-generated text, equipping you with the tools to discern authentic student responses from those generated by AI. Uncover valuable tips and strategies to detect and address inappropriate AI use in academic assignments, a crucial aspect in today's educational landscape.


Join Dr. Craig Van Slyke in this enlightening episode as he navigates the dynamic intersection of generative AI and higher education, providing invaluable insights and actionable strategies for educators and professionals.


Tune in to gain a deeper understanding of the transformative role of generative AI in higher education, and learn how to effectively leverage these innovative tools in your academic pursuits. Embrace the future of AI in education and stay ahead of the curve with AI Goes to College!

To subscribe to the AI Goes to College newsletter, go to AIGoesToCollege.com/newsletter

 

--- Transcript ---

Craig [00:00:14]:
Welcome to AI Goes to College, the podcast that helps higher education professionals navigate the changes brought on by generative AI. I'm your host, Dr. Craig Van Slyke. The podcast is a companion to the AI Goes to College newsletter. You can sign up for the newsletter at AIGoesToCollege.com/newsletter. This week's episode covers my impressions of Google's Gemini. Here's a spoiler: I really like it. An overview of an awesome Zoom feature that a lot of people don't know about.

Craig [00:00:47]:
A new memory management feature that's coming to ChatGPT soon, I hope. OpenAI's scary-good video generator. And I'll close with insights on how to recognize AI-generated text. Lately, I've found myself using Google's Gemini pretty frequently. I just gave a talk, actually, I'm about to give a talk. By the time you listen to this, I will have given a talk on the perils and promise of generative AI at the University of Louisiana System's For Our Future conference. I wanted to include some specific uses of generative AI for administrative tasks. I have a lot of use cases for academic tasks, but I wanted something more for the staff side of the house. Gemini was a huge help.

Craig [00:01:33]:
It helped me brainstorm a lot of useful examples, and then I found one I wanted to dial in on, and it really helped quite a bit with that. I didn't do a side-by-side comparison, but Gemini's performance felt pretty similar to ChatGPT's. By the way, I use Gemini Advanced, which is a subscription service, and it's kind of Google's equivalent to ChatGPT-4. One of the most promising aspects of Gemini is that it has some extensions that may prove really useful in the long run. The extensions let you do a lot of things, for example, asking questions of Gmail messages and Google Drive documents. There's also a YouTube extension that looks interesting. My initial testing yielded kinda mixed results. It did well in one test, but not so well in another.

Craig [00:02:22]:
I'll try to do a longer blurb on this later. The Gmail extension did a pretty good job of summarizing recent conversations. So I don't know, I think it's something to keep an eye on. Currently, there are extensions for a lot of the Google suite: Gmail, Google Docs, Google Drive, Google Flights, Google Hotels, Google Maps, and YouTube. And if you weren't aware, Gemini is Google's replacement for Bard. They rolled out some new underlying models and did a rebrand a few weeks ago.

Craig [00:02:54]:
So, anyway, if you haven't checked it out in a while, I think it's probably worth doing. One of my favorite newish AI tools is Zoom's meeting summary. This thing is really awesome. It actually came out as part of Zoom's AI Companion, which they released last fall, but it didn't get a lot of press that I saw. The AI Companion does a number of things, and I may do a deeper dive on that later. But the killer use for it right now, for me, is summarizing meetings. It is just awesome.

Craig [00:03:25]:
If you're like me, you might get caught up in the discussions that go on during a meeting and forget to take notes. Then you go back a few days later to start working on whatever project you were involved with in that meeting, and you've forgotten some of the details. This happens to me a lot, I'm kind of sad to admit, but Zoom's AI Companion can help with this tremendously. Just click on Start Summary, and AI Companion will do the rest. A little while after the meeting, if you're the host, you'll receive an email with the summary, and you can just send that to the other attendees. The summaries are also available in your Zoom dashboard, and they're easy to edit or share from there. I find the summaries to be surprisingly good.

Craig [00:04:11]:
In the AI Goes to College newsletter, I give a kind of redacted example of a summary of a recent meeting, and it's pretty comprehensive. I looked over the summary not long after the meeting, and it really captured the essence of the meeting. It gives a quick recap, then summarizes the entire meeting, broken down by topic, and it also lays out next steps. A couple of things to keep in mind, though. First, only the meeting host can start the summary. If you're just an attendee, you can't start it; you can ask the host to do it, but you can't do it yourself. Second, AI Companion is only available to paid users.

Craig [00:04:50]:
And because AI Companion is only available to paid users, the same goes for the meeting summaries. If you're using Zoom through your school, you're hosting a session, and you don't see the summary button, I'd contact your IT folks and see if AI Companion needs to be enabled. There are a number of other AI Companion features that I haven't tried yet. One particularly interesting capability is the ability to ask questions during the meeting. You go over to AI Companion and ask a question, and it will answer based on what it's transcribed so far in the meeting. So if you come in late and wanna catch up, you don't have to say, "Hey, can somebody catch me up on what we've done so far?" You can just ask AI Companion the same thing. Now, I haven't tried this yet, but I'm going to.

Craig [00:05:40]:
In the meantime, you can check out the features. There's a link in the show notes and in the newsletter. By the way, you can subscribe to the newsletter by going to AIGoesToCollege.com/newsletter. There were a couple of interesting news items and developments over the last week or so that I wanted to bring to your attention. One that has a lot of potential is ChatGPT's new memory management feature. Soon, at least I hope it's soon, ChatGPT will be able to remember things across conversations. You'll be able to ask ChatGPT to remember specific things, or it'll learn over time kind of on its own, and its memory will get better over time. Unfortunately, very few users have access to this new feature right now.

Craig [00:06:30]:
I don't. I'm refreshing my browser frequently, hoping I get it soon. But OpenAI promised to share plans for a full rollout soon. I don't know when that is, but soon. So I'm basing a lot of what I'm gonna say on OpenAI's blog post that announced the feature. I'll link to that in the show notes, or you can check out the newsletter. For example, in your conversations, you might have told ChatGPT that you like a certain tone, maybe a conversational, casual tone, maybe a more formal or academic tone in your emails. In the future, ChatGPT will craft messages based on that tone. Let's say you teach management.

Craig [00:07:09]:
Once ChatGPT knows that and remembers it, when you brainstorm assignment ideas, it'll give you recommendations for management topics, not, you know, English topics or philosophy topics. If you've explained to ChatGPT that you'd like to include something like reflection questions in your assignments, it'll do so automatically in the future. And I'm sure there are gonna be a lot of other really good use cases for this down the road. There's gonna be a personalization setting that allows you to turn memory on and off. That'll be useful when you're trying to do some task that's beyond what you normally do and you don't wanna mess up ChatGPT's memory. Another cool feature is temporary chat. A temporary chat doesn't use memory, and unfortunately it also won't appear in your chat history. I think that might be a little bit of a problem.

Craig [00:07:57]:
Seems to me that memory is the next logical step from custom instructions, which pro users are able to use. Custom instructions let you give ChatGPT persistent instructions that apply to all conversations. For example, one of my custom instructions is: respond as a very knowledgeable, trusted adviser and assistant. Responses should be fairly detailed. I would like ChatGPT to respond as a kind but honest colleague who is not afraid to provide useful critiques. Of course, I do have some privacy concerns about the memory feature. We're gonna need to figure some of those out. You need to be cautious about anything you put into generative AI, regardless of this memory feature.

Craig [00:08:41]:
Your school may have policies that restrict what you can put in. The best bet is just to assume that whatever you put into a generative AI tool is gonna be used for training the models down the road. So I wanna keep an eye on that. I think it's gonna be a really interesting feature that could improve performance quite a bit. As soon as I get access to the memory feature, I'll give you a full review. The other thing that came out recently, and kind of stole some of Gemini's thunder, was OpenAI's Sora.

Craig [00:09:10]:
That's S-O-R-A. It creates videos based on prompts. There are a number of tools out there that will produce videos from prompts, but, you know, they're okay at best. I've tried a couple of them and abandoned them pretty quickly. Sora is scary. It is absolutely amazing what that tool will produce given some pretty simple prompts. Sora "creates realistic and imaginative scenes from text instructions." That's from Sora's website.

Craig [00:09:42]:
And the prompts can be pretty simple. Here's an example of a prompt they used for a very professional-looking video. Here's the whole prompt: "Historical footage of California during the gold rush." There's a link to this video in the show notes, or you can go to Sora's website, which is just openai.com/sora, and check out the video there. It's probably not as good as what a professional cinematographer could do, but it's really good. When it's released to the public, I can see it being used for a lot of B-roll footage, the parts that aren't the main story.

Craig [00:10:22]:
A lot of news organizations like to use B-roll, which is just kind of generic footage that isn't part of an interview or the actual story being covered. It might be useful for spicing up online learning materials or for creating recruiting videos, things like that. I don't know. We're gonna have to see. It's not available to the public yet. Although the videos are pretty fantastic, there are some odd things about a few of them. There's a great video of a woman walking down a Tokyo street.

Craig [00:10:54]:
Every digital person in the video seems to have mostly the same cadence to their walk. It almost looks choreographed. They're not perfectly in sync, but it's close enough to make everything seem a little bit odd and a little bit artificial. And if you look closely enough, you can see little oddities in a lot of the videos, but you kinda have to look for them, at least I did. Right now, Sora videos are limited to one minute, but that'll probably change in the future. One of the things I really like about the Sora website is that OpenAI includes some failures as well. There's a video of a guy running the wrong way on a treadmill, and there's also a kind of disturbing video of gray wolf pups that seem to emerge from a single pup. It's a little odd.

Craig [00:11:42]:
Fascinating, though. So I can see this being used a lot for training videos. I think it could enhance the engagement of some online learning materials, but I can also see Sora, and the similar tools that are likely to emerge, being a time sink. It's intriguing to create cool new images and videos to add to your website, your lectures, and your presentations. But I can see myself wasting a lot of time on something that, at the end of the day, may not make much difference. I just did this. I was trying to get DALL-E and some other AI tools to produce an image for the presentation I'm gonna give, or just gave, depending on when you're listening. And, you know, it got to where it was kind of okay.

Craig [00:12:30]:
But, eventually, I took about 5 or 10 minutes, went into a drawing tool, and drew one that was actually better at making the point I wanted to make. So beware of the rabbit hole of AI. It's real, and you can really waste a lot of time. Although, I do have to say, it's kind of fun. And there's nothing wrong with spending a little bit of time, not really wasting it, learning the capabilities of a tool. Alright. Here is my tip of the week. If you give any at-home assignments in your courses, you've probably received responses that were generated by a generative AI tool.

Craig [00:13:07]:
And if you haven't, you probably will soon. It's just the way things are now. AI detectors do not work reliably, although I think they may be getting a little bit better. So what can you do? Well, the best approach, and this is my opinion and that of a lot of other experts, is to modify your assignments to make it harder for students to simply cheat with generative AI. I'll write quite a bit about this in an upcoming newsletter edition, and I'll talk about it here on the podcast as well. In the meantime, getting better at sniffing out generative-AI-written text is probably a good thing to do. The first thing I suggest is to run your assignments through one or two generative AI tools. Do this for several assignments, and you'll start to see some common characteristics of generative AI responses.

Craig [00:13:57]:
Look, let's face it: a lot of the students who use AI tools inappropriately are lazy. They'll just copy and paste the answer into Word or into the response section of your learning management system or whatever; they're not gonna work very hard to try to mask the generative AI use. If they were willing to work that hard, maybe they'd use generative AI more appropriately. So knowing a little bit about the tells, the indicators of generative AI text, can be useful in correcting those students. I really encourage you to go to the newsletter, AIGoesToCollege.com/newsletter, and subscribe, because a lot of what I'm gonna tell you is kind of hard to explain, but it's pretty clear once you see it.

Craig [00:14:44]:
So go to the newsletter. The first thing is that generative AI has kind of a standard way of formatting longer responses. It goes back to the fact that a lot of these tools use something called Markdown to format more complex responses. Markdown is a markup language (I know, that's kind of confusing) that uses symbols to produce formatted text from a plain text editor rather than a normal word processor. I use Markdown to create the newsletter. Because generative AI systems often use Markdown, they tend to format text in a limited number of ways. For example, generative AI tools love numbered or bulleted lists with boldfaced headings, often with details set off with a colon.

Craig [00:15:31]:
So it'll be bullet point, boldface, colon. It doesn't always do that, but it's often something like that. Like I said, I put a couple of examples in the newsletter, so you might wanna check those out. One of the first clues for me is seeing something formatted that way; when I do, I start to get really suspicious. It's a reasonable way to format things, and if you use Markdown, it's a pretty good way to format things. But I'm guessing that, unless you're in computer science or information systems or something like that, not a lot of your students are using Markdown. So when I see this kind of formatting, I get a little suspicious. The next tell is the complexity of the answer.
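The "bullet point, boldface, colon" pattern is concrete enough to sketch as a simple check. This is a hypothetical heuristic of my own, not something from the episode or the newsletter, and the `looks_ai_formatted` function, its threshold, and the sample text are all illustrative assumptions:

```python
import re

# Match Markdown-style bullets with bold headings, e.g.:
#   "1. **Accessibility:** details" or "- **Heading**: details".
# The colon may sit inside or outside the closing "**".
BULLET_HEADING = re.compile(
    r"^\s*(?:[-*]|\d+\.)\s+\*\*[^*\n]+(?::\*\*|\*\*:)",
    re.MULTILINE,
)

def looks_ai_formatted(text: str, threshold: int = 3) -> bool:
    """Flag text that contains several bold-heading bullets (a common
    generative AI formatting pattern). Purely a rough, illustrative tell."""
    return len(BULLET_HEADING.findall(text)) >= threshold

sample = """1. **Accessibility:** Tools were easy to reach from home.
2. **Collaboration:** Shared docs helped group work.
3. **Engagement:** Video lectures kept students involved."""

print(looks_ai_formatted(sample))  # True: three bold-heading bullets
```

A heuristic like this is only circumstantial, of course; plenty of legitimate writers use the same format, so it would at most prompt a closer look, never an accusation.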

Craig [00:16:19]:
In my principles of information systems class, the online assignments are really simple. They're just intended to get students to think about the material before the lectures or to reinforce something from a lecture. So I expect three- or four-sentence answers, maybe longer for some of the assignments, but usually they're pretty brief. Well, sometimes I get a really long, detailed response. For example, I've got an assignment along the lines of: How did you use Web 2.0 technologies during COVID for your schoolwork? What technologies did you use? Which worked well and which were challenging? Well, if you put that into ChatGPT, you get this really nice numbered and bulleted list that's very extensive, and in a lot of ways it's quite a good answer. But it's way too long and way too detailed. So if I saw an answer like that, I'm pretty sure, not just pretty sure, I'm sure the student was using generative AI. And if you look at the answer that's in the newsletter, you can see it's very impersonal.

Craig [00:17:33]:
The true answers say things like "my school," or "I really liked" or "I hated this tool or that tool." Sometimes they'll crack on their teachers a little bit, or on their schools. The generative AI response is very cold and very factual. Generative AI also likes to use bigger words. The answer I put in the newsletter uses "socioeconomic," which, yeah, students maybe know that word, but how many of them are gonna use it? "Continuity of instruction." I've never had a student say, "I'm concerned about continuity of instruction." That kind of language is a pretty huge indicator that somebody's using generative AI. Of course, clever, industrious students can take what generative AI produces and put it in their own words.

Craig [00:18:22]:
In some cases, I'm kind of okay with that and may even encourage it, since they're still making connections and still processing information and learning. But in reality, the clever, industrious students aren't the ones most likely to use generative AI to simply spit out assignment answers. There's also a pretty distinct writing style that most generative AI tools use. It's kind of impersonal and mechanical, and very clean with respect to spelling, punctuation, and grammar. And then there are some words and phrases that generative AI just loves. One is "moreover." I don't think I've ever heard a student say "moreover." "Furthermore," "additionally," "in conclusion," and my personal favorite, "henceforth," may also be indicators that somebody's using generative AI, especially if those words get used repeatedly.

Craig [00:19:14]:
Some other generative AI favorites: "thus," "therefore," "consequently," "indeed," "notably," "nevertheless," "nonetheless." Generative AI also tends to use a consistent set of idioms and cliches: "at the end of the day," "needless to say," "all things considered" (which is a great, or at least was a great, radio show, but it's not in most students' vernacular, and neither is "vernacular"), "the fact of the matter is," and we could go on with the list. So if you see those kinds of things in student writing, it might be time to have a little chat with the student. Generative AI is also prone to mistakes: factual inaccuracies or inconsistencies, especially inconsistencies with class material. So one big tell for me is when a student's response doesn't follow what we discussed in class or what's in the textbook. For example, the systems development life cycle is kind of a big deal in information systems, but there are a ton of different flavors of it. They're all kind of the same thing, but they vary in how many phases they break it down into and in the particular names they use for each phase.

Craig [00:20:21]:
They're all correct, but they're all slightly different, and some people like one over another. Since I'm a coauthor on the textbook I use, I use the flavor that's in the textbook in my lectures. If a student's answer uses a different version, I'm pretty sure something's going on. Now, if they say, "You know what, I prefer this version over the one in the book," okay. But ours, I think, has six phases, and if they just give me one with five phases, or with four, yeah, something's going on. Generative AI answers also lack contextual nuance.

Craig [00:20:59]:
They're kind of generic, especially if somebody isn't really skilled at prompting generative AI. If you want it to give you a non-generic response, you've got to give it a lot of context, and most students aren't going to know that. But this is also a reason that contextualizing your assignments is a good way to adapt them, to, I won't say prevent, but discourage inappropriate generative AI use. AI responses will also lack personal insights or references to personal experiences. This is another good way to adapt your assignments: require students to relate the material to their personal experience. That's one of the ways I could very easily see that students were inappropriately using AI with the Web 2.0 assignment I mentioned earlier. You know, if they don't talk about their school or something they struggled with, they used AI.

Craig [00:21:49]:
So, at the end of the day (you like that? Couldn't resist), I think of generative AI detection as a collection of circumstantial evidence. No one indicator might be enough to believe that a student has used generative AI, but when the indicators start to pile up, yeah, they probably are. You have to decide how to respond. A gentle nudge, or a formal accusation and write-up? That's up to you. But to respond, you first have to detect AI use. Hopefully, this little blurb has helped. There are more details in the newsletter: AIGoesToCollege.com/newsletter.
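The circumstantial-evidence idea can also be sketched as code. Here's a hypothetical tell-word counter (again, my own illustration, not anything from the episode; the word list and the `tell_word_count` function are assumptions for demonstration):

```python
# Words that generative AI favors but students rarely use in informal answers.
TELL_WORDS = {
    "moreover", "furthermore", "additionally", "henceforth", "thus",
    "therefore", "consequently", "indeed", "notably", "nevertheless",
    "nonetheless",
}

def tell_word_count(text: str) -> int:
    """Count occurrences of tell words, ignoring case and trailing
    punctuation (so 'Moreover,' still counts)."""
    cleaned = [w.strip(".,;:!?") for w in text.lower().split()]
    return sum(cleaned.count(word) for word in TELL_WORDS)

response = ("Moreover, online tools ensured continuity of instruction. "
            "Furthermore, collaboration improved. Thus, outcomes were strong.")
print(tell_word_count(response))  # 3
```

As with any single tell, a nonzero count proves nothing on its own; the point is that several independent indicators piling up is what justifies a conversation with the student.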

Craig [00:22:27]:
I'd love to know what your favorite generative AI tells are. How do you detect generative AI use? You can let me know by emailing me at craig@ethicalaiuse.com. That's craig, c-r-a-i-g, at ethicalaiuse.com. Alright. That's it until next time. Talk to you soon.