AI Goes to College, Episode 33: Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI

Higher education is drowning in accessibility deadlines, grappling with what 81,000 AI interviews reveal about how people actually use these tools, and watching the academic publishing system creak under new pressures. In this episode, Craig and Rob dig into all three, with practical advice, a few uncomfortable truths, and their usual mix of optimism and healthy skepticism.
The Accessibility Crunch Is Here (and AI Can Help)
The episode opens with a problem that's top of mind for faculty everywhere: the April 24 federal deadline requiring public-facing digital content to meet WCAG accessibility guidelines. Universities have been scrambling, and many of the contracted tools designed to help have been, as Craig diplomatically puts it, hit and miss.
Craig shares a concrete example from his own workflow. He took three image-heavy slide decks from his Principles of Information Systems course and handed them to Claude Cowork with a simple instruction: add alt text for all the images. Within about 30 minutes, the job was done. The accuracy? Roughly 75 to 80 percent. A handful of images needed corrections, but instead of writing alt text for 40 or 50 images from scratch, he only had to fix six or eight. Rob tried something similar with Microsoft Copilot on a keynote presentation he gave at the SAIS conference in Asheville; two images, 30 seconds, done.
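For listeners who want to script this themselves rather than use Cowork or Copilot, here is a minimal sketch of the same idea using the Anthropic Python SDK. This is our illustration, not Craig's actual workflow; the model name and file path are placeholders to adjust.

```python
# A minimal sketch of scripting alt-text generation, assuming the
# Anthropic Python SDK. Illustrative only; not Craig's Cowork workflow.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def generate_alt_text(image_path: str) -> str:
    """Ask Claude for one-sentence, WCAG-appropriate alt text for an image."""
    with open(image_path, "rb") as f:
        image_data = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model you have access to
        max_tokens=150,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": image_data}},
                {"type": "text",
                 "text": "Write concise alt text (one sentence) for this image."},
            ],
        }],
    )
    return message.content[0].text

print(generate_alt_text("slide_images/old_calculator.png"))  # hypothetical file
```

As with Craig's results, treat the output as a draft and expect to hand-correct a fifth or so of it.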
Rob makes the important point that accessibility isn't just a PowerPoint problem. It extends to whiteboard files, videos, and essentially everything faculty communicate digitally. The burden is real, and it lands on faculty who are already overwhelmed by the changes AI is bringing to their professional lives. Craig adds a note of personal sensitivity here; his wife has a profound hearing disability, which makes these issues more than abstract compliance for him.
The larger takeaway? When you hit one of these friction points in your work, try AI. It won't always solve the problem, but it often will, and the time savings can be substantial.
What 81,000 Interviews Tell Us About How People Actually Use AI
Link: https://www.anthropic.com/features/81k-interviews
Craig's article: https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic
The conversation shifts to Anthropic's large-scale qualitative study, where Claude was used to conduct and analyze 81,000 interviews about how people use AI tools. Rob, who has spent considerable time doing qualitative research the traditional way (36 interview transcripts with families, a labor-intensive process), finds the scale almost hard to believe. Craig wrote a separate article about this study for the AI Goes to College newsletter.
The phrase that catches both hosts' attention is one from the report: "the light and the shade are tangled together." It captures the tension between excitement about AI's possibilities and anxiety about what those possibilities mean for how people work, learn, and think. Craig connects this to a concept from technology studies: this is not technological determinism. The outcomes aren't dictated by the tools themselves. They emerge from the sociotechnical space where human choices and technological capabilities intersect.
Rob observes that most current AI use cases still amount to doing what we've always done, just faster. The real transformation will come when people start imagining entirely new approaches (he draws an analogy to cloud computing, which started as a backup solution and eventually reshaped how people interact with technology in ways nobody initially anticipated).
One quote from the Anthropic study lands hard. A freelance software engineer in Pakistan says: "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." Craig points out that if a working professional thinks this way, the implications for students who may not yet appreciate the long-term value of deep learning are sobering. Rob agrees but pushes back slightly: people who lean too far into this mindset will eventually hit a wall where they lack the critical thinking skills to know when or why AI has gotten something wrong.
The hosts converge on what's becoming a running theme for the podcast: higher education's central task is helping students understand the long-term value of cognitive engagement, because without that understanding, the default will always be to let AI handle it.
Academics Need to Wake Up: 10 Theses on a Shifting Landscape
Link: https://substack.com/home/post/p-189705626
The second major discussion centers on Alexander Kustoff's Substack article, "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." Rob sees it as a useful prompt for conversations the research community needs to have. Craig appreciates the ambition but pushes back on some of the claims.
Take thesis number one: AI can already do social science research better than most professors. Craig's reaction is nuanced. The claim is probably technically true if "most" is read literally, since many professors don't publish much (Rob notes the median number of publications for business school professors may be as low as one). But the implication that AI can replace skilled researchers? Not yet. Craig estimates that a knowledgeable researcher can use AI to cut research production time by about three-quarters, but that knowledge is the key ingredient; without research skill, you'll just produce publishable garbage faster.
Rob raises something interesting: colleagues who are brilliant thinkers but never thrived in research because they didn't enjoy writing may now have a path to contribute. AI could genuinely democratize parts of the research process. Craig extends this point to data analysis; tools like Cowork can run Python and R analyses without expensive specialized software, which matters enormously for under-resourced institutions and researchers in developing countries.
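To make that concrete, here is a minimal sketch, with a hypothetical dataset and variable names, of the kind of regression that once required a paid stats package and now takes a few lines of free Python:

```python
# A hedged sketch of the kind of analysis Craig describes: an OLS
# regression run with free, open-source libraries instead of a paid
# stats package. The CSV file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

# Model a behavioral-intention outcome on two common predictors.
model = smf.ols("intent_to_use ~ perceived_usefulness + ease_of_use", data=df).fit()
print(model.summary())  # coefficients, standard errors, R-squared, etc.
```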
The conversation turns to the strain AI is putting on the peer review system. More submissions (many of them better written thanks to AI) are flooding journals, but finding reviewers was already difficult. Craig, speaking from his role as a journal editor, argues that well-trained AI could do a better job reviewing than roughly half of current human reviewers. Rob agrees but emphasizes that journal leaders need to come together and define norms for what's acceptable. Right now, the rules are either nonexistent or unrealistically restrictive ("just don't use AI for anything"), which creates the same kind of confusion faculty have imposed on students with inconsistent classroom policies.
One of the most provocative moments comes when Craig reads a quote from the Kustoff article: "I don't envision a research assistant role in my workflow anymore. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don't have a clean answer for how to replace it." Both hosts take this seriously. Craig argues that senior scholars will need to accept some suboptimal results in the short term to continue mentoring the next generation. Rob suggests the apprenticeship model isn't dying; it's transforming. The mentorship shifts from teaching students how to do tasks to teaching them how to direct AI tools and critically evaluate what those tools produce.
Craig closes with a characteristically honest observation: senior scholars get stuck in their ways of thinking, and one of the real values of working with early-career doctoral students is the occasional moment when their unformed, messy thinking reveals a perspective that nobody in the room had considered. That's worth protecting.
AI-Generated Lesson Plans and the Bloom's Taxonomy Problem
The final segment covers a paper by four researchers from UMass Amherst, "Civic Education in the Age of AI: Should We Trust AI-Generated Lesson Plans?" The study found that roughly 90 percent of AI-generated lesson plans hit only the lower levels of Bloom's taxonomy (remembering, understanding) rather than the higher-order thinking skills like analyzing, evaluating, and creating.
Craig's first reaction was that the prompts used in the study were terrible. But he acknowledges the researchers had a reason: they were mimicking how most teachers would actually prompt. And that's the real finding. The problem isn't that AI can't produce sophisticated lesson plans; the problem is that untrained users produce unsophisticated prompts, and the output reflects the input. Rob agrees and broadens the point: if even a fraction of teachers are prompting this way, that's affecting a lot of students.
Craig shares a personal anecdote from his one year as a high school teacher. He diligently wrote lesson plans; a veteran teacher (whom he describes as one of the best he'd ever seen) simply copied his plans to satisfy an administrative checkbox. The experienced teacher didn't need detailed plans because she could read the room and adapt in real time. Some lesson planning, Craig suggests, falls into a compliance category where the quality of the plan matters less than the quality of the teaching.
But the bigger message is one both hosts keep returning to: we have to teach people how to use these tools well. Craig suspects that even a slightly more complex prompt ("address this level of Bloom's taxonomy and make sure you include demographic diversity") would produce dramatically better lesson plans.
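As a hedged illustration (the wording below is ours, not the study's or Craig's exact phrasing), the gap between a naive prompt and a better one can be that small:

```python
# An illustrative sketch of the "slightly more complex prompt" Craig
# suggests. The topic and wording are hypothetical, not from the study.
basic_prompt = "Create a lesson plan on the First Amendment."

better_prompt = (
    "Create a 50-minute lesson plan on the First Amendment that targets the "
    "'analyzing' and 'evaluating' levels of Bloom's taxonomy: students should "
    "compare competing arguments and justify a position, not just recall facts. "
    "Include discussion questions, an assessment rubric, and examples that "
    "reflect demographic diversity."
)
print(better_prompt)
```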
Rob makes a final observation that resonates beyond lesson planning. People who spend a lot of time thinking about AI (like Rob and Craig) can easily forget that most people don't. Understanding what AI use looks like for someone without deep expertise, and then helping to lift them up, is the real work ahead.
Craig's response? Maybe the strategy should be seeding the field with AI evangelists, a small number of engaged opinion leaders who help others one conversation at a time, rather than trying to train everyone through top-down institutional programs. That's how innovations actually spread.
A Meta-Moment: Who Wrote This, Really?
In a brief but revealing aside, Craig reads the postscripts at the end of Kustoff's article: the post was entirely generated and posted to Substack by an agentic AI workflow (Claude Code with Opus 4.6) trained on the author's own "artisanal, handcrafted" social media posts and thoughts. Craig, who has his own "write like Craig" skill in Claude, asks Rob to guess how much such a skill gets right. Rob says 75 percent. Craig confirms. The question lingers: if AI can write in your voice with 75 percent accuracy and post autonomously, who's really the author? Craig leaves that for the listener to decide.
Key Takeaways
AI is a practical solution for the accessibility crunch. With the April 24 WCAG deadline looming, tools like Claude Cowork and Microsoft Copilot can generate alt text for images at roughly 75 to 80 percent accuracy, dramatically reducing the manual burden on faculty.
"The light and the shade are tangled together." Anthropic's 81,000-interview study reinforces that AI's benefits and risks aren't separable. Higher education's job is to help students navigate both, not pretend one side doesn't exist.
AI adoption follows a predictable pattern. First we use new technology to do old things faster. The real transformation comes when we start imagining fundamentally new approaches. Higher ed is still mostly in phase one.
The prompt is the bottleneck, not the tool. AI-generated lesson plans that hit only lower-order Bloom's taxonomy levels aren't evidence that AI can't do better. They're evidence that untrained users produce unsophisticated prompts.
Academic publishing is under real strain. More submissions, better surface-level writing, reviewer shortages, and undefined norms for AI use are all converging. Journal leaders need to establish clear, workable standards.
The apprenticeship model is transforming, not dying. Mentoring doctoral students shifts from teaching them to do tasks toward teaching them to direct AI tools and critically evaluate the output. Senior scholars need to stay open to messy, unexpected thinking from early-career researchers.
Seed the field with opinion leaders. Rather than top-down institutional training programs, Craig argues for cultivating AI evangelists who spread knowledge one conversation at a time; that's how innovations actually diffuse.
Links
Anthropic's 81,000 interviews: https://www.anthropic.com/features/81k-interviews
Craig's article: https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic
Academics need to wake up on AI: https://substack.com/home/post/p-189705626
AI-generated lesson plans: https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/
Companies/Products mentioned in this episode:
- Claude Cowork
- Microsoft Copilot
- Anthropic
- University of Central Oklahoma
- UMass Amherst
Mentioned in this episode:
AI Goes to College Newsletter
00:41 - Introduction to AI in Higher Education
01:05 - The Impact of AI on Accessibility in Education
11:17 - The Impact of Technology on Qualitative Research
18:54 - The Role of AI in Higher Education
24:33 - The Role of AI in Academic Research
31:41 - Engaging with AI in Academic Research
38:21 - The Impact of AI on Lesson Planning
43:52 - The Role of AI in Lesson Planning
Craig: Welcome to another episode of AI Goes to College, the podcast that helps higher ed professionals figure out what in the world is going on with all the changes wrought by generative AI. I'm joined once again by my friend, colleague, and co-host, Dr. Robert E. Crossler from Washington State University. Rob, how are things there?
Rob: Things are good today, Craig. Staying busy. And the semester is over. Seems like every semester flies faster than the last.
Craig: I know my sabbatical flew faster than anything I've ever been involved with before. But enough of my whining. So I wanted to start off with a really quick example, something that you may be dealing with. There's a deadline, is it April 27, for higher ed institutions to make their public-facing content accessible through the WCAG guidelines? Do I have that right?
Rob: I think so. April 24th is the date.
Craig: Sorry. Universities have contracted with some tools that are supposed to help with this that apparently are hit and miss, but I first became aware of this when I was at the University of Central Oklahoma giving a workshop and somebody asked, can AI help with this? And I said, absolutely it can. And then my advice was to take an image, put it into your AI of choice, and say, can you write the alt text for me? More recently, I am enamored with Claude Cowork. It is just an amazing, basically agentic AI system. It's a wrapper around Claude Code. So this morning I took three slide decks from my junior-level Principles of Information Systems class, the one that uses our textbook, and they're all image heavy. One of them talks about hardware, and it's got pictures of old calculators, all this kind of stuff. And so I just gave it to Cowork and said, add alt text for all the images in these three slide decks. I don't know how long it took because I was doing other things, but within 30 minutes or so, when I went back and checked, it was done. And I would say the accuracy is probably 75, 80%. It got a couple of things wrong, and on a couple of them I wasn't really happy with the alt text, but instead of having 30 or 40 or 50 images to write, I had like six or eight or 10 to fix, whatever.
Rob: Craig, let's back up a second and talk about the accessibility part of this, because I think it's important to focus on, and I don't think it's just for PowerPoints. Basically everything we communicate to our students in the classroom has to be accessible to people who have disabilities, so that it can be read to them or so that they're otherwise able to consume what it is that we do in the classroom. And this is federal law; we have to do this. And it's a huge burden, because we have so much material that is digital that may not have that baked into it. So it's PowerPoint; if you've got whiteboard files, all sorts of things, even videos, they have to have this sort of alt text in place. And it creates a huge, huge burden for faculty, who are already going crazy with all the changes that AI is causing to their lives, to also have to be doing this. And so I think you make a great point. This is a great way to do it. Cowork does it beautifully; I saw what you did with that. I use Microsoft Copilot; I've got the M365 version of it. And I opened a slide deck that I used for the keynote I gave on AI last week in Asheville at the SAIS conference, and I gave it the exact same prompt you did, to add the alt text for the images in the slide deck. It only had two images, so not nearly as many as what you talked about. But in 30 seconds, it had the alt text that I could copy and paste into that alt text field and just be done with it.
Craig: So that was one little difference, and there may be a way to do this in Copilot; I'm not a Copilot user. I didn't have to add the alt text. It gave me new PowerPoint files that had all the alt text already embedded in them, which was pretty awesome. So if you're struggling with this as a faculty member, even if you have to do the copy and paste, it's still a lot faster than writing all the stuff yourself. But Cowork can do this, and there could be other systems that can as well. The cynical part of me says 75 or 80% is good enough, because the assessment tools are going to see that there's text in there and move on to the next one. I don't think they're really going to judge the accuracy of the text. Now, that being said, there's an ethical issue involved in this as well. I have a wife who's got a profound hearing disability, and so I'm a little more sensitive to some of these things than maybe a lot of people are, but there are the pragmatics involved too.
Rob: Yeah, yeah. No, I think this is going to be a huge time saver, and I would encourage you, for whatever you're working on, to start plugging into these AI tools to really help streamline this process. Because this is another example of a friction that oftentimes feels unnecessary, where you're not really adding a whole lot of human value by doing it. But it is a necessary thing for humanity, for people who are paying for what we're giving them, and if we can do it in a way that is efficient and makes the best use of people's time, so much the better. So I think it's an obvious, great use case for these tools.
Craig: And the mildly infuriating thing is we had a long time to do this. I don't remember when this law came out, but it was several years ago. It's been a while.
Rob: So, Craig, professors are just like everybody else. We procrastinate until we have to do things.
Craig: Yeah, I've been meaning to do something about that, but haven't gotten around to it yet. So sorry. I think the larger point is, if you're facing, as you said, Rob, one of these friction points, try AI, because a lot of times it'll help. Sometimes it won't, but a lot of times it will help.
Rob: Craig, there's a great article that came out from Anthropic, and one thing I love about Anthropic is they try to be fairly transparent with use cases, how people are using their AI tool and how it's changing things. I'm always skeptical when the person who creates the software is also telling me all the great ways to use the software, because they have a vested interest in that. But they did an interview study using Claude to look at 81,000 people's responses qualitatively for how they used the tool. So did you happen to read that article?
Craig: I wrote an article on that article. There will be a link in the show notes. I'm pretty sure that I participated in the interviews, and either one of my quotes made it in or somebody said almost exactly the same thing I did.
Rob: That means you might be a little bit famous, Craig.
Craig: I could be. I could be. I'm infamous. I'm more than famous. But Hazel will like that reference, by the way. 81,000 interviews, and you actually interacted with the chatbot, and it seemed kind of like you were interacting with a person on the other end.
Rob: So, Craig, let me give some idea of the scale of this. I've had an opportunity to do some qualitative research. We interviewed 12 families three times, so we had 36 of these interview transcripts. And the amount of time it takes to look at just 36 interviews, doing things manually, the way we have always done them, has been mind-bogglingly long. It's just been a very labor-intensive, thought-intensive process. And so when I read that they interviewed 81,000 people and then analyzed and came up with these themes that they reported on, which we'll talk about here in a minute, it just blows my mind that they were able to do this in the short amount of time they did. And as an academic researcher, it makes me really wonder what's possible now from a qualitative perspective, where it used to be so labor-intensive that it would be nearly impossible to get this level of saturation in the data.
Craig: I'm glad you brought this up, because there are two pieces to it. One piece is conducting the interviews. So if you're interviewing 36 families, that's probably a minimum of 36 hours to do those interviews. It could be more or less, but it's probably somewhere in that range. But then there's the analysis part as well. And so by cutting down both of those, you really get rid of one of the knocks on qualitative research, which is that it's not really broadly generalizable because you can't do tens of thousands of interviews. Now maybe you can. I mean, I can envision where it wouldn't be all that hard to create a chatbot that would go through and do hundreds and hundreds of these interviews without much trouble.
Rob: And when I was a doctoral student, I had people tell me, don't do qualitative research as part of your dissertation, because the journey is long and unknown, and it may be years and years until you finish by the time you've done the interviews and analyzed all the data and done all these sorts of things. I look at this and, using these tools, I see it being much easier to accomplish that in the condensed timeframe in which you're trying to complete a doctoral dissertation.
Craig: Yeah, absolutely. And by the way, I want to mention that CloudResearch has a relatively new product called Engage that will do this sort of thing. Matter of fact, I'm getting ready to test it, so I'll report back when I do. Real quickly, before we get into the substance of these interviews: I was playing around with a different Anthropic release where they put out data based on uses. I don't remember exactly how they did it, but they had like a thousand different ways people were using AI, actual chat sessions. I put that in Cowork and had a pretty credible analysis done without much trouble at all. And some of it's not going to be perfect, but it went even down to analyzing the metaphors people used. So it's pretty amazing. So let's move away from the scholarship piece of this and look at the content of the report that Anthropic put out. I'll put a link in the show notes, and I really encourage everyone to check it out, because the report is very interactive. Make sure you go in and check out the quotes as well; I thought that some of those quotes were very revealing. So Rob, what'd you think?
Rob: Yeah, I think it was really interesting. The most important thing that I saw in there, and I'd never heard this phraseology before, is "the light and the shade are tangled together," which you wrote about in that article. It's that whole tension between the excitement and the thrill of what's possible, as well as the fear and the anxiety of what that means for how we do things and what our roles are. The tension we're going to be fighting is that the benefits also have risks.
Craig: Did you feel personally attacked by that? I did a little bit, yeah. It's like, oh, I think I've seen that before.
Rob: I love the way they frame that. And I think that as we think about teaching and our students, this is what really needs to be wrestled with: that benefit, that light of what AI can do, while addressing and pressing into the value proposition of what we bring in the midst of the fear of what its presence means.
Craig: Yeah. And we have a fancy term for what this points out: that this is not technological determinism.
Rob: What does that mean, Craig?
Craig: Technological determinism means that technology determines what happens. We deal more in the sociotechnical realm, which means that it's the coming together of the social aspects, whether it's people or institutions or whatever, and the technological capabilities that really determines the outcomes. And I think what Anthropic found in this study is exactly that. To put it a little more concisely: it's not about the tool, it's about how the tool gets used. That's what this really illustrates clearly.
Rob: Yeah. And I think in the midst of this there are going to be some really interesting opportunities for different ways to think about how these tools are used. A lot of what I see right now is very much pressing into "how do I duplicate what I have always been doing?" Because that's the known, that's what people understand, and they're amazed by what it can do. We're going to enter a season where we start to get past that sort of thinking and come up with completely different ways of looking at the use of technology that were never before part of our understanding. We've seen the same thing happen with cloud computing. The idea that you could automatically back up stuff out to the cloud was mind-blowing when people had external hard drives or whatever that they would back things up to manually. It took a while for that change to occur. But the cloud has changed a lot of how we do things that we all take for granted now, with Apple Pay and all these sorts of different devices. And so I think, as we start to break out of the way we've always thought, we're going to see different ways that we can accomplish the tasks we have in life.
Craig: This is a natural progression with these truly disruptive technologies. We first try to do what we've always done more quickly or more effectively, and then we ask how this really changes things in a fundamental way. But I think, to me, one of the big messages of this light-and-shade aspect of the report is that for education in general, but certainly higher ed, our job is to help students figure out how to use it in ways that enhance their knowledge, skills, and abilities, including their ability to think, rather than in ways that lead to cognitive atrophy and other problems. We cannot depend on them to do this on their own. We have to help them figure out how to do this. I think that's our big job here. And as we've pointed out repeatedly, faculty are going to have a really hard time doing that if they don't have experience with these tools.
Rob: Yeah, I think this is the challenge for higher education: to ensure that cognitive atrophy, or whatever term we're going to use for people basically setting aside their ability to think and kind of losing that ability, doesn't happen. There's going to have to be a purposeful, different way of looking into things, and also of assessing when it is okay to offload that cognitive work because the value of engaging with it isn't there. And I don't know that anybody knows the answer to that yet. What we're going to see is a lot of trial and error and experimental approaches to ensuring we do this and do this well. But I think the number one question we have to ask as we press into this is: how do we ensure that learning occurs in the presence of AI?
Craig: Yeah. And the answer to that is going to change as these tools continue to evolve. But I want to read a quote. This quote is from a freelance software engineer in Pakistan. Not from a student. So this is from a practicing professional. "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." So if you've got a working professional thinking that, what do you think of students, who are maybe still not fully clued in to how important learning some of the stuff we're trying to teach them is? How are they going to react, especially if they don't see value in what we're asking them to do? "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." That's a scary quote for those of us in higher ed.
Rob: It is a scary quote, but here's where I think the people who lean into that too far are going to run into trouble: there's going to come a point where that's not going to work for them, and they're not going to have the abilities or the critical thinking skills necessary to know when or why or how to question it. And that's where I really think education is going to be important. At what point is the machine training the machine on what the machine's created, because nobody is adding critical thinking into the midst of those things? And so I think that's what we have to teach students how to do: to move beyond what is possible by the AI alone and bring in the socio part of the sociotechnical, like you mentioned before. How does the human add value to this?
Craig: Yeah, it's not going to be easy, because the natural inclination for students, and probably all of us, is to use it in ways that are simple and straightforward and save us time doing things that we don't think bring any value. We're starting to see some convergence on what I think higher ed really needs to be doing here; we've talked about that a lot. We've got to help students understand the long-term value of what it is that we're asking them to do. Otherwise they're just going to default to letting AI do it. And that's not an unreasonable way to think about it from their perspective. All right, Rob, anything else on this one?
Rob: No, I would just encourage people to read the Substack piece that you're going to share, but also to go check out the article if they're interested, because it is a really interesting read.
Craig: Yeah, it really is. And there's a lot in the article. The Substack just scratches the surface; there's really a lot in the article. So again, there will be a link in the show notes, so make sure you take a look at that report. All right, so what's next, Rob? There's another Substack article, from Alexander Kustoff, that came out a couple of weeks ago, and it's entitled "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." So, Rob, what did you think of this article?
Rob: In some ways, I see this as a great prompt for something that researchers in higher education need to be talking about. We're disjointed right now on what AI should be allowed to do in research and what it shouldn't be able to do. And this article spells out a lot of the different things that we really need to be focused on talking about. It questions a lot of what social science research does and whether there's value added from what people are doing beyond what the machine can do for them. It is scary, as someone whose career has been built around doing this kind of research.
Craig: Yeah, it makes me glad I can retire, although only kind of, because I think there's an opportunity to push our research further, to maybe make it more relevant and impactful, which is something most academic fields, including ours, have struggled with for ages: the whole rigor-versus-relevance thing. We've never adequately solved it. It's always been talked about, but I haven't seen anything change much. But let's run through some of these 10 theses. That's a really weird word.
Rob: Well, you might be like Martin Luther.
Craig: Are you gonna nail this somewhere? How do you do that digitally? Huh?
Rob: You post it on Substack.
Craig: Yeah, put it on Substack. That's right. So, all right, here's the first one: AI can already do social science research better than most professors. I think that's wrong. Well, all right, it's probably right the way it's written, but the way it's likely to be interpreted is probably wrong. If I read this as better than most professors, it's probably true. Better than good researchers, even better than decent researchers? It's not there yet. Now, that doesn't mean it doesn't have a lot of promise. It doesn't mean that it can't cut down a lot of the labor involved in research. But on its own, it is not going to produce something publishable in a first-quartile journal all by itself.
Rob: And so I think one thing for listeners who aren't plugged into the academic space and what publishing is: people who are publishing in the top quartile of journals are probably in the top 10% of professors. There are a lot of professors that don't publish a lot. And that's where I think "most" actually probably makes sense, because I think I read at one point that the median number of articles published by professors in business schools was one, and there are a lot of people who have published zero. So to say "better than most" is not actually saying a whole lot. The argument I would like to see is, well, what percentile of professors are we getting to? And then, if we're allowing tools to publish at those levels, in the second, third, fourth quartile, wherever those might be publishable, what kind of noise does that add that doesn't add a lot of value?
Craig: I would qualify that a little bit: what sort of noise is it currently adding? Because it's already adding noise; I mean, there's a lot of AI research getting published right now. Yeah, I'm totally on board with what you just said. I would also point out that if you're not skilled in research, you're not going to be very skilled in getting AI to do research for you. You might be able to one-shot it, or have a 10- or 12-sentence prompt that pumps out some garbage that you can probably get published somewhere, but not in a top-quality, or even a decent-quality, journal. But my recent experience tells me that somebody who knows what they're doing can use these tools to cut down the time it takes to produce solid scholarship by, I don't know, three-quarters. That seems to be my go-to figure today: three-quarters.
Rob: Well, statistics are all made up, Craig. At least 76% of them.
Craig: That's right. 76, 75 to 80%. Yeah.
Rob: Well, what I think is actually interesting about this: I've had colleagues who have not thrived in doing research, a lot of times because they really didn't like writing. And I actually see AI helping to bring people who are brilliant thinkers, but who just didn't enjoy that process of writing, into the conversation, bringing some of their brilliance out, because they had those abilities and for whatever reason had those writing barriers in place. So in some ways I think it could be great, but I don't think it's going to replace us now.
Craig: I think it does democratize research to some extent. So, for example, something I'm starting to explore is using AI more for analysis. We pay lots of money for software to do data analysis because I don't want to code it. You can do this in Python. You can do it in R. Everything that we pay all this money to do, you can do with those other tools, at least most of it. And with something like Cowork, and there could be other tools, I can do that without all the specialized software. That's pretty amazing, and it makes some of these analytic techniques accessible to people who couldn't afford them before. We're lucky; we're in reasonably well-resourced schools. I get a nice account for paying for those kinds of things, but that's not the case in a lot of teaching schools, and especially in a lot of developing countries. The other thing is that the writing, we've talked about this before, the writing is so much better. I just finished reviewing a paper for a pretty good journal, kind of an A-minus-level journal, that was written by a team from India. And you know, they're trained in English, and they speak English probably better than most Americans do, but it's a very particular kind of English, a blend of Indian English and British English. I saw almost none of that in this paper, almost none in the papers that are coming from people who aren't native English speakers. Oh God, they're so much better in terms of the language.
Rob: Right. And here's where I see an interesting challenge, though, Craig, and it's the strain on the system. Even before AI, getting reviewers to agree to review papers was hard. It would take a long time to go through the process. And now, as we increase the number of papers that enter the process, are we going to be able to handle them in a way that isn't just AI doing all the reviews of papers that AI had written, or the editors-in-chief of journals having to desk-reject so many papers? What does that look like? Is our system of peer review going to be able to handle an increased volume of submissions?
Craig: Yeah, that's a huge problem. In fact, I have my SIGMIS chair's message coming out in the next edition of The DATA BASE for Advances in Information Systems, and it's on exactly that problem. It's so hard to get reviewers now, because beyond doing one or two a year, there's absolutely no individual benefit from doing it. But I think AI may be the answer. You sounded a little skeptical there, but a well-trained AI will probably do a better job of reviewing than, I'm not going to say 75%, but probably half the human reviewers, and it can get it done in no time at all. I don't ever want to see a world where it's only AI, but I don't see any problem with either human-guided AI reviews or AI as a reviewer. Right?
Rob: And what this is going to require, Craig, is a coming together of the leaders of academic journals to say what is allowed and allowable. Because right now, in many places, it's an empty space as to what the rules are, or the rules are just "don't use AI to do anything." What are the norms of what is acceptable? And is it the same from journal to journal? It's almost the same world we've put our undergraduate students in by having different policies in different classes. As a researcher or as a reviewer, it feels like the same sort of world, where it's ill-defined, so you're not sure what you're supposed to do or what you're allowed to do.
Craig: Yeah, and if any of you journal editors are telling me I can't use AI at all in my reviews, don't send them to me. I'm not going to put them out on anything that's public, the manuscripts obviously. But I just put my review into my pro version of ChatGPT and said, can you find any holes in this? Is there anything that's unclear? Why not use it that way? I think the whole journal system may collapse under the weight of this, and that's one of the points that Alexander makes. So yeah, you brought up something in the pre-call about the double standard when it comes to hallucinating. Why don't you elucidate us on hallucinating?
Rob: Yeah, I think this goes back to something I realized as a doctoral student, something I saw 20 years ago. I found a paper that had a citation in it, and I'm like, oh, that's exactly what I want to say. And so I went and found the paper that was cited, and I read it, and from the abstract it looked like, okay, this paper says what I think it's going to say. And then I read the paper, and it didn't exactly say what the person who wrote the paper I found originally claimed it had said. And so I think as humans we've hallucinated: we've said, well, this paper is close enough to support my argument, and we've used it, and at some level we've been okay with that. And now the argument is that when an AI tool does that for us, it's somehow more wrong than when the human made a decision to misappropriate what a paper they cited had said. So I think this is an interesting double standard. Who does it more often, and who does it worse, the human or the AI? And then what does that mean?
Craig: It is interesting. We see this over and over again with AI. I hear or read about people being aggravated beyond belief dealing with AI chatbots for customer service, but I think we've all been pretty aggravated by humans when it comes to customer service too. So yeah, there's a double standard. AI is going to get better, so what we need to do is figure out how to reduce those hallucinations and leverage AI so we do less of that ourselves. One of the points that he makes in this article is that junior scholars face the biggest disruption and opportunity. I don't know whether it's going to be harder to publish, but it's going to be harder to publish things that get noticed. The landscape is going to shift in terms of what's allowed and what's not allowed, and there are going to be people behaving opportunistically to take advantage of flaws in the system. So there's a lot to be worried about, but it's a huge opportunity. What do you think about that?
Rob: Yeah, I think there is huge opportunity. And being bold and brave in a world where you're still learning and you don't have your set ways is, I think, a risk-reward sort of thing. If it's done well, you might see some great opportunities. What I think is an interesting challenge, and I'm wrestling with this myself, about ready to teach a doctoral seminar, is what level of AI engagement I want my students using, and where, and how. Because I want to ensure that they don't have the cognitive debt of not learning certain things that they should learn, while at the same time I want them to recognize the benefits and the advantages of what AI provides. And it's not cleanly defined exactly what that looks like.
Craig: That is the question, isn't it? Not just for doctoral students, but all the way down. Yeah, it is a tough time. It's an exciting time as well. But I think what I'm going to emphasize with my doctoral students is that you need to learn how to leverage these tools. So it might take some trial and error to figure out what the right balance is. But if you don't know how to use something like Codex or Claude Code or GitHub Copilot to help you carry out your research and to do things like data cleaning and some of those sorts of more mundane tasks, you're going to be behind.
Rob: Yeah, here's what I'm thinking about doing, Craig, and I think this works in an undergrad class as well: moving away from emphasizing the writing of the thesis in the class, the seminar paper, whatever that is, and leaning into their ability to orally defend and describe what it is that they've written and created. That way, as they learn to use tools and they have the tools, it emphasizes the importance of understanding and knowing what you're putting your name on and what you are saying, because you have that expertise to be able to talk about it and to answer those questions. So for those places where it's, okay, that used to take me 12 hours of unnecessary time to clean my data and get it all nice, and now I got it done in an hour using these tools? Great. Yay. Because that really added no value to the research you were doing. But if you're citing papers and you're saying this person said this, and then this person said this, therefore I'm saying this: well, talk me through that process. Explain that thinking to me, of why you can confidently say A, then C.
Craig: Rob's getting all Socratic, but I think that's a really good way to approach this: let's lean into humanity. Talk me through this. Help me understand. Defend this point. I like that a lot. I think this is a rare opportunity. But, and I kind of hate to use the word disruptive again, like a lot of disruptive events, the waters are going to be choppy before they smooth out. So I think we're going to have to be willing to accept some trial and error here, which is another one of our running themes. There is one big danger here, and maybe the most concerning thing in the article, and this was his point number six: "I don't envision a research assistant role in my workflow anymore." Now, he does clearly state it's invaluable to have mentees and co-authors and that sort of thing. But now I'm going to quote: "What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don't have a clean answer for how to replace it." So I think as senior scholars, those of our listeners who are senior scholars, we're going to have to accept some suboptimal results from all of this to help train up the next generation. It's always been the case that it's easier to do the research yourself than it is to do it with doctoral students, especially in their first couple of years. So, you know, take on a little bit of the load.
Rob: Well, I would question that a little bit. Does the apprenticeship change? What I envision is, if we have AI tools that help us do the things, my apprenticeship is teaching them how to use the tools: casting the vision of what I want them to go off and create, and then they go use the tools and bring back what's created, and we have that conversation. So it's not that they went off and wrote; they were guided on how to use tools properly, which is a different kind of apprenticeship. And as scholars who've been doing this for a long time, we're going to have the ability to critically evaluate what those tools bring back. And in that demonstration of how we're critically evaluating it is where that apprenticeship work, that training and raising up of future scholars, happens.
Craig: Absolutely. One other point on this before we move on. I think one of the things that those of us who are more senior often forget is that we get stuck in our ways of thinking. And one of the real values, especially of earlier-career doctoral students, is that they haven't been pushed into our way of thinking yet. So a lot of times their thoughts are messy, and they may not really help us much. But every once in a while it's like, huh, that's a different way to think about this. And so we need to be open to listening to them when they have those moments, and not just automatically think we know better, even though we do know better. All right. Anything else on this one, Rob?
Rob: No, I think we've covered that pretty well.
Craig: All right. Again, I'll put a link in the show notes about this. But before we move on, did you notice the PS and PPS in this article? I just now noticed them.
Rob: I didn't. Can you share them with me, Craig?
Craig: Yes. "P.S. This post was entirely generated and posted on Substack by agentic AI using my new Claude Code Opus 4.6 workflow. Make of that what you will. P.P.S. That is, entirely generated based on my artisanal, handcrafted human social media posts and thoughts on the topic." So who wrote it, really? You tell me. I have a "write like Craig" skill in Claude. And what percentage does it get right, Rob?
Rob: 75.
Craig: 75%.
Rob: Pretty close. I'm gonna go with what the kids say these days, Craig: six, seven.
Craig: All right, all right. On that note, we should move on. Rob, you sent this article to me after I sent the article we just discussed to you. Yeah.
Rob: And so this article was about high schools, talking about lesson plans and how high school teachers are doing lesson plans. And they compared lesson plans that had been created by AI tools against Bloom's taxonomy and another way of assessing what level of critical thinking we're asking students to do. And it found that the vast majority of AI-generated lesson plans were hitting the very lower-order thinking. So, like, 90% of what they found was at that lower-level thinking, not promoting the higher-level thinking in Bloom's taxonomy, which would be analyzing, evaluating, and creating. And so this article really critiqued whether, at least in the high schools, the lesson plans being created were being done to the level of engagement that students should be having in a world where AI could just answer those lower-order thinking activities. So it was pushing back on the idea that this approach is a panacea for lesson planning that will make every educator's life better.
Craig: Yeah, it's an interesting article. It's by four authors from UMass Amherst, and the title is "Civic Education in the Age of AI: Should We Trust AI-Generated Lesson Plans?" Again, there'll be a link in the show notes. One of the things that struck me as I went through the paper is that their prompts were universally terrible. At first it was like, what is this garbage? These prompts are terrible. But then, when I read the article a little more carefully, they had a reason. I mean, did they ask some teachers how they would prompt it? They had some way that they came up with these prompts, and they're probably pretty similar to what most teachers would come up with unless they have fairly significant expertise in AI. So it's kind of "create a lesson plan for me on this thing," which is not a great way to prompt. Did you pick up on that?
Rob: I did, and it didn't bother me a ton that that's what they did. Because assume that 50% of teachers prompt that way and are creating lesson plans that way; that's affecting a lot of students. I don't even know the number of how many people prompt in a very basic way. But if that's what's producing the lesson plans being used to educate people, then we've got work to do as a society in helping the people who should be providing classroom opportunities with the skills necessary to do this well and to do this better. Because with good prompting, I think they can overcome this, right? So I don't think this is an issue with using AI to create lesson plans. I think it's an issue with using AI to create lesson plans by untrained people.
Craig: Yeah, I think that's the big message here. There's also a skeptical piece of me. So, I was a high school teacher for exactly one year, and then realized I didn't want to be a high school teacher. And I very diligently did my lesson plans. We put these in a big, giant notebook, and then a vice principal would come by and check those notebooks to make sure you had a lesson planned for every day of the academic year. I was a new teacher, so I did all of this very faithfully, very detailed. And then one of the best teachers I've ever had, I'm not going to name her just in case, she was an amazing teacher, came to me and said, hey, Craig, do you have your lesson plans for this class? I said, sure, of course I do. She said, can I borrow them and copy them? She had been teaching this stuff for so long, she didn't need any lesson plans. She just needed a basic sketch of what she was going to do, and then she could wing it. She could figure out where the class was and change accordingly, and do all those kinds of things that great teachers do, which I couldn't do as a novice teacher. But she literally just copied my lesson plans, and the assistant principal came by, checked a box, and moved on. And so I think some lesson planning falls into that category, and then, you know, who cares? But it goes back to: we've got to teach people how to use these tools. They're just like anything else. You give me a hammer and a saw, we've talked about this before, and I might produce something functional at best, but it's going to be ugly and will probably fall apart at some point. You, on the other hand, can produce something amazing with them. AI is really no different when it comes to that.
Rob: Yeah, I think this is a great example of critically evaluating what you get and learning how to get something that you're happy with, through the practice and the repetition of becoming better. My guess is these lesson plans are based on some biases that are baked into the data. One thing the article talked about was a lack of diversity in the examples being used. AI tools are only as good as, and as knowledgeable as, what they're trained on. And my guess is they're trained on lesson plans that have been around since the 60s, when diversity wasn't something that people worried about baking into their examples and the different things they do in the classroom.
Craig: Yeah, I think that's absolutely true, and with skill, you can force that. Yep, this was a good paper. I was initially pretty critical, but then, when I realized the researchers intentionally used these simple, broad prompts to mimic what most teachers would actually do, it's like, okay, it's a fair point. But what worries me is that somebody's going to just read the headline and say that AI is terrible at this. Because I suspect that with even a slightly more complex prompt, you could even say "address this level of Bloom's taxonomy and make sure you include demographic diversity," you're going to get a lot better lesson plans, for sure.
Rob: And one of the things that I think is important in what you said there, Craig, is that as people who spend a lot of time thinking about this, and I put you and me in that bucket, it's real easy for us to think that other people will think about this like we do. We have to be intentional about removing that hat and saying: for the person who doesn't have the time or the interest or the whatever to dive as deep as we have, what does the use of AI tools look like for them? And how do we understand that and help to lift them up as society moves in this direction?
Craig: Yeah, preach it, brother. I agree totally. I'm starting to wonder if maybe we should focus on creating AI evangelists at this point: have a small number of people who really are engaged and willing to push on this, and then they start to help others, because that's really how this kind of thing gets done. We can have institutional adoption and diffusion programs all we want, but what works best is Rob coming to Craig and saying, did you know that AI, or whatever, will do this cool thing? And then I tell somebody else, and they tell somebody else. That's really how these sorts of things spread. And I think maybe we seed the field with those opinion leaders rather than trying to train everybody. Training everybody on anything is just exhausting and rarely works well.
Rob: No, 100%. I think that's a great note for us to wrap up on, Craig.
Craig: That's code for "Rob's got another meeting he's got to get to."
Rob: It sucks to have a calendar full of meetings, but it is what it is.
Craig: No, but I think that is a good stopping point. So, all right, with that: thank you very much, and join us next time on AI Goes to College. Thank you.















