S1E1 Academic Integrity - 3/5/26, 2:11 PM
===
[00:00:00]
My name is Jennifer Maddrell and I'm really glad you're here and I'm equally excited to get started on this new podcast series focused on teaching and learning in the age of AI.
I want to start this first episode with a personal story that might sound familiar.
For years I assigned a fairly standard literature review in one of my college courses. Students were asked to find at least 10 credible and relevant sources, synthesize them, and then identify key themes. It was a staple assignment in the course.
I'd refined it over time, tweaked the rubric and clarified the instructions. I felt really good about it and students gave me feedback that they really appreciated how much they were able to dig into the theory and the research to pull out evidence-based practices.
But a couple of semesters ago, I decided to test something.
I prompted a couple of widely available [00:01:00] generative AI tools with my assignment to see what would happen. Unfortunately, what came back looked very much like what I'd been asking students to produce, but in seconds.
I received a well-organized annotated bibliography with thematic groupings, and it even identified some gaps in the literature. Was it perfect? Not by a long shot, but did it produce a document that would allow students to leapfrog past many of the learning aims that I had baked into the original assignment?
Absolutely.
That was a really hard realization for me, and I'm guessing most teachers have had a similar moment recently.
Maybe you're reviewing a submission and you have that sinking feeling. The writing is maybe too polished, the ideas are too shallow, or maybe the voice doesn't sound quite like the student you know. And then you find yourself thinking, did they actually write this or was it AI?
This challenge of academic integrity in the age of AI is one of the biggest pain points I hear from educators right now, and that's exactly why we're digging into it today.
Overall, my aim with [00:02:00] this podcast series is to reach educators, instructional designers, and learning leaders who want to explore the most pressing challenges of AI integration.
We're starting with academic integrity because it's where so many of us are feeling the most immediate pressure.
If you've spent any time talking with groups of educators about this, you'll find perspectives that tend to fall along two ends of a continuum.
On one end, some are taking a defensive posture. They're seeking ways to detect and police AI use, often trying to catch the technology with more technology and then penalizing its use as cheating.
They might require handwritten in-class essays, scan every submission with AI detection software, or add strict penalty clauses for using AI.
On the other end are those who view AI as a new given. They see it as a learning experience design challenge that we as educators need to address directly.
They're not asking how to block AI, but instead how to redesign the learning experience.
They're asking what do authentic assignments and [00:03:00] assessments look like now? How do I help students develop good judgment about when and how to use AI?
It's the second perspective where I want to focus, because it's where we as educators have agency. We can use our professional judgment to design effective learning experiences and make informed decisions in this new reality.
For many educators, this is a shift. It's a shift from policing to designing, and it feels like turning a page.
It's where we're able to move from being rule enforcers to being designers.
Also, it's really important to point out here that detection and banning have their own challenges. For one, detection tools still can't reliably distinguish between human- and AI-generated content. Research on these detection tools suggests that their accuracy is in the 70 to 80 percent range.
So what does this mean in practice? If 20 to 30 percent of submissions are misclassified, then in a class of about 30 students, you're looking at potentially six to nine false positives.
And unfortunately, non-native speakers and students with certain learning disabilities get flagged at even higher rates.
And then [00:04:00] beyond accuracy, a detection orientation really creates an adversarial dynamic between the students and the teacher.
Students look for ways to outsmart the detectors, and teachers spend their time investigating submissions instead of supporting students' learning. Unfortunately, that damages the trust that's essential to good teaching.
And unfortunately, outright banning also runs counter to preparing students for their futures.
Like it or not, AI is here. Students are using it in their daily lives, and they'll be expected to use it thoughtfully beyond the classroom, whether at school or in the workplace.
So, if we treat AI as something to hide and avoid, we're missing an opportunity to help them develop the judgment and literacy they actually need.
So instead of focusing on how to catch students using AI, I want to explore a different question: how do we design learning experiences that maintain academic integrity while also acknowledging AI's existence?
But before we get into that, I want to be clear that my aim isn't to solve this [00:05:00] challenge today, or even by the end of this podcast series.
That would be an impossible goal.
Instead, I'd rather frame AI integration as what we in instructional design call an ill-structured design problem.
An ill-structured design problem is one where there is no single right answer. It's where the constraints and the goals are often unclear or conflicting, and where solutions depend heavily on context.
In other words, this is the kind of problem that requires your professional judgment.
Also, the solutions are heavily dependent on your learners and the learning environment.
From my perspective, that's exactly what we're dealing with regarding AI integration in teaching and learning.
For example, an approach that makes sense for a graduate seminar in philosophy might be completely wrong for a high school geometry course, the same way that strategies for facilitating a small discussion-based class don't translate well to a large lecture.
So my overall argument is that there's no universal AI integration solution. Instead, every educator must make decisions [00:06:00] that are right for their own learners, their discipline, and their context.
So, with that framing in mind, I want to walk through several design considerations I feel are relevant as educators tackle the challenge of supporting academic integrity in the age of AI.
But I also want to say this isn't an exhaustive list. Instead, think of these questions as entry points, places where you can apply your professional judgment.
I'm posing these questions to help you make informed design decisions in your own work. And keep in mind, your answers might be very different for different courses, different assignments, or even different points in the semester.
As you listen to me go through these questions, you might be thinking about a university course or maybe a high school classroom, maybe even a professional learning program or even workplace training.
So while the specifics might look different, I feel the design considerations and questions are universal.
So please grab a pen and jot notes as we go.
And as we get underway, keep in mind that your beliefs about AI matter. Whether you see AI [00:07:00] primarily as a threat to be controlled or, conversely, as a tool students need to learn, your beliefs shape how you interpret every question we're about to explore.
So with that, let's get into the design questions.
The first design question is the big one. Is your assignment AI vulnerable?
In other words, is your assignment measuring human learning or AI capability?
And as I shared in my example, the best way to find out is to try it yourself.
Can the assignment be completed almost entirely with AI?
If so, what does that reveal about what the assignment currently teaches and measures? If AI can produce a passing submission, it's most definitely AI vulnerable.
And if so, you have decisions to make about the learning experience. It could also be a sign that the assignment is measuring lower-order skills, such as grammar, formatting, and summarization, rather than the higher-order learning you actually intended.
And if that is the case, has your assignment become more busy work than meaningful learning?
This issue will be the subject of our [00:08:00] next podcast episode. But if AI can reasonably complete the entire assignment, you've identified a design problem, and that's a really important starting point.
The second question I'd like you to consider is, are you assessing the product or the process of learning?
The answer to this question shapes a lot about what comes next in terms of learning experience redesign.
If the student's grades depend solely on a final polished submission, in other words, a neatly packaged product, it's very difficult in today's world to be assured that what you're measuring is student learning versus the AI's output.
But if the student's grade is spread across multiple checkpoints, such as drafts, revisions, and reflections, then you gain more opportunities to observe and support their learning process. AI might assist with parts of the process, but with many checkpoints involved, you have more opportunities to make the student's thinking visible.
So what might this look like in practice with the assignments you're finding to be AI vulnerable?
Are there ways you can introduce checkpoints? Are there moments when students can apply their [00:09:00] ideas to their own experiences or to their own data that they've collected or maybe to their own context?
I'm thinking about things like progressive drafts that show how ideas evolved, brief reflections where students explain their decision making, annotated bibliographies where the synthesis must be in their own words, or one-on-one conversations with you where the students walk you through their process.
And take a look at your rubric. Are you rewarding the evolution of ideas and iteration or only the polish of the final delivery? Because if we highly reward polish, we're incentivizing students to use AI as a tool to produce the most polished output.
And I also want to acknowledge that these design choices don't eliminate AI use. However, they can make it much harder for a student to just hand over their learning efforts to AI.
Also, these changes shift the focus from "Did the student produce a polished product that meets the final requirement?" to "Did the student show their thinking along the way?"
Moving on to the third question, what [00:10:00] skills does this assignment require that AI cannot easily replicate? In other words, what parts require judgment, context, or reasoning that AI currently struggles with?
So in thinking about this question, there are many things you can think about. For example, does the assignment require specific knowledge from your class or your local community, or this week's news, or does it ask for a simple, correct answer that AI can easily provide?
Or does it ask students to navigate competing perspectives and justify a trade-off decision?
Maybe it's ethical reasoning. Maybe it's synthesizing conflicting perspectives. Maybe it's applying theory to messy real world situations.
Or maybe it asks the student to act as the critic.
In this case, it might be evaluating AI's own output for bias or accuracy.
We'll dig deeper into these issues in future episodes. For example, the next episode will be about rethinking learning goals, and a later episode will look at the AI skills our students need.
Let's move on now to our fourth [00:11:00] design question.
How clear are my expectations for AI use? In other words,
will students interpret my expectations the way I've intended?
And the big takeaway here is that ambiguity about AI expectations only creates anxiety, both for the students and for you. If students don't know what's acceptable, they'll make assumptions, and unfortunately, those assumptions might not match yours.
So as you're thinking about your learning experiences, step back and think about what you've communicated to your students about acceptable AI use.
Do students understand what AI use looks like in your class and at different stages and for different aims? Think about whether you've shown them the differences between using AI for brainstorming versus using it to produce the final draft.
Also, have you explained the difference between using AI to help find sources versus handing the work of selecting sources over to AI? The latter leaves all the responsibility and judgment about relevance and credibility to the AI, not the student.
And what I [00:12:00] hope I'm emphasizing here is that acceptable AI use isn't simply allowing it or not allowing it.
Instead, acceptable AI use exists on a spectrum.
For example, for a given class or a given assignment, AI might be permissible for brainstorming a paper topic. However, you might decide that the critical thinking aspects of the assignment, such as making an argument or recommendations, have to be drafted by the student.
Or in a different context with different learners, you might take an experimental approach.
In this case, you'd place a heavy emphasis on teaching the students how to critically evaluate the AI output.
And again, the important point I'm trying to make here is if YOU aren't crystal clear about where those lines are, your students will draw their own lines.
And unfortunately, you might not like where they draw their lines.
Toward the end of this podcast series, we'll be looking more closely at both classroom and institutional AI use policies.
But for now, the takeaway is this: if your expectations are unclear, your students will fill in the gaps for you.
And for the last question [00:13:00] I want to leave you with, I'm stepping back from the assignment mechanics to ask: what does your approach to academic integrity signal to your students about your overall classroom culture? Whether we intend it or not, the policies we put in place and the way we talk about AI communicate something about learning, trust, and what we believe about our students' capabilities.
So step back and ask yourself some additional questions about this. Do your policies signal that AI is something to hide and avoid, or something to examine and evaluate openly?
Also, are you spending time explaining the reasoning behind your expectations, or do the rules feel arbitrary?
Does your approach invite students to make informed decisions about their own learning, or does it put them in a cat-and-mouse dynamic with you?
And finally, there's a real practical trade-off to consider: how much of your time and energy do you want to spend enforcing compliance versus facilitating your students' learning?
And I'm not suggesting there's a single right balance, but your approach has downstream implications, and those implications are worth examining.
So [00:14:00] as I wrap up this first episode, I want to bring this back to the literature review example I shared earlier.
I'd like to highlight what these questions look like in my practice.
When I realized the assignment was AI vulnerable, I had choices to make.
And these choices aligned with the design questions I shared here.
And as I just mentioned, I had to think about my beliefs about AI within the context of these students and this course.
In my case, I was working with graduate students,
so I didn't want to ban AI as I do see it as a valuable tool for their learning and for their future work. Also, I had no interest in detection software. I have a lot of concerns about the accuracy and I don't want to set up a dynamic in which my role is to catch students cheating.
So instead, I looked to redesign several elements of my course.
First, I held a few class discussions about acceptable AI use.
My goal was for them to have a better understanding of how they could use AI as a tool to support their learning, and I restructured the assignment, so their thinking became a lot more visible at more [00:15:00] stages.
Here's a quick summary of what I changed. First, students now discuss their purpose statements in small online discussion groups.
These discussion groups allow me to check in on the depth of those conversations as they're developing their focus.
Then, after they've talked with their peers, they submit their purpose statement along with three articles they found credible and relevant, and they include a brief reflection on why those articles speak to their purpose statement.
So far, these are all relatively low-stakes assignments.
However, they're making their learning more visible, and it's giving me a baseline to check that they have a solid starting point.
It also helps me ensure that there's a through line from their aims to their subsequent assignments.
If I see a break in that through line, it could be a signal that they're relying on AI. After these lower-stakes submissions, they build their annotated bibliography.
This is where they're starting to identify themes and contradictions and gaps.
And here is where I lean into AI rather than away from it. I encourage these graduate students to use AI [00:16:00] tools such as Consensus to find additional sources or to expand their synthesis.
However, I'm also asking them to critique the AI summaries to look for nuance and bias.
So the way I'm encouraging them to use AI is to have its output become the subject of analysis, not a shortcut.
Again, if I see the focus slipping from their purpose statement, it could be a sign that they're letting the AI output guide their focus.
Then, before the students submit an outline, I'm asking them to meet with me one-on-one. These can be really short calls, maybe 10 to 15 minutes.
My main objective is to ask them to walk me through their process. What AI tools did they use? Which sources are they finding to be most credible? Did they verify the citations from the AI's output?
And again, where did the AI summaries miss the nuance when they actually read the articles?
If the student is ceding their assignment to AI, this quick conversation will help to tease that out.
But importantly, for these graduate students, I did not ban AI, and I didn't surveil the students' [00:17:00] submissions with detection software. Instead, I restructured the assignments so their thinking had to be more visible, at multiple points, in their own words and in conversations with me.
And trust me, I know this sounds like a lot more work, but here's what really surprised me.
When I added a few more check-ins, it actually made my job easier. What I'm finding is that I'm catching problems earlier and I'm helping to scaffold their progress rather than trying to backtrack from a final submission that went sideways.
Also, I didn't have to second guess the students, and probably most important, I started feeling like I was teaching again.
So here's where I want to leave this today.
AI integration isn't easy. It takes time. It takes experimentation, and it takes your willingness to reimagine your assignments. These are assignments that may have worked fine before AI, but they're just not working anymore.
But if it's any consolation, you aren't alone in this. All the educators I speak with are navigating these same challenges right along with you.
To help you contemplate [00:18:00] these design considerations within your own context and with your own learners, I've created a free Design Brief companion to accompany the episodes in this podcast series.
For this episode, the Design Brief builds on the design questions I shared, and I've also included a simple worksheet to help you audit one of your own assignments for AI vulnerability.
To find the Design Briefs and explore all of our current and upcoming offerings, please head to our website at nextpathdesign.com/join.
And finally, looking ahead to our next episode, we're going to explore learning goals in the age of AI.
As I touched on today, many of our existing learning goals emphasize content reproduction that AI can now easily perform.
So the big question we'll dig into is what essential capabilities should students be able to demonstrate when AI can handle so much of what we typically ask them to do?
Also, keep an eye out for new episodes, which are going to be released about twice a month on Apple Podcasts, Spotify, YouTube, and nextpathdesign.com/podcast.
And most [00:19:00] importantly, thank you so much for taking the time to consider these issues with me.