S1E3 AI Literacy
===
[00:00:00]
[00:00:10] Hi, and welcome back to the AI for Educators Design Lab podcast. I'm Jennifer Maddrell.
[00:00:16] In our first episode, we looked at how AI is exposing vulnerabilities in our assignments.
[00:00:22] I used my own literature review as a running example of a task that had become almost fully AI-completable.
[00:00:27] Then, in our second episode, we stepped back from the assignment itself and examined the learning goals beneath it.
[00:00:34] If AI can handle so many of the tasks our goals were built around, we now have to ask: What do students need to know? What do they need to be able to do? And are our current aims still pointing them in the right direction?
[00:00:46] At the end of the second episode, I flagged something I want to pick up today.
[00:00:50] I had mentioned that metacognition and AI literacy are deeply connected.
[00:00:54] If students are going to work thoughtfully with AI, they need two things at once.
[00:00:59] Of [00:01:00] course, they need a critical understanding of how AI works, but they also need awareness of their own thinking while using it.
[00:01:05] And so this brings me to where I want to go today as we consider the topic of AI literacy.
[00:01:10] And I want to spend this whole episode on AI literacy because it's the piece that often gets skipped. Or it's reduced to a one-time lesson on AI tools to try, or maybe some prompting tips.
[00:01:21] But in this episode, I'm going to try to make the case that AI literacy needs to be a consistent design intention, one that spans the whole learning experience.
[00:01:29] And so I want to start this episode by telling you a little bit more about what I observed when I restructured my lit review assignment that I've used as an example in the prior episodes.
[00:01:38] These were graduate students, so I knew they were already using AI.
[00:01:42] But I could also tell that many of them didn't really understand what was happening or how it might make mistakes.
[00:01:47] For example, there were a lot of times when they would accept a source as relevant without pulling the article and reading it themselves,
[00:01:54] and sometimes they would just accept the AI summary at face value.
[00:01:58] And in the early days, many had no [00:02:00] idea that AI could produce a fabricated reference or that it might cite an article that was never written by the named authors
[00:02:07] or point to a journal that didn't exist.
[00:02:10] This was especially true if the reference was cited in a perfect APA format.
[00:02:14] So if it looked polished, that was enough verification.
[00:02:18] And what I was seeing wasn't carelessness. Instead, it was a literacy gap.
[00:02:23] Students saw the polish and viewed it as an authoritative source. They weren't skeptical of the output.
[00:02:28] And in most cases, I think they just weren't aware of the need to be skeptical.
[00:02:32] And they didn't have the skills to evaluate what they were getting from AI.
[00:02:35] And on a much larger scale, it reminds me of the conversations we often have as teachers about the need to find and use primary sources rather than rely on the filter of secondary sources.
[00:02:45] And because AI has become part of our students' lives so quickly, they've had little time to learn even the basics. Like, how does it work?
[00:02:53] Where do the outputs come from? What might it get wrong?
[00:02:58] Or even the broader ethical stakes for [00:03:00] themselves and all of us.
[00:03:01] And when you don't have those basic skills and knowledge, you're really not able to use AI critically.
[00:03:07] Instead, you're just consuming it and ceding your own thinking.
[00:03:10] So that's the challenge I want to explore today. And what makes this whole issue particularly tricky is that AI produces really confident, polished language. It sounds authoritative, credible, and ready for submission. Overall, it just feels complete.
[00:03:27] So it's understandable why they don't see that there's a problem.
[00:03:31] And again, without the knowledge and skills needed to evaluate what you're looking at, it's very easy to take what AI produces at face value.
[00:03:38] So the design question I want to explore today is this: how do we move students from being passive consumers of AI output to active, effective, and critical users and evaluators of it?
[00:03:51] And just as importantly, how do we design that shift within our learning experiences instead of just hoping it happens on its own?
[00:03:58] So the case I want to [00:04:00] make is that AI literacy has to be incorporated into the learning experience.
[00:04:04] It has to show up in the moments when students need to be evaluating and deciding and judging what they know and what they don't.
[00:04:12] So as I've done in prior episodes, I want to walk through five design considerations related to this challenge.
[00:04:18] And as always, these design considerations aren't exhaustive and they certainly aren't prescriptions. Instead, they're suggested entry points with relevant questions where you can bring your own professional judgment to examine this topic within your own context.
[00:04:32] So let's get into it with the first design question, which asks: what does AI literacy mean in your discipline, for your specific learners, and at their stage of development?
[00:04:41] And as we begin, I want to acknowledge that pinpointing a definition of AI literacy in general is itself a challenge.
[00:04:48] A quick search on AI literacy reveals an already large and growing roster of frameworks that are emerging from a range of contexts.
[00:04:55] Each framework emphasizes slightly different knowledge and skills and [00:05:00] competencies.
[00:05:00] As you start digging into this topic, you'll see publications from UNESCO, EDUCAUSE, Digital Promise, and even the US Department of Labor.
[00:05:08] When you compare and contrast them, most include some version of a conceptual understanding of how AI systems work.
[00:05:14] And most touch on the importance of evaluation of AI outputs for things like accuracy and completeness.
[00:05:20] And many layer in ethical considerations, such as what biases might be embedded in these systems, who benefits, and who might be harmed.
[00:05:28] Many extend this to considerations surrounding human agency, focusing on the capacity to make intentional, values-grounded choices about when and how to use AI, if at all.
[00:05:39] And then depending on the context, you'll see variations in practical applications that often vary by a specific discipline.
[00:05:45] Alongside these AI literacy frameworks, there's also a growing focus on AI fluency. And I want to spend a moment comparing the two because the distinction matters for how and what we design into our learning experiences.
[00:05:58] If AI literacy [00:06:00] focuses on a conceptual foundation, such as, again, how AI works, where it fails, and what's at stake, then AI fluency is about applied competence.
[00:06:09] Another way to look at it is that AI fluency is skillful collaboration with AI in practice.
[00:06:15] And both AI literacy and AI fluency skill sets are important.
[00:06:18] A student who understands the need to engage critically with AI but can't work with it effectively is underprepared. Likewise, a student who uses AI fluently but without critical judgment is at risk of accepting errors, or, worse for us as educators, outsourcing their own thinking.
[00:06:35] So one way to frame the relationship is that AI fluency gets the students into the conversation with AI, while AI literacy keeps them thinking while they're using it.
[00:06:44] As I was preparing this episode, I came across a recently published report by Anthropic, the maker of Claude.
[00:06:50] The report is called the AI Fluency Index, and it describes their study of how skilled AI use develops.
[00:06:57] They found the strongest predictor of fluent AI [00:07:00] use is iteration and refinement.
[00:07:02] This means staying in the conversation and building on earlier exchanges rather than just accepting the first response and moving on.
[00:07:10] Users who iterated were far more likely to question the AI's response and catch what the AI had missed.
[00:07:16] And here's the finding from the report I find most relevant for educators.
[00:07:20] When AI produced a polished-looking output, such as a document, a summary, or a finished artifact, the users were much less likely to fact-check or question the reasoning behind it. This means that when the work looked finished, it was treated by the user as finished.
[00:07:35] And so circling back to the point I've been trying to make, that gap between fluent use and critical evaluation is exactly where AI literacy comes in.
[00:07:44] And it's also where our design decisions matter the most because that gap doesn't close on its own.
[00:07:51] So the takeaway for design is that literacy and fluency reinforce each other, and getting specific about what each means for your learners is where the work [00:08:00] begins.
[00:08:00] Going back to my lit review class, the graduate students I work with need both. They need enough conceptual understanding of how AI models work to appreciate why things like hallucinations happen and why bias might creep in.
[00:08:13] But they also need the practical skills to evaluate AI-generated summaries, to verify citations, and to notice when a synthesis has lost nuance.
[00:08:22] And so designing from those specific AI literacy and fluency targets looks very different from the vague goal of merely learning about AI.
[00:08:31] So to wrap up this first design consideration, the relevant questions here are: First, have you defined what AI literacy and AI fluency mean for your learners? And have you made both AI literacy and fluency visible within your learning outcomes and within the learning experience you're designing?
[00:08:49] And now moving on to the second design consideration. I want to make a case for being sure you include what is too often treated as an optional add-on: the ethical and human implications of AI [00:09:00] use.
[00:09:01] These issues, related to things like accuracy and bias, are too often overlooked.
[00:09:06] However, it's really important to keep issues like this front and center, because AI systems have real consequences for real people.
[00:09:14] And when students see those implications, the evaluation work we're about to talk about has more meaning and urgency.
[00:09:20] So what does this layer of AI literacy involve?
[00:09:23] It includes things that we often gloss over, such as: Who built this system? What values and assumptions are embedded in it? Whose voices are represented in the training data? And who benefits, or who might be harmed, by the way it's used?
[00:09:38] Overall, this is the human context of AI.
[00:09:42] While these issues are often separated from the practical work of evaluation, it's this human focus that makes the consequences clear.
[00:09:49] One way I like to bring this into my own courses is by asking students to test an AI tool for embedded assumptions.
[00:09:56] This can be a really quick activity, but it generally has a significant [00:10:00] impact on their appreciation of the potential implications.
[00:10:03] For example, I might ask them to have the AI describe a professional in a given role in our field. Or generate a narrative about an event that happened in the past that students know about.
[00:10:13] And then I ask them to compare what comes back from the AI.
[00:10:17] Did the output contemplate multiple vantage points? Did the AI pick up nuance that the students already know exists?
[00:10:25] And what I find is that students almost always find something that suggests embedded underlying perspectives in the model's training data.
[00:10:33] Again, this is just one really quick and effective way to highlight for students that AI tends to reproduce the perspectives of the sources it was trained on, and then to present its output as if those sources were neutral, default, or complete.
[00:10:47] And again, making that point visible to students can be as simple as pausing to look for evidence of those embedded assumptions.
[00:10:53] Questions like this move the conversation from abstract ethics to real-world [00:11:00] analysis. Whose voices are reflected in what this model was trained on? What might be missing or distorted, and why?
[00:11:06] And then you can also look at the bigger questions with your students, such as: what decisions would we never want to rely on AI for at this point? And what makes those situations different?
[00:11:16] And perhaps more importantly, you can start working with your students to ask, how do you want to engage with these tools in a way that reflects your own values?
[00:11:24] These types of questions open the door to discussions about student agency within human-AI collaboration. So beyond critical evaluation of the output, it's also the capacity to make intentional, values-grounded choices about how, when, and for what purposes you use AI.
[00:11:41] So the point I'm trying to make here is it's a different conversation than teaching compliance with a policy. It's about helping students take personal ownership of their relationship with these new tools.
[00:11:52] And when this type of activity is embedded in a learning experience, it can be one of the most powerful AI literacy moments in the whole course.
[00:11:59] [00:12:00] It gives you, as the educator, the chance to connect the AI tool directly to the human decisions that shaped it.
[00:12:06] And it also gives critical evaluation work a highly relevant purpose.
[00:12:10] So the design question here is this: are you creating opportunities for students to examine AI, not just as a tool to evaluate, but rather as a system built by people with human assumptions embedded in it?
[00:12:23] And now moving on to our third design consideration. Once you have a sense of what AI literacy and fluency mean for your context, and your students have some grounding in why it matters, the third design question I'm proposing you ask is:
[00:12:36] where do these skills live in your learning experience?
[00:12:40] And unfortunately this is where many well-intentioned approaches tend to miss the mark.
[00:12:45] The instinct is often to address AI literacy as a standalone subject matter to be taught as a separate event. It might be a dedicated tech session or a unit on responsible use or a first week conversation about [00:13:00] expectations and policy.
[00:13:01] And while any of these might be a useful starting point, isolated coverage of these topics tends not to produce lasting critical thinking and application during AI use.
[00:13:13] Often, if students learn about AI in a standalone module, they go back to using AI as a productivity tool. That knowledge often doesn't transfer to their own application.
[00:13:24] And borrowing from what we've learned over decades about EdTech integration and digital literacy more broadly, what tends to work better is embedding AI literacy into those moments where students are already doing the things AI literacy requires.
[00:13:38] So this includes things like evaluating sources or considering evidence, or weighing perspectives and questioning assumptions.
[00:13:46] And when AI literacy is embedded during these integration points, the contextual application makes the learning meaningful in a way that standalone sessions can't.
[00:13:56] So framed as a design question, where in your course do students already [00:14:00] fact-check, evaluate credibility, or examine competing interpretations?
[00:14:05] Rather than adding a separate AI evaluation assignment, you could then look for those existing moments and then build AI literacy into the tasks that students are already doing.
[00:14:15] And this approach is what I ended up doing when I redesigned my lit review assignment.
[00:14:19] Rather than adding a separate AI literacy unit into the course, I built evaluation moments into the existing assignment structure.
[00:14:27] So when students use AI to help surface possible sources, they're also required to verify those sources exist.
[00:14:33] And I ask them to explain whether the AI summary accurately represents the article.
[00:14:39] And then instead of just turning in an annotated bibliography, I'm also asking them to document where the AI summaries fell short compared to what they found when they read the articles themselves.
[00:14:49] So in this case, the AI output becomes the subject of critique.
[00:14:53] And it's also worth noting that this kind of embedded iterative engagement with AI is where fluency skills also develop.
[00:14:59] [00:15:00] As the Anthropic findings I mentioned a moment ago suggest, the most fluent AI users are the ones who stay in the conversation. They push back and they refine. So when we then embed AI literacy into those authentic learning tasks, we're not just building critical judgment. We're creating the conditions where fluency and literacy can grow together.
[00:15:22] So the key design questions I want to leave you with here are: First, where in your existing course structure could AI become an object of analysis rather than just a tool?
[00:15:32] And tied to this, where are students already doing work that AI literacy builds on?
[00:15:37] And this brings us to our fourth design consideration. I want to now pivot and spend a bit more time reflecting on the importance of evaluative judgment, and consider how it extends beyond the knowledge and skills required for what I'm calling here functional proficiency.
[00:15:52] As I'm using it here, functional proficiency includes the necessary basic procedural knowledge to operate the tool.
[00:15:58] So this includes [00:16:00] things like the mechanics and conventions of getting a useful output.
[00:16:03] Or knowing how to write a clear prompt, or knowing what different tools can do and how to use AI to move through tasks efficiently.
[00:16:13] And clearly all these things are important.
[00:16:15] But the layer I want to drill down into is evaluative judgment.
[00:16:19] And this involves knowing what to do with the result once you have it. It means assessing whether the output is accurate, complete, and appropriately nuanced by asking a few questions:
[00:16:29] Whose perspective is reflected? What's missing? It also means calibrating how much scrutiny a given output warrants.
[00:16:37] When is it reasonable to accept and move on? And when do the stakes require you to push back and verify?
[00:16:43] And ultimately it means being satisfied and ready to stand behind the output and take responsibility for it as part of your work.
[00:16:52] But remember from the Anthropic findings earlier: polished outputs suppress critical evaluation. When AI produces [00:17:00] something that appears finished, users are far less likely to question it.
[00:17:04] And this is because the polished output gives the impression of a confident, well-structured result.
[00:17:10] So if we want students to evaluate AI output critically, we need to design experiences that make evaluation an explicit, required part of the learning task rather than something students just know in the abstract that might be a concern.
[00:17:24] Adding to the examples I've shared earlier, here's another simple approach I've built into my courses.
[00:17:29] I often share AI-generated summaries for articles we're already reading in class.
[00:17:34] I then ask students to compare those summaries to the actual articles using an annotation tool called Hypothesis.
[00:17:41] Within this annotation tool, they can crowdsource their evaluation.
[00:17:45] I ask them to highlight both the article and the summaries and try to tease out what the AI gets right, and where it makes mistakes or misses nuance.
[00:17:54] And what's really nice about this type of direct comparison is that it allows for
[00:17:57] guided application of [00:18:00] evaluation. It's not abstract. Instead, it's tied to real sources about real topics in the field, and the errors become visible and they're called out.
[00:18:09] It's also important to note that assessment design matters here. If your rubric rewards only the polish of the final product, students will use AI to produce that polish.
[00:18:21] That's just a rational response to the incentives you've already set up.
[00:18:25] But some simple tweaks can help. Your rubric can include evaluation checkpoints with explicit criteria that ask students to document where they applied their own judgment, or a verification check, a comparison of the AI's output to a primary source,
[00:18:43] or maybe an explanation of what they chose not to use and why.
[00:18:47] At this point, then, you're assessing your students' thinking; you're not just giving them credit for how well they turned in a polished product.
[00:18:54] So the design question here is this: are your assignments and assessments [00:19:00] building evaluative judgment alongside proficiency, or are you rewarding the polish of the result?
[00:19:05] And finally, the fifth design consideration I want to end with today really speaks to the how of AI integration. Specifically, I want to talk about the practice of making AI use visible and reflective.
[00:19:18] As I've already touched on, when students use AI without being asked to reflect on it, the use tends to be invisible.
[00:19:26] They get an output. They use what seems to respond to the learning task and then they move on.
[00:19:32] The aim becomes completing the task, not pausing to examine what the AI did well or didn't do well.
[00:19:39] There's no reflection on whether their own thinking was sharpened or, worse, bypassed entirely. There's no documentation of what happened in their human-AI collaboration and why.
[00:19:50] And so this kind of invisible AI use stands in the way of building AI literacy.
[00:19:56] One approach to make AI use visible is through a simple documentation [00:20:00] requirement. This doesn't have to be elaborate or a burden for the student or the teacher. It could be as straightforward as asking students to append a short reflection to their submission. You might ask them to note which tools they used,
[00:20:14] what they asked the AI for, what output they kept and what they discarded, and why they made those choices.
[00:20:23] I've tried this approach in a few different ways and with varying success, but I found it usually works best when I can follow up in a one-on-one conversation.
[00:20:32] But if that's not feasible, any type of pause to consider their AI use and to take stock of their own decisions opens up a learning reflection that otherwise wouldn't have happened.
[00:20:42] Educator modeling is equally powerful here. When we make our own AI use visible, we show students what thoughtful, critical engagement looks like in practice.
[00:20:52] Overall, I'm quite transparent with my students about when and how I use AI in my own work. I often identify which tools [00:21:00] I used
[00:21:01] or what I did to verify before I relied on it. Or what I needed to rewrite because the AI didn't get it quite right.
[00:21:09] And also when I decided not to use AI at all and why.
[00:21:13] I think that type of transparency sends an important message that critical AI use isn't just about following the university's policies or the rules for a given assignment.
[00:21:23] Instead, it's about setting up a practice of thinking reflectively about their human-AI collaboration. And when students see that transparency and reflection modeled, it can really help you frame your ongoing conversations about their AI use and how to use it thoughtfully.
[00:21:39] So the final design questions here are these: what structures do you have in place that make AI use visible and reflective?
[00:21:47] And what are you modeling in your own visible practice about what a thoughtful relationship with these tools looks like?
[00:21:54] So as I wrap things up, my primary argument in this episode is that designing for AI literacy takes [00:22:00] upfront work.
[00:22:00] It means carefully examining your course and asking where these moments of critique and reflection can fit within your learning experience, and then building them in.
[00:22:10] To help you think through these design considerations in your own context, I've created another free companion Design Brief for this episode. It includes a worksheet I'm calling an AI Literacy Integration Map.
[00:22:22] I designed it to help you identify two or three natural touchpoints in an existing course where AI literacy could be embedded.
[00:22:29] And then to get specific about what students would need to practice at each of those moments.
[00:22:34] You can find it, along with all of the Design Briefs in this podcast series, at nextpathdesign.com/designbriefs.
[00:22:43] When you sign up for the free Design Briefs, you get access to the full library, which includes the brief for this episode and all future briefs as they're released in this podcast series.
[00:22:52] And looking ahead to episode four, we're going to take up another question a lot of educators are wrestling with.
[00:22:58] As AI has more of a presence in our [00:23:00] learning experiences, what is the role of human connection and interaction?
[00:23:05] What does it mean to preserve human to human connections in environments that are increasingly shaped by AI?
[00:23:10] So keep an eye out for that in future episodes, which I release twice a month on Apple Podcasts, Spotify, YouTube, and on our website at nextpathdesign.com/podcast.
[00:23:23] And as always, thank you so much for thinking through these design considerations with me.
[00:23:28]