S1E2 Learning Goals
===
[00:00:00]
Hi. Welcome back to the AI for Educators Design Lab podcast. I'm Jennifer Maddrell.
In our first episode, we explored academic integrity and the ways AI is forcing us to think differently about what our assignments actually measure and how we make student thinking visible.
Now in this episode, I want to explore the deeper questions underneath those integrity concerns.
If AI can now complete so many of our assignments, what are students supposed to be learning?
What should we be aiming for and measuring in our class projects or in our assignments?
However, I want to start out by saying that this is hardly a new issue.
I've talked with educators for years about how much of education was built on a model of information scarcity.
When information was harder to access, there was real value in being able to find it, retain it, and reproduce it.
But now AI is intensifying that long-building [00:01:00] tension by making it much harder to ignore that learning is more than simply knowing information.
AI isn't creating the issue; it's highlighting it, because AI can now perform many of the tasks our current learning goals were built around.
So it's not just an issue of our assignments being AI-vulnerable, which is what we talked about in our last episode.
It's also revealing issues in the learning goals that underpin those assignments.
The point I want to explore today is that AI is not only changing what students can produce. It's forcing us to rethink which knowledge and capabilities are now most essential in a world shaped by human-AI collaboration and by the abundant access to information that AI gives us.
And when you get right down to it, many legacy learning goals were written for a time when education placed a lot more emphasis on information transmission.
That is, when recall, reproduction, and summary are treated as the dominant indicators of learning.
And to be 100% clear, I'm not suggesting that foundational knowledge suddenly no longer matters now that we have [00:02:00] AI.
Instead, I think the challenge is distinguishing between information that can be easily accessed using AI and the foundational knowledge students still need.
So I like to frame it as AI bringing about an unintended audit of our educational aims and practices.
It's revealing not just which assignments are vulnerable, but which learning goals may no longer be sufficient.
To make this idea more concrete, let's return to the literature review assignment I mentioned in the last episode. Unfortunately, for me, it's a very good example of how this learning goals problem is showing up in my own practice.
As I talked about in the last episode, this lit review assignment is clearly an AI-vulnerable task.
Students can now use AI to condense weeks' worth of effort into a few minutes of prompting.
And while it's easy to see that I need to redesign the assignment to focus more on the process than the product, I also see a whole set of older learning goals that need to be reexamined.
During the original version of the course, I spent a lot of time in the early weeks of the semester teaching [00:03:00] students how to build a vetted list of credible, relevant scholarly sources.
We'd work on how to access quality journals, and then we'd sift through databases to find relevant articles, all of this within the walled garden of our university's library.
Up until a few years ago, that time spent on information gathering made a lot of sense.
Finding good information was a slow and tedious process that few students were used to. It took a lot of effort on their part and mine.
It required knowledge and skill to find sources, and then to cut through the overwhelming options they would find.
And so the knowledge and skill to search and find information was a big part of what the learning experience was meant to develop.
But fast forward to now. AI can generate a huge list of sources with summaries in just a matter of seconds. Of course, not perfectly. Sometimes the sources are hallucinated. Sometimes the summaries flatten nuance or miss what really matters.
But my larger point is the task of finding and accessing information has completely [00:04:00] changed.
If students want to, they can now cut out all of that early-stage information-gathering work.
But all of this raises a really big question that many educators are struggling with. Should students still need to perform learning tasks that AI can readily do?
And this is the question I keep coming back to. If AI is able to perform these learning tasks we had baked into our former assignments, what is now considered essential learning in the age of AI?
In the case of my lit review class, is it still the act of manually finding and gathering sources? Or is it something more, something like the judgment that comes after that? For example, the ability to verify whether a source even exists, decide whether it's credible, or notice whether a summary is misleading. Maybe to compare conflicting perspectives and then determine what actually matters for the argument you're trying to build.
So hopefully this has given you a better idea of the pain point I want to explore today.
Many of us are teaching or designing learning experiences [00:05:00] with goals that were written well before generative AI.
So within this episode, I'm again going to walk through five design considerations that I think can help us audit whether our learning goals are still relevant, whether they're sufficient, and whether they're pointing us toward the kinds of knowledge and capability that matter most now.
As I did in the first episode, I want to frame this issue as a set of design questions. These aren't prescriptive or universal solutions, but instead, considerations to help you make decisions that fit your learners, your discipline, and your context.
However, I really hope it goes without saying that this is not an exhaustive list.
Instead, think of these questions as entry points. These are the places where you can add your own professional judgment.
If you listened to the last episode, you'll remember I was asking questions at the assignment level.
But here I want to focus on the backbone of the assignment. In this case, the learning goals that the assignments are built to serve.
So as I go through these design considerations, have one of your own learning experiences in mind. Think about those [00:06:00] learning goals and then let's walk through the questions together.
For our first design consideration: do your learning goals prioritize content coverage or cognitive capability?
As I mentioned before, many of our educational aims were built at a time when information was harder to access. It was slower to verify, and it was a lot more effort to gather.
As a result, a good question to consider is the extent to which your learning goals are focusing on things like recall, reproduction, or maybe summary of the content.
Is the goal there because the information you're asking them to recall or reproduce relates to truly foundational knowledge? Or is the goal there because it was inherited, maybe from an earlier time when information was scarce and hard to come by?
And I really want to be clear that the presence of these types of objectives, like recall or reproduction, isn't automatically a problem. But it is worth asking how these goals are serving learners today, in a time of information abundance.
If AI can now tackle one of your content related [00:07:00] learning tasks, then you really do need to step back and ask whether and how that goal is building the learner's capabilities.
Let me circle back to my lit review example.
As I mentioned, in the earlier versions of that assignment, much of the time was spent finding journals, identifying credible sources, and comparing articles for relevance.
Again, at that time, it made a lot of sense. Finding good information was slower, more manual, more effort.
But when AI can now generate a respectable looking list of sources in a matter of seconds, it really does force me to ask whether those earlier goals were foundational, or instead whether they were shaped by a different information environment.
So this first design consideration is really about noticing. It's about noticing which of your goals may still be carrying along assumptions from a different era of information access.
And likewise, what does that really reveal about what you've been asking your students to do?
So now moving on to our second design consideration. The next question [00:08:00] focuses on the impact of AI.
Does a student's use of AI support the learning goal or does it undermine it?
And I think it's really important to say that the answer will not always be the same.
There are many examples of where AI could support learning. It might help students generate possibilities or surface examples or compare perspectives or maybe just get unstuck.
But there are certainly other times when using AI might get in the way of the very thinking your goal aims to develop.
In this case, when a learning goal focuses on tasks that AI can easily perform, there is a risk of what some are terming "cognitive offloading". This is where learners skip productive struggle.
So this design consideration asks you to look carefully at the relationship between the goal and the AI tool. For a given learning goal, where does AI use strengthen the learning? And conversely, where does AI let the student bypass the struggle, the judgment, or sense making that actually matters?
So in my lit review example, AI clearly can support some parts of the [00:09:00] process. It could help surface possible sources or narrow them down based on relevance and just overall accelerate their early search process.
But if the goals are for students to learn to evaluate credibility, determine relevance, or assess what a source is actually saying versus what the AI output says, then relying on AI too heavily will undermine those goals rather than support them.
And I think this is especially important, because one of the risks of AI use is assuming that more efficient performance means better learning. Sometimes efficiency might help students move on to more advanced learning goals, which is great. However, sometimes efficiency allows students to bypass them altogether, and so our job now is to get a clearer understanding of the difference.
So moving on to our third design consideration. Next, I want to connect learning goals more directly to authentic practice.
And this is because one of the most useful questions we can ask is not simply can AI do this? [00:10:00] Instead, it's this: what does thoughtful performance in this field of study look like now that AI is available?
In other words, if someone were doing competent work in the related discipline or the profession, or within this context today, what kinds of knowledge and judgment and skill would still be required?
So this might be creative and analytical thinking, interpretation, or decision making. It could also simply be the resilience that's still needed when AI is in the picture, and maybe even more so because of it.
I also want to acknowledge that contextual relevance has always mattered in education. But AI is now forcing us to reexamine whether some of these legacy goals center on the knowledge and tasks that align with today's authentic performance.
So this third design consideration is asking you to step back and think about the real work of the field. Do your goals reflect what thoughtful practice looks like today?
Or do they emphasize tasks that matter less now that AI has entered the [00:11:00] picture?
Going back to my lit review example, my learning goals can now de-emphasize some of the knowledge and skills related to accessing information.
And I can now spend more time focusing on other relevant capabilities, such as evaluating claims or checking sources for quality, maybe noticing contradictions in the literature, synthesizing across perspectives, and then also ultimately making arguments and recommendations, which really is the whole point of the lit review.
So let's now move on to our fourth design question.
Do your learning goals encourage metacognitive awareness?
In a nutshell, metacognition is the ability to monitor your own thinking.
It's noticing when you understand something, when you don't, and when a strategy is helping.
Metacognition is really important when AI is in the mix because learners can now often finish a learning task without much effort.
And that creates a huge risk for learning.
When using AI, learners might feel more capable in the moment, but they [00:12:00] might not actually be building the foundational knowledge and skills the learning experience was supposed to support. Again, this might be judgment or independence.
So as you look at your own learning goals, consider whether they explicitly ask students to reflect on their own thinking or maybe monitor their own understanding or make decisions about when to rely on AI and when to do the cognitive work themselves.
Also, step back and ask: is metacognition something you assume or hope will happen, but don't actually name or teach?
Let's pause a minute and think about ways you can explicitly focus on metacognition in your learning goals.
You might include something like, students will reflect on how their thinking is evolving throughout the project and what is contributing to that development.
Or maybe something like, students will evaluate their use of AI tools and articulate how those tools are supporting or limiting their learning.
And I really want to emphasize that goals like this make metacognition a more explicit part of the learning process. And then also [00:13:00] tying back to our last episode, it can also help you make learning more visible.
And again, tying this back to my lit review example, a student might feel like they understand a body of literature because the AI has handed them a very authoritative-sounding, tidy synthesis.
But asking AI to run a prompt is absolutely not the same as evaluating the arguments, checking the sources for quality, or noticing whether their own understanding is still shallow.
So if students are going to work thoughtfully with AI, then part of what they need to learn is not just how to use the tool.
They also need to notice what its use is doing to their own thinking.
And of all the things I've said today, I think this matters most, because if students are going to live and work in a world of human-AI collaboration, a huge part of what they need to learn is not just how to use AI, but how to notice when their own thinking is essential.
And I also want to point out that this is one of the places where AI literacy starts to come into the picture.
I'm not going into AI literacy here because that's the focus of our [00:14:00] next episode, but I really do think metacognition and AI literacy are connected.
If our students are going to work well with AI, they need both a critical understanding of the tool and an awareness of their own thinking while using it.
And now for the final design question I want to focus on today. I want to bring us back to something I raised in the last episode. I asked you to consider whether or not you were assessing the product or the process of learning and how you would be able to tell.
In other words, would the student's learning be visible?
And I raise those questions because a single final product, such as a final paper, is not very reliable evidence of a student's own thinking when the AI can generate the polished output so easily.
But now I want to revisit that issue from the perspective of the learning goal itself. If you're really aiming to support learners in the iterative process of developing judgment, authentic performance, a foundational understanding, and even metacognitive awareness, then what would be the best evidence that students have developed [00:15:00] those capabilities?
A polished output may not support these goals or allow us to assess and measure them. In fact, the more our goals shift toward the capabilities essential in a world of human-AI collaboration, the more carefully we need to consider whether our assignments and assessments align with those goals.
And again, my lit review example makes this point really clear for me. A final paper may still be worthwhile as a deliverable for the course, but turned in by itself without any indication of what the learner went through during the learning process, it likely doesn't tell me very much. I don't get to see how the student evaluated credible sources, how they caught AI errors, how they made sound judgments about relevance, or developed a position of their own.
Importantly, once you start to see the learning goals differently, it also changes how you plan to see the evidence of their learning.
So to wrap things up, here's what I want to leave you with today.
AI isn't only revealing design issues in our assignments, which I talked about in the [00:16:00] first episode. It is also forcing an audit of our learning goals. This becomes an invitation to look carefully at our learning experiences and to ask better questions about our aims.
We can then start to notice where our goals are essential or where they may no longer be enough, and what that reveals about what we want our students to learn.
As we either design new learning experiences or revisit existing ones, we need to pause and really look at what AI is revealing about the aims of those experiences and then what we can and should be doing to bring them into alignment with a world where human-AI collaboration is our new reality.
Because that's what this episode and really this whole series is about.
And notice that I'm deliberately sidestepping the question of whether AI is good or bad in this episode, and really across the whole podcast series. Instead, I'm taking the design approach that AI is our new reality. And very likely, many of our inherited learning goals were written for a different time.
And we [00:17:00] need to take the time to determine whether they are still sufficient for learners who will live and work in a world shaped by human-AI collaboration.
So if I were doing one small thing after listening to this episode, it would be this.
I would pull up one course, one unit, or one learning experience and scan its goals.
What are you asking learners to know and do? To what extent will that prepare them for the future? I'd also ask where AI use supports that goal, or conversely, where it might undermine it.
I'd ask whether the goal reflects what authentic performance in my field looks like now, and I'd also ask whether students would need judgment and metacognitive awareness to meet that goal.
And then I'd ask one final question. If this goal still matters, what would actually count as evidence of learning?
To help you consider these questions and more, I've created another free companion Design Brief for this episode.
The Design Brief includes a quick worksheet you can use to examine a small set of your own [00:18:00] learning goals and start identifying which ones feel AI-completable, which ones still feel essential, and which ones need a closer look.
You can find the free Design Brief library that I've created as a companion to this podcast series on our website at nextpathdesign.com/designbriefs.
My hope is that if you start with these questions, you'll start to get more clarity on which goals are essential, which ones may no longer be enough, and where our learning experiences need more intentionality.
And looking ahead to our next episode, I'm going to be drilling down into design considerations related to AI literacy.
I'm moving to this topic next because if learners are going to work thoughtfully with AI, they need more than access to a tool. They need to see what AI can do, what it can't do, how its outputs should be evaluated, and how to make informed decisions about when and how to use it.
That's where we're headed in episode three.
As I wrap things up, I know AI integration is bringing up a lot of strong emotions right now and [00:19:00] understandably so. But for better or worse, it's also prompting a call to revisit goals and assumptions that may have shaped our teaching for a long time.
But I also think this is where some of the most important design work is beginning.
So please keep an eye out for new episodes, which I'll be releasing about twice a month. You can find them on Apple Podcasts, Spotify, YouTube, and our website at nextpathdesign.com/podcast.
And most importantly, thank you so much for taking the time to consider these issues with me.