S1E5 Data Privacy and Security
===
[00:00:11] Hi! Welcome back to the AI for Educators Design Lab podcast. I'm Jennifer Maddrell. This is our fifth episode in a series focusing on design considerations for AI integration in teaching and learning. I've said before that AI has both messy and magical sides. And this episode is firmly on the messier side.
[00:00:30] Today, I wanna turn our attention to data privacy, security, and safety issues. These issues often get pushed to the margins, treated as someone else's problem to worry about, maybe in IT departments, legal teams, or compliance offices.
[00:00:45] But the argument I want to make in this episode is that data privacy and safety affect all of us. And for educators, learning experience designers, and educational leadership, I think addressing them is a design responsibility.
[00:00:59] And like
[00:01:00] most of the challenges we've explored in this series so far, our responses to data and privacy issues don't live only in policy documents. They show up in our everyday practice, in the tools we choose, the workflows we adopt, and the decisions we make.
[00:01:15] While this issue has always been near the top of my list of topics to cover in this podcast series, today's episode coincides with what may be one of the largest cyber attacks education has ever faced.
[00:01:27] As I'm recording this, reports are emerging about a major security incident affecting Canvas, a widely used learning management system, and its parent company Instructure.
[00:01:37] Early reports suggest the attackers accessed data linked to as many as two hundred and seventy-five million students, faculty, and staff.
[00:01:46] And this incident has disrupted access to Canvas at thousands of schools and universities worldwide.
[00:01:52] The hacker group ShinyHunters has claimed responsibility, and it's threatening to release stolen data if its ransom demands aren't met.
[00:02:00] The exposed information may include names, email addresses, student IDs, course enrollments, grades, academic records, and even internal messages.
[00:02:12] Instructure has confirmed unauthorized access, but at the time I'm recording this, the full scale and scope are really unclear.
[00:02:19] What is clear, however, is that students and educators are already dealing with outages at the end of a semester.
[00:02:26] And that's on top of the very real fear about what data may be released.
[00:02:31] Since I'm recording this as the situation is developing, it's still anyone's guess how it happened or how it might be resolved.
[00:02:38] But I wanted to include this story because it's a stark reminder of what's at stake when private information escapes the systems we trust to protect it.
[00:02:46] However, large-scale breaches like this aren't the most common privacy risk educators face on a day-to-day basis, especially now when working with AI. The far more typical scenario is a well-intentioned teacher
[00:03:00] using, say, Claude to draft an update with detailed feedback about a student's progress, or uploading an assessment report into ChatGPT for help with grade analysis, a report that includes identifiers like names and grades.
[00:03:17] Or maybe uploading a student essay to Gemini to get help offering feedback. And doing all of this without pausing to think about where that data goes, how long it's retained, or whether it's used to train the model.
[00:03:30] However, no matter the scale, these risks are related. What the Canvas situation illustrates is that student data has real value and real consequences when it's mishandled. The everyday classroom version is just a different entry point into the same problem.
[00:03:46] But part of what makes this hard is that the data collection associated with AI use often feels invisible.
[00:03:53] A chatbot, writing assistant, tutoring system, or analytics dashboard can feel private enough that it doesn't [00:04:00] trigger a second thought.
[00:04:01] But AI systems can collect far more than most of us realize, once you pause to think about all the data being shared.
[00:04:08] This includes prompts, student work products, and account identifiers, as well as behavioral data like click and keystroke patterns, and even engagement metrics from audio and video recordings of students working on an assignment or exam.
[00:04:26] And while the Canvas breach is dominating the news today, educators and advocates have long been concerned about student data privacy.
[00:04:34] And there's a lot of survey data out there to confirm this.
[00:04:37] For example, survey data from the Center for Democracy and Technology suggests that K-12 students, parents, and teachers feel largely in the dark about policies and procedures related to responsible generative AI use.
[00:04:50] Likewise, a survey by EDUCAUSE of higher education employees found that while nearly all respondents used AI tools for work, only about half were aware of [00:05:00] the institutional policies to guide their use.
[00:05:03] And organizations ranging from the US Department of Education's Student Privacy Policy Office to the National Education Association, along with parent privacy coalitions, have repeatedly urged schools to more carefully vet online and AI-enabled tools.
[00:05:22] All have warned that third-party platforms can expose student information to misuse or breaches.
[00:05:28] So the way I want to approach this episode is to absolutely acknowledge that data privacy and security are, and have long been, top institutional concerns.
[00:05:38] But they are also our individual responsibilities to defend.
[00:05:42] Awareness and protections need to be built into daily teaching practice and into the design of learning experiences.
[00:05:49] Further, these risks need to be considered in advance, and because the risks continue to evolve as more AI-enabled tools become available to both students and educators, our [00:06:00] practices must evolve with them. Prevention can sometimes feel really burdensome, but at the end of the day, it's far less costly than dealing with the consequences after something goes wrong.
[00:06:11] So as I've done in previous episodes, I'm now gonna walk through five of the many potential design considerations that I think are especially relevant to this challenge. You can think of these as entry points and diagnostic questions meant to help you examine how data privacy and safety issues show up in your own context and with your own learners.
[00:06:32] I'll move through these design considerations in an arc that starts with you, the educator or the designer.
[00:06:38] Then I'll look outward at day-to-day practices, including how you use AI, how you select tools, how privacy shows up in everyday use, and how you're helping students learn to navigate these issues themselves. And finally, I'll broaden the lens to consider this issue from the perspective of transparency, consent, and
[00:07:00] trust.
[00:07:00] But first, I want to pause here and say it's beyond the scope of this episode, and frankly, beyond my expertise to give any legal advice about data privacy and safety requirements.
[00:07:12] So I'm going to leave it up to you to do your own deep dive into the specific laws and policies that apply in your context.
[00:07:20] I will give you a tip that a good place to start is the Future of Privacy Forum at the fpf.org website.
[00:07:28] This is a global nonprofit focused on data protection, AI, and digital governance.
[00:07:35] And I have found it's a really helpful place to start when you're looking for practical resources as well as regulatory guidance on this topic.
[00:07:43] So kicking things off, the first design consideration relates to what I'm calling educator grounding. The guiding question here is: To what extent are you aware of data privacy and safety requirements and best practices?
[00:07:56] And then tied to this, where might there be gaps between how you
[00:08:00] currently work and a privacy-aware workflow that better protects your students' data as well as your own?
[00:08:05] As I mentioned a moment ago, a common misconception is that data security and privacy are primarily IT's responsibility.
[00:08:16] But instead, many of the most common violations aren't institutional breaches. They're everyday instructional decisions, like teachers pasting student data into an AI chatbot. Educators are the most frequent data handlers, and privacy decisions are often made at the moment of tool choice or prompt design, not by someone in a central office.
[00:08:38] So if the design responsibility then includes the educator, what does a privacy-aware workflow actually look like?
[00:08:45] Unfortunately, it's not as simple as deleting an AI conversation after the fact. Depending on the tool you're using and the plan, data shared in an AI interaction may be stored and used to improve the model. If so, once information
[00:09:00] is shared, it can't be removed.
[00:09:02] The data is now woven into the model, and it can't be unraveled after the fact.
[00:09:06] So from a design and use perspective, the challenge is in recognizing and following privacy-focused habits that shape your AI use.
[00:09:15] You might already be using some of these, such as enabling multi-factor authentication on every AI account you use.
[00:09:23] And in fact, in recent education breaches, stolen or reused passwords have been identified as a primary source of the problem.
[00:09:31] Therefore, multi-factor authentication can significantly reduce that risk.
[00:09:36] You also might consider using pseudonyms or placeholders like Student A and Student B rather than revealing real names when working with AI. Or you might enable incognito chat or no-training modes when they're available.
[00:09:52] By doing so, you're increasing the chance that your inputs aren't being absorbed into the model.
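To make that placeholder habit concrete, here's a minimal sketch, in Python, of what the substitution step could look like before anything gets pasted into an AI tool. This isn't from any particular product; the roster and progress note are invented for illustration, and a real workflow would also need to catch nicknames, initials, and other identifiers.

```python
import re

def pseudonymize(text: str, roster: list[str]) -> tuple[str, dict[str, str]]:
    """Swap each real name for a placeholder like 'Student A'.

    The returned mapping stays on your machine, so you can
    re-personalize the AI's output after the fact.
    """
    mapping: dict[str, str] = {}
    for i, name in enumerate(roster):
        placeholder = f"Student {chr(ord('A') + i)}"  # Student A, Student B, ...
        mapping[name] = placeholder
        # Whole-word, case-insensitive match so "Ana" doesn't clobber "Analysis".
        text = re.sub(rf"\b{re.escape(name)}\b", placeholder, text, flags=re.IGNORECASE)
    return text, mapping

# Invented example: scrub a progress note before sharing it with a chatbot.
note = "Maya struggled with fractions this week, while Jordan made real progress."
scrubbed, mapping = pseudonymize(note, ["Maya", "Jordan"])
print(scrubbed)  # "Student A struggled with fractions ... Student B made real progress."
print(mapping)   # {'Maya': 'Student A', 'Jordan': 'Student B'} stays on your machine
```

The design choice worth noticing is that only the scrubbed text is ever shared; the mapping that connects placeholders back to real students stays offline with you.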
[00:09:58] Another increasingly important
[00:10:00] habit is treating new AI features as entirely new tools. So, for example, when your learning management system suddenly adds an AI button, that's not a neutral update, it's a design change.
[00:10:13] So in that case, pausing to ask what that new feature does with data and under what terms is part of a privacy-aware workflow.
[00:10:22] And while much of this episode focuses on student data, it's worth remembering that your own data is also at stake. Your account history, your professional communications, and notes about students can all be exposed when protections are weak.
[00:10:38] And many of the same habits that protect students also protect you as the educator. That might include using an institutional email instead of your personal account to sign up for tools, or taking the same advice you give to your students about limiting the information you share about yourself.
[00:10:55] So the design consideration here is to examine your current workflow
[00:11:00] to identify where there may be gaps between how you currently work and a workflow that better protects your students' data and your own.
[00:11:08] And now the second design consideration focuses on AI tool selection.
[00:11:13] The guiding questions here are, how are you currently deciding what AI tools cross the threshold into your classroom? And what trade-offs are you accepting when you choose between a free or paid personal account versus when you're using an enterprise or institution-approved version?
[00:11:31] One of the challenges with tool selection is that data protections aren't uniform. Instead, they exist on a spectrum. And where a tool falls on that spectrum often depends on the specific version or plan you're using.
[00:11:45] And obviously, it's not surprising that free consumer accounts often offer the least protection.
[00:11:51] They typically involve broader data collection, longer retention periods, and inputs that may be used for model training.
[00:12:00] A paid personal account may offer better protections, but those usually depend on settings, so you must actively enable them to get those protections.
[00:12:09] But even a paid personal account likely has different protection options than an enterprise or institutional license. Those types of licenses are more likely to include what are called data processing agreements.
[00:12:23] These contractual agreements between the institution and the tool vendor typically limit training on student data, and they establish rules around data retention and deletion.
[00:12:33] However, what is striking is how often these protections aren't in place at all.
[00:12:39] Recent industry reporting suggests that less than twenty percent of educational institutions have AI-specific policies and contract requirements.
[00:12:48] And another surprise for many educators is the breadth of data these tools can collect. Beyond the data you upload in attachments or type in, many systems also collect behavioral [00:13:00] data, such as how long a student spends on a problem, their click patterns, their device information, and other engagement metrics.
[00:13:08] There's also a growing class of AI products that act as aggregators.
[00:13:12] These tools can pass your prompts to multiple underlying AI models, each one with its own data practices and policies.
[00:13:21] For example, you may have heard of or used a tool like Poe by Quora.
[00:13:25] People like using it because it can route prompts to models such as ChatGPT, Claude, Gemini, and others.
[00:13:32] While this can save you from paying for multiple separate subscriptions, it also introduces a multi-layered risk, because additional third-party providers may now have access to your data.
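Just to make that layering visible, here's a tiny hypothetical sketch of what an aggregator is doing under the hood. The provider names and the trains_on_inputs flags are entirely made up, and real products like Poe have their own routing logic; the point is simply that one prompt can end up with several distinct companies.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    trains_on_inputs: bool  # invented flag standing in for each vendor's policy

# Entirely hypothetical backends; real vendors publish their own terms.
PROVIDERS = [
    Provider("model-vendor-1", trains_on_inputs=True),
    Provider("model-vendor-2", trains_on_inputs=False),
    Provider("model-vendor-3", trains_on_inputs=True),
]

def route(prompt: str, providers: list[Provider]) -> None:
    # The aggregator itself sees (and may log) the prompt: one data recipient.
    print(f"[aggregator] received: {prompt!r}")
    for p in providers:
        # Each backend it forwards to is an additional recipient,
        # with its own retention and training practices.
        note = "may train on this input" if p.trains_on_inputs else "says it won't train on inputs"
        print(f"  -> forwarded to {p.name} ({note})")

route("Give feedback on Student A's essay about the water cycle.", PROVIDERS)
```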
[00:13:44] And finally, there's the issue of imposter tools and extensions that mimic well-known platforms but have no official connection to them.
[00:13:52] For example, do a quick search of ChatGPT in an app store, and you'll find it returns a long roster of tools with names like
[00:14:00] AI Chatbot or AI Chat.
[00:14:02] But none of these tools have any connection to OpenAI, the company behind ChatGPT.
[00:14:07] So unfortunately, it takes a bit of due diligence on your part to ensure you're navigating to the official platform you're looking for.
[00:14:14] So to summarize, the design consideration here happens before use, at the point of tool selection. It involves taking a pause to examine the tool before use, and then using evaluation criteria that balance your needs with the potential security trade-offs.
[00:14:32] And now moving on to the third design consideration that focuses on privacy and security during tool use.
[00:14:38] The guiding questions to consider are these: What does data minimization look like in practice when you're using AI? And which of your learners could be most harmed if something went wrong?
[00:14:49] As I've mentioned already, most privacy violations in education aren't headline-grabbing breaches. Instead, they happen in small moments during everyday
[00:15:00] use.
[00:15:00] Maybe a teacher using AI to save time or a counselor drafting a support plan that includes a lot of identifying details about the student.
[00:15:09] These uses feel low stakes in the moment, a way to free up time and even to offer better feedback and guidance to a student.
[00:15:18] However, these are also all opportunities for data to be at risk.
[00:15:22] And AI use really exists on a spectrum. On one end, there's light brainstorming, such as asking for lesson ideas or scaffolding strategies. On the other end, there's uploading a student's full work for feedback.
[00:15:39] And then obviously the disclosure risk grows along that spectrum, and so does the ethical weight.
[00:15:45] This is often where your judgment matters most, because it's easy to underestimate what's being exchanged for convenience.
[00:15:54] Another potential consideration is that student work is also intellectual property.
[00:15:59] [00:16:00] Uploading your student's essay or project isn't just an identification risk. It may also hand over their authorship and creative work to a company that could absorb it into its training data.
[00:16:11] And it's also important to think about who might bear the greatest risk. It's often the students with the most sensitive data or the least power to consent or opt out. This might include students with disabilities or IEPs.
[00:16:27] Or when thinking about your requirements to use AI on a certain assignment or project, think about the impacts on learners who can't afford paid AI subscriptions.
[00:16:38] They may end up using free tools with weaker protections.
[00:16:42] So privacy and data security in this context isn't only about keeping student data secret. It's about power and consent, vulnerability, and the risks created by the choices we make when using AI.
[00:16:56] So to summarize, the design consideration here is about the [00:17:00] implications during use. It's thinking about data minimization and establishing personal protocols for what categories of information may be entered into which tools, and what should never be entered at all.
[00:17:12] It's thinking about the least amount of personal data a task requires, and asking what can be generalized and what should be kept offline.
[00:17:23] It's also simply paying attention to who might be most at risk from our decisions regarding AI use.
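One way to picture building that data-minimization habit is a kind of pre-flight check you run before a prompt leaves your machine. Here's a minimal sketch of what that could look like; the patterns are invented, deliberately simple, and nowhere near exhaustive, so treat this as a habit-former rather than a guarantee.

```python
import re

# Illustrative patterns only; your own context will need different ones,
# and no pattern list will catch everything.
CHECKS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone number":  r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
    "student ID":    r"\b(?:ID|id)[#:\s]*\d{5,}\b",  # assumes IDs run 5+ digits
}

def preflight(prompt: str) -> list[str]:
    """Return warnings for obvious identifiers; empty means none were spotted."""
    return [
        f"possible {label}: {match.group(0)!r}"
        for label, pattern in CHECKS.items()
        for match in re.finditer(pattern, prompt)
    ]

draft = "Email jdoe@example.edu about ID# 1048576 and their last quiz score."
for warning in preflight(draft):
    print("WARNING:", warning)  # pause, generalize, or keep it offline instead
```

Even a simple check like this changes the default from "paste and send" to "pause and review," which is most of what data minimization asks of us day to day.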
[00:17:29] And now moving on to our fourth design consideration, we're connecting back to episode three on AI literacy and focusing on teaching students.
[00:17:38] The guiding question is this: How are you building privacy literacy into your learning experiences as a skill a student should practice, not just a rule they should follow?
[00:17:49] And unfortunately, privacy is most often communicated to students as a rule.
[00:17:55] For example, don't share personal information with AI. But that type of [00:18:00] tip likely means little in terms of protecting your learners when they're actually using AI.
[00:18:05] Like most things we teach, what's more durable is practicing privacy during AI use as a skill.
[00:18:12] And major AI literacy frameworks, including UNESCO's recent AI competency guidance for students, position privacy and data protection as core parts of responsible AI use.
[00:18:24] These frameworks often emphasize responsible use, understanding data implications, and critically evaluating tools.
[00:18:33] For example, strategies might include practicing safe prompting techniques.
[00:18:37] This could be in assignments where they are asked to remove personal, school, family, health, or location details before using the AI.
[00:18:48] Or creating an activity where students find and decipher a privacy policy for a particular tool and then translate it into plain language.
[00:18:58] Or it could be running a [00:19:00] data-trail exercise to dig in and find out what data a particular tool collects, who then has access to that data, and what humans might read it.
[00:19:10] Or how other parties might be able to use the data, and so on.
[00:19:14] And of course, you can model safe practices yourself by examining the privacy settings when you're using a new tool in class.
[00:19:22] So the key design consideration here is to help learners develop habits that travel with them. The aim is to move beyond warnings and rules, and to instead embed data security and privacy into how students protect themselves as they participate responsibly in a world that's increasingly shaped by AI.
[00:19:43] And now the final design consideration I want to cover today gets deeper into the messier side of AI design tensions.
[00:19:50] Here, I want to end by focusing on our obligations as educators around transparency and consent.
[00:19:57] If AI is part of your course or
[00:20:00] your personal workflow, who knows? Who decides? And who gets to say no?
[00:20:05] So in terms of transparency, if AI is being used in or around a course, what are your obligations to tell students and families?
[00:20:13] And do you yourself actually know all the ways AI is in the loop and what your legal obligations are to inform students or their parents?
[00:20:22] And keep in mind, AI can pop up in lots of places, from proctoring and plagiarism detection to smart grading tools, and from analytics dashboards to tutoring systems that are baked into your learning management system.
[00:20:38] Then, as far as consent, if AI use is required, who has the legal right, power, and practical opportunity to opt out? For example, when a student must use a given edtech tool for assignments, are students and families getting enough information in plain language to understand the risks and make a real choice?
[00:20:59] [00:21:00] Or are they instead absorbing those risks as a condition of participation?
[00:21:05] AI use that isn't disclosed to students removes any meaningful chance for consent.
[00:21:10] This could include tools that silently track their time on a task, or that monitor their behavior, or assist with grading behind the scenes.
[00:21:20] And when AI use is required with no viable alternative, the privacy "choice" of not using a product disappears. In other words, a student who doesn't know AI is present, or who cannot say no without an academic penalty, hasn't been given a real choice.
[00:21:39] So let's spend a few minutes talking about transparency and equitable alternatives.
[00:21:45] For minors, that circle of transparency also needs to include communicating data practices to parents and guardians.
[00:21:52] It's not just politeness, it's actually a legal requirement under laws such as FERPA and COPPA here in the United
[00:22:00] States.
[00:22:00] These types of requirements give parents rights to access, review, and in some cases limit the sharing of their children's educational records with third-party providers. It's part of what meaningful consent requires.
[00:22:14] And some districts are including disclosures about AI tools in their annual parent notifications and vendor lists.
[00:22:20] But what I'm focusing on here is at a personal level. What are your boundaries? How are you articulating them in concrete, accessible language that explains when and how AI is used? What data it touches? And what choices students and families have?
[00:22:38] This might include clear syllabus language or disclosures about your own use. It could be what edtech tools you plan to incorporate in the course.
[00:22:49] And then how you'll offer a real opt-out pathway where that's possible.
[00:22:54] So the design consideration here revolves not only around knowing your legal requirements regarding [00:23:00] transparency and consent, but also around the trust you have with your learners: being willing to name where AI is in the loop and explain what that means for their data.
[00:23:10] And as much as you're able within your context, give your students and families a real voice and real options.
[00:23:18] So as I wrap things up, the argument across this episode is that privacy and safety in the AI era isn't just a compliance problem.
[00:23:27] Instead, it involves design decisions about your responsibilities, your actions, and choices that require your professional judgment.
[00:23:35] Every assignment you build, every tool you use, every prompt you write is also a privacy decision. And sometimes you're making those decisions whether you know it or not.
[00:23:47] So to help you work through these design considerations in your own context, I've again created a free companion design brief for this episode.
[00:23:55] It includes a worksheet to help you map your current AI workflow against these [00:24:00] five considerations and then identify where the gaps are.
[00:24:04] You can find the design brief for this episode along with all the design briefs from this series at nextpathdesign.com/designbriefs. And then when you sign up, you get access to the full library. That includes this brief and every future one as they're released.
[00:24:22] And finally, looking ahead to episode six, we're going to stay inside the equity conversation, but look at a different dimension.
[00:24:30] We're going to look at what happens when AI integration assumes access that not all of your students have.
[00:24:35] This could be uneven access to devices, connectivity, or paid tools that can quietly widen inequities.
[00:24:44] And then we'll be thinking about how we can design around these issues.
[00:24:48] So I hope you'll join me for episode six, and thank you for thinking through these questions with me.
[00:24:53] And as a reminder, new episodes come out about twice a month on Apple Podcasts, Spotify, [00:25:00] YouTube, and at our website at nextpathdesign.com/podcast.
[00:25:05] And finally, most importantly, I look forward to continuing this conversation with you.