S1E5: Designing for Data Privacy and Security in AI-Integrated Learning

Season #1 Episode #5

Who's responsible for data privacy and security in AI-integrated learning? In Episode 5 of the AI for Educators Design Lab podcast, Jennifer Maddrell, PhD, argues that these issues aren't just IT and compliance concerns, or problems tech vendors should monitor and control. They're core design responsibilities for educators, learning experience designers, and educational leaders, ones that shape everyday tool choices, workflows, and prompt decisions.

This episode was recorded amid reports of a major Canvas/Instructure security incident that may have exposed data for up to 275 million students, faculty, and staff. While large-scale breaches grab headlines, Jennifer argues the more common everyday risk is far quieter. It could be a well-intentioned teacher pasting student names, grades, or full assignments into tools like ChatGPT, Claude, or Gemini without pausing to consider where that data goes, how long it's retained, or whether it's used to train the model.

To work through this challenge, Jennifer walks through five design considerations along an arc that begins with the educator and works outward to students, families, and community:

  1. Educator grounding: Building a privacy-aware workflow with habits like multi-factor authentication, pseudonyms in prompts, no-training modes, and treating new AI features as new tools
  2. AI tool selection: Recognizing that data protections aren't binary but exist on a spectrum from free consumer accounts to paid personal plans to enterprise and education-specific licenses with Data Processing Agreements
  3. Data minimization during use: Asking what is the least amount of personal data a task actually requires, and paying attention to which learners would bear the greatest harm if something went wrong (see the short sketch after this list)
  4. Teaching privacy literacy: Building privacy as a skill students actively practice, not just a rule they follow
  5. Transparency and consent: Knowing the legal and ethical obligations to inform students and families, especially for minors, and adding clear syllabus language, real opt-out alternatives, and parent-facing disclosures
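
As a rough illustration of the data minimization and pseudonym habits discussed in the episode, here is a minimal Python sketch that scrubs student names from a prompt before it is pasted into an external AI tool. The roster, the prompt text, and the "Student 1" placeholder scheme are assumptions made for illustration, not anything specified in the episode.

```python
# Minimal sketch: replace known student names with stable pseudonyms
# before a prompt is pasted into (or sent to) an external AI tool.
# The roster and prompt below are made-up examples for illustration.

def pseudonymize(text: str, roster: list[str]) -> tuple[str, dict[str, str]]:
    """Swap each real name in `text` for a placeholder like 'Student 1'.

    Returns the scrubbed text plus the alias-to-name mapping, which stays
    on the educator's machine so real names never leave it and the
    AI-generated feedback can be re-personalized locally afterward.
    """
    mapping: dict[str, str] = {}
    scrubbed = text
    for i, name in enumerate(roster, start=1):
        alias = f"Student {i}"
        if name in scrubbed:
            mapping[alias] = name
            scrubbed = scrubbed.replace(name, alias)
    return scrubbed, mapping


if __name__ == "__main__":
    roster = ["Maya Lopez", "Jordan Kim"]  # hypothetical class roster
    prompt = (
        "Draft feedback for Maya Lopez, whose essay on photosynthesis is "
        "strong, and for Jordan Kim, who needs help with citations."
    )
    safe_prompt, name_map = pseudonymize(prompt, roster)
    print(safe_prompt)  # names replaced with 'Student 1', 'Student 2'
    print(name_map)     # kept locally to restore names in the response
```

The same idea extends beyond names: grades, accommodations, and other identifiers can usually be omitted or generalized, because the least data a task needs is typically far less than the full student record.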

The episode closes with a preview of Episode 6, which extends the equity conversation into the access dimension to ask: what happens when AI integration assumes devices, connectivity, or paid tools that not all students have?

00:00  Welcome and Episode Focus

01:15  The Canvas/Instructure Breach — A Wake-Up Call

02:46  Everyday Classroom Privacy Risks

03:46  Invisible Data Collection in AI Tools

05:28  Why Privacy Is Every Educator's Job

06:11  Five Design Considerations Overview

07:43  DC1: Educator Grounding and Privacy-Aware Habits

11:08  DC2: Choosing Safer AI Tools

14:32  DC3: Data Minimization and Vulnerable Learners

17:29  DC4: Teaching Student Privacy Literacy

19:43  DC5: Transparency, Consent, and Family Communication

23:18  Wrap-Up and Preview of Episode 6

Other Mentioned Sources: