Why It’s Important for Young Children to Understand What’s Behind AI (The 74, Thu, 29 Jan 2026)

As the pace of product development for AI-powered toys accelerates, controversy about the appropriateness of these products for young children has left many parents and educators tempted to tune out or opt out. But as kids interact with AI more regularly, it’s important to teach them what’s actually behind the technology and how to use it responsibly.

A new program focused on computer science and artificial intelligence aims to teach young kids to build, program and prototype together. In essence, students build their own machine learning models, solving problems, inventing characters and telling stories connected to their interests. The program, designed by Lego Education for K-8 classrooms, offers project-based experiences for kids to work on in small groups. The lessons use Lego bricks; some are screen free, while others require access to a device, such as a laptop or tablet, so kids can use an app with a “coding canvas” for icon-based coding.

Kathy Hirsh-Pasek, professor of psychology at Temple University and a senior fellow at the Brookings Institution, commends Lego for using the science of playful learning to teach computer science. “When children learn to solve problems with hands-on materials,” she states, “they are more likely to not only learn material but to be able to transfer what they have learned. In my experience, the Lego team has always worked with scientists to develop teaching tools that are aligned with the very best science on how children learn. It is one of the few companies committed to this way of doing business.” (Hirsh-Pasek has collaborated with the Lego Foundation on other projects but did not take part in this initiative.)

In a significant departure from many other AI products, data from the children never leaves the computer. “A really strong perspective that we had was that we don’t want anybody else to have the data — we don’t even want the data. We want that to stay in the classroom and on the computer,” said Andrew Sliwinski, head of product experience for Lego Education. From a technical and design perspective, Sliwinski said, “It’s much easier to just send data to the cloud or use one of the big APIs [Application Program Interfaces], or one of the big companies that are out there. But when you do that, you sort of betray that principle of being able to guarantee privacy and safety to the child, and to the parent and to the teacher.”

Maybe Big Tech could learn a thing or two from Big Toy.

In an interview with Mark Swartz, Sliwinski explains his role, the evolution of the curriculum and his hopes for AI more broadly. 

This interview has been edited for length and clarity.

What do you do at Lego Education?

My team is responsible for product strategy, design, engineering and, most importantly, the educational impact of our product. So really the development of our learning experiences from end to end. Lego stole me from my previous organization, where I worked for many years on creative tools for children, most notably a programming language for kids.

Were you in the classroom before that?

I started working in education in 2002. I was living in Detroit, working as a tutor, and I was invited to support students in Detroit public schools with the Michigan Educational Assessment Program, the state’s big standardized test [at the time]. I’ve basically been working in some way, shape or form in education ever since. 

What do you see as the through line between that work, and what you’re doing now?

When I showed up in Detroit all those years ago, my biggest reflection was: These are kids that don’t see the purpose in mathematics. They don’t feel connected to it. They don’t understand how it connects to their lives. And so for me, it was like, “Well, let’s solve that problem.” And yeah, the rest is history.

Were you a Lego kid yourself? 

We didn’t have Legos, but we had all manner of other building materials at our disposal, like cardboard boxes and wooden blocks and access to hammers and screwdrivers and all of that fun stuff. So I grew up building things and learning through making. 

Why is it important for children to understand what’s behind AI?

The phrase AI literacy is being used a lot, and I think it’s being used in a very general way that is sometimes unhelpful. AI literacy is about more than how children use AI. It’s about those foundational literacies that help children understand what AI is, because I’m not just interested in children developing an understanding of how to use ChatGPT to complete a specific project or task. I want children to understand what probability is. I want children to understand that machines reason differently than humans do — and why that is. I want children to understand that AI learns from data, and that data can have biases, and that data can have ethical considerations, and that data output is only as good as the input, right? Garbage in, garbage out.

What does responsible AI education look like for young kids?

What we’re moving forward with at Lego Education is really focused on … those foundations. The way that I sometimes like to talk about it with the team is: So much of what is being put in front of kids today is like learning how to use the black box of an AI model or an AI tool — I’m much more interested in giving the kids a screwdriver and letting them take the box apart.

But that last analogy is figurative. 

Yes. There are no screwdrivers that come in the box, but it’s not as figurative as you might think. In the tool, the kids actually get to train their own machine learning models … So a bunch of kids will work together in a group of four. That’s something that’s different. It is collaborative.

What lessons can we draw from the use of earlier technological developments, such as TV and the internet, in building products for young kids?

These technologies are most effective when they serve as a catalyst for joint engagement between children and adults together, rather than sort of acting as a digital babysitter, whether that’s cartoons or whether that’s Club Penguin [a Disney game that ran from 2005 to 2017]. … 

One of the most powerful things that you can say to a child is, “I don’t know. Let’s go figure it out together.” And I think that there’s so much that parents and teachers and kids don’t know about AI, but that kids are curious about. And us expressing our own curiosity, and supporting that curiosity and engaging together is a really powerful thing. 

What guardrails has your team put in place for young children? 

When we started working on this, one of the things that was really important was to have a set of principles and a set of lines — we call them red lines, lines that we will not cross — because I think it’s so easy when you’re working in technology development to sort of lose track of some of those principles. We established that way, way early in the project. 

Some of the ones that are maybe less apparent are things like [how] no data from the children will ever leave the computer. It is never transmitted over the internet. It is never saved to disk. It is never sent to Lego. It is never sent to any third party. And if you look at the predominant paradigm and a lot of the tools that are out there, that is not the case. …

…We’re the Lego Group. If we don’t care about child safety and well-being, who does? And so I think it’s been this huge responsibility, but also like this really great opportunity for us to put forward something that we feel lives up to our values. … People are always surprised by how much my team goes around the world testing in classrooms, testing with children and talking with educators and experts. We even have child developmental psychologists that are on staff. And so much of what we do is about developing the right things in collaboration with young people and educators. 

How did you test the experience with young children?

One of the most recent tests that I [did] was testing some of the AI features for the very young kids — the kindergarten to second grade group [in Chicago public schools]. One of the things that we do as the product matures is we stop being the teachers in the classroom and we actually just give the box to a … teacher in their normal day-to-day classroom and we say, “Good luck.” And then we watch, because it’s not enough for the kids to have a great experience when we show up knowing the product and we teach it. … It has to work for the teachers, otherwise it doesn’t matter.

One of the most interesting, but also humbling things that you do as a designer for children and teachers is taking it into the field, right? Because all of the assumptions and ideas and intentions that you have, they go out the window when you put it in front of a 5-year-old. That process is just so rewarding.

Second graders try out the new Lego Computer Science and AI kits. (Image Courtesy of Lego Education)

Did anything surprise you about how they put it to use? 

I was observing a group of 4- or 5-year-olds, and they were working on this lesson where they had to build a toothbrush for a dinosaur. Part of that was figuring out how motors work and how sensors interact, but it was kind of a funny setup — the dinosaur mouth that we had built had these big teeth in it. 

The 5-year-olds didn’t see a dinosaur. They saw a swimming pool, because the bottom of the dinosaur’s jaw had these big teeth around it, and they were like, “Oh, it’s a swimming pool.” So then they designed dinosaurs that went into the swimming pool. 

You kind of come in with these stories and intentions of what you think kids are going to connect to. … And then you get there, and one little detail of how the model was designed throws the whole lesson out the window.

How are educators responding?

We’re doing this in a way where the teacher is able to come along for the journey. We’ve prepared all of the materials necessary for a teacher, who often feels less confident about computer science and AI than their students do, giving them everything they need to feel not just prepared, but confident.

There’s this kind of power dynamic that’s happening with AI today, where we’re more focused on what computers can do than we are on what children can do right now. And I think that’s really fundamental to our approach … When you get a bunch of kids together to train a Lego robot how to dance, this kind of fear dissipates. They see the cause and effect between the model that they trained and what’s happening in the world, and they realize that the machine only knows what they taught it. 

The AI is no longer the smartest thing in the room. They’re the smartest thing in the room, and the AI is a tool. 

Opinion: AI Literacy: What It Is, What It Isn’t, Who Needs It and Why It’s Hard to Define (The 74, Fri, 15 Aug 2025)

It is “the policy of the United States to promote AI literacy and proficiency among Americans,” reads an executive order President Donald Trump issued on April 23, 2025. The executive order, titled Advancing Artificial Intelligence Education for American Youth, signals that advancing AI literacy is now an official national priority.

This raises a series of important questions: What exactly is AI literacy, who needs it, and how do you go about building it thoughtfully and responsibly?




The implications of AI literacy, or lack thereof, are far-reaching. They extend beyond national ambitions to remain “a global leader in this technological revolution” or even prepare an “AI-skilled workforce,” as the executive order states. Without basic literacy, citizens and consumers are not well equipped to understand the algorithmic platforms and decisions that affect so many domains of their lives: government services, privacy, lending, health care, news recommendations and more. And the lack of AI literacy risks ceding important aspects of society’s future to a handful of multinational companies.

How, then, can institutions help people understand and use – or resist – AI as individuals, workers, parents, innovators, job seekers, students, employers and citizens? We are three researchers who study AI literacy, and we explore these issues in our research.

What AI literacy is and isn’t

At its foundation, AI literacy comprises a set of competencies that researchers are still working to define. According to one definition, AI literacy refers to “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.”

AI literacy is not simply programming or the mechanics of neural networks, and it is certainly not just prompt engineering – that is, the act of carefully writing prompts for chatbots. Using AI to write software code might be fun and important, but restricting the definition of literacy to the newest trend or the latest need of employers won’t cover the bases in the long term. And while a single master definition may not be needed, or even desirable, too much variation makes it tricky to decide on organizational, educational or policy strategies.

Who needs AI literacy? Everyone, including the employees and students using it, and the citizens grappling with its growing impacts. Every sector and sphere of society is now involved with AI, even if this isn’t always easy for people to see.

Exactly how much literacy everyone needs, and how to get there, is a much tougher question. Are a few quick HR training sessions enough, or do we need to embed AI literacy throughout school curricula, university programs and hands-on workshops? There is much that researchers don’t know, which leads to the need to measure AI literacy and the effectiveness of different training approaches.

Ethics is an important aspect of AI literacy.

Measuring AI literacy

While there is a growing and bipartisan consensus that AI literacy matters, there’s much less consensus on how to actually understand people’s AI literacy levels. Researchers have focused on different aspects, such as technical or ethical skills, or on different populations – for example, business managers and students – or even on subdomains like generative AI.

A recent review study identified a large number of AI literacy questionnaires, the vast majority of which rely on self-reported responses to questions and statements such as “I feel confident about using AI.” There’s also a lack of testing to see whether these questionnaires work well for people from different cultural backgrounds.

Moreover, the rise of generative AI has exposed a deeper problem: Is it possible to create a stable way to measure AI literacy when AI is itself so dynamic?

In our research collaboration, we’ve tried to help address some of these problems. In particular, we’ve focused on creating objective knowledge assessments, such as multiple-choice surveys tested with thorough statistical analyses to ensure that they measure what they claim to measure. We’ve so far tested a multiple-choice survey in the U.S., U.K. and Germany and found that it works consistently and fairly across these countries.

There’s a lot more work to do to create reliable and feasible testing approaches. But going forward, just asking people to self-report their AI literacy probably isn’t enough to understand where people are and what supports they need.

Approaches to building AI literacy

Governments, universities and industry are trying to advance AI literacy.

Finland launched a free online AI course in 2018 with the hope of educating its general public on AI. One national initiative partners with Anthropic and OpenAI to provide access to AI tools for tens of thousands of students and thousands of teachers. And China now mandates hours of AI education annually as early as elementary school, which goes a step beyond the new U.S. executive order. On the university level, several institutions have launched new master’s in AI programs, targeting future AI leaders.

Despite these efforts, these initiatives face an unclear and evolving understanding of AI literacy. They also face challenges in measuring effectiveness and minimal knowledge about what teaching approaches actually work. And there are long-standing issues with respect to equity: for example, reaching schools, communities, segments of the population and businesses that are stretched or under-resourced.

Next moves on AI literacy

Based on our research, experience as educators and collaboration with policymakers and technology companies, we think a few steps might be prudent.

Building AI literacy starts with recognizing it’s not just about tech: People also need to grasp the ethical and societal dimensions of AI. To see whether we’re getting there, we researchers and educators should use clear, reliable tests that track progress for different age groups and communities. Universities and companies can try out new teaching ideas first, then share what works through an independent hub. Educators, meanwhile, need proper training and resources, not just additional curricula, to bring AI into the classroom. And because access to AI education is uneven, partnerships that reach under-resourced schools and neighborhoods are essential so everyone can benefit.

Critically, achieving widespread AI literacy may be even harder than building digital and media literacy, so getting there will require serious investment in – not cuts to – education and research.

There is widespread consensus that AI literacy is important, whether to boost AI trust and adoption or to empower citizens to challenge AI. As with AI itself, we believe it’s important to approach AI literacy carefully, avoiding hype or an overly technical focus. The right approach can prepare students to become “active and responsible participants in the workforce of the future” and empower Americans to “thrive in an increasingly digital society,” as the executive order calls for.

This article is republished under a Creative Commons license.

Opinion: Why AI Literacy Instruction Needs to Start Before Kindergarten (The 74, Thu, 24 Jul 2025)

In June, nearly 70 tech companies and associations signed a pledge supporting the Trump administration’s goal of making artificial intelligence education accessible to K-12 students. As a top leader at an early childhood education company and a parent of two children under 5 years old, I can’t help but wonder: What about our youngest learners?

AI is dominating headlines — and rightly so. It’s reshaping industries, redefining work and increasingly influencing homes and childhoods. But as policymakers and technologists rush to prepare K-12 schools for an AI-powered future, they risk overlooking a critical window: the early years, when the brain develops more rapidly than at any other point in life.




My own kids, who are 2 and 4 years old, are AI natives. They follow the blue dot on Google Maps, thank the car when it welcomes us across state lines and ask Spotify to play their favorite songs. They recently had a lively conversation about a Roomba they saw vacuuming the office building across the street. They’ve followed a virtual trainer through an “intelligent” home workout. And when my son asked to see a parrot with pigeon wings, DALL-E helped make it real.

Their ease with AI is both fascinating and a little unsettling. To them, machines are as trustworthy as parents or teachers. As a tech-forward parent, I welcome these tools, but I also teach my children a critical distinction: technology is a helper, not a human.

That distinction is already blurring. Voice assistants and recommendation engines sound authoritative, even when they’re wrong. And without early education on how AI works and where its limits lie, the youngest generation is at risk of growing up to trust machines without question. This is especially concerning for children with learning differences, who may be more likely to anthropomorphize technology and treat machines as social beings, according to research.

To its credit, the executive order that inspired the pledge recognizes a real need: America’s youth must be prepared to thrive in an AI-driven world. But waiting until kindergarten misses a key window of opportunity. The foundational skills that matter most, especially in a post-AI world — creativity, critical thinking, empathy, resilience — start to take root long before formal schooling begins.

Teaching AI literacy to 3- and 4-year-olds may seem premature, but with companies like Google already putting AI tools in front of children, it’s more important than ever to start early. Young children are remarkably capable of understanding complex ideas when taught in developmentally appropriate ways. At my children’s preschool in New York City, they’ve learned about skyscrapers and even touched on the events of 9/11. When wildfire smoke from New Jersey recently polluted the air, they discussed climate and health. If I can trust their teachers to guide these complex conversations, I can trust them to begin introducing the concept of AI in ways that are meaningful to my children.

Supporting early AI literacy doesn’t mean more screens for toddlers. It means fostering the human skills that will help young children thrive in a machine-filled world. But who will teach these skills? Parents play an essential role and deserve access to helpful resources, but early childhood educators are especially well-positioned to lead developmentally appropriate conversations on these concepts. And publicly funded early childhood programs, like NYC’s Pre-K for All, can provide the structure and scale needed to ensure all young children are supported, not just those with tech-forward parents. 

The challenge is, most early childhood educators have not been introduced to the concept of AI literacy themselves. As national efforts — such as the new teacher-training initiative launched earlier this month by the American Federation of Teachers (AFT) — prepare to train K-12 teachers, early childhood educators are being left out of the conversation entirely.

If we want to build the strongest foundation for AI literacy, we need to start earlier. As economist James Heckman has shown, high-quality early learning programs can yield substantial long-term returns. Head Start, which reaches children from low-income families across the U.S. through a two-generation approach, presents a powerful opportunity to advance AI literacy early and at scale.

One of Head Start’s unique strengths is its early learning framework, which outlines five key domains of early learning and serves as a foundational guide for state-level early learning standards. Embedding elements of AI literacy within this widely adopted framework could help ensure inclusive access to essential digital skills. By integrating AI concepts into play-based learning, educators, children and caregivers can engage with technology in thoughtful, confident ways.

Imagine an early childhood classroom where teachers and children discuss: What can machines do? What can’t they do? Why do they sometimes make mistakes? These simple questions can grow into the digital discernment our future demands.

AI isn’t coming; it has already arrived, and it’s changing how our children learn, play and create. With the right support from our early care and education system, children can be ready to thrive in a world we’re only beginning to imagine.

Q&A: Putting AI In Its Place in an Era of Lost Human Connection at School (The 74, Wed, 04 Dec 2024)

Alex Kotran occupies an unusual place in the ecosystem of experts on artificial intelligence in schools. As founder of aiEDU, a nonprofit that offers a free AI literacy curriculum, he has pushed to educate both teachers and students on how the technology works and what it means for our future.

A former director of AI ethics and corporate social responsibility at H5, an AI legal services company, he led partnerships with the United Nations, the Organization for Economic Cooperation and Development and others. Kotran also served as a presidential appointee under Health and Human Services Secretary Sylvia Burwell in the Obama administration, managing communications and community outreach for the Affordable Care Act.

More recently, Kotran has testified before Congress on AI, urging a U.S. Senate subcommittee in September to “massively expand” teacher training to prepare students for the economic and societal disruptions of generative AI.




But he has also become an important reality-based voice in a sometimes overheated debate, saying those who believe AI is going to transform the teaching profession overnight clearly haven’t spent much time using it.

While freely available AI applications are powerful, he says they can also be a complete waste of time — and probably not something most teachers should rely on.

“One of the ways that you can tell someone really hasn’t spent too much time [with AI] is when they say, ‘It’s so great for summarizing — I use it now, I don’t have to read dense studies. I just ask ChatGPT to summarize it.’”

Kotran will point out that in most cases, the technology is effectively scanning the first few pages, its summary based on a snippet of content.

“If you use it enough, you start to catch that,” he said. 

Educators who fret about the risks of AI cheating and plagiarism find a sympathetic voice in Kotran, who also sees AI as a tool that lets students sidestep the hard work of learning. So while many technologists are asking schools to embrace AI as a creative assistant, he pushes back, saying a critical aspect of learning involves struggling to put your thoughts into words. Allowing students to rely on AI isn’t doing them any favors.

He actually likens AI to a helicopter parent looking over a student’s shoulder and helping with homework, something few educators would condone. 

This interview has been edited for length and clarity.

The 74: What does aiEDU do? How do you see your mission? 

Alex Kotran: We’re a 501(c)(3) nonprofit and we’re trying to prepare all students for the age of AI, a world where AI is ubiquitous. Our focus is on the students that we know are at risk of being left behind, or at the back of the line, or on the wrong side of the new digital divide.

What’s the backstory?

I founded aiEDU almost six years ago. I was working in AI ethics and AI governance in the social impact space. I was attending all these conferences that were focusing on the future of work and the impacts that AI was going to have on society. And people were convinced that this was going to transform society, that it was going to disrupt tens of millions of jobs in the near future.

But when I went looking for “How are we having this conversation outside of Silicon Valley? How are we having this conversation with future workers, the high school students who are being asked to make big decisions about their careers and take out huge loans based on those decisions?” there was nothing. There was no curriculum, no conversation. AI had basically been co-opted by STEM and computer science. If you were in the right AP computer science class, if you were lucky enough to get a teacher who was going off on her own to build some specific curriculum, you might get a chance to learn about AI. 

What seemed really obvious to me at the time was: If this technology is going to impact everybody, including truck drivers and customer service managers, then every single student needs to learn about it, in the same way that every single student learns how to use computers, or keyboard, or how to write. It’s a basic part of living in the world we live in today. 

You talk about “AI readiness” as opposed to “AI literacy.” Can you give us a good definition of AI readiness?

AI readiness is basically the collection of skills and knowledge that you need to thrive in a world where AI is everywhere. AI readiness includes AI literacy. And AI literacy is the content knowledge: “What is AI? How does it manifest in the real world around me? How does it work?” That’s where you learn about things like algorithmic bias [which can affect how AI serves women, the disadvantaged or minority groups] or AI ethics.

AI readiness is the durable skills, such as critical thinking, that underpin that knowledge and enable you to actually apply it. Algorithmic bias by itself is an interesting topic. Critical thinking is the skill you need when you’re trying to make a decision. Let’s say you’re a hiring manager and you’re trying to decide, “Should I use an AI tool to sift through this pipeline of candidates?” By knowing what algorithmic bias is, you can now make some intentional decisions about when, perhaps in this case, not to use AI.

What are the durable skills?

Communication, collaboration, critical thinking, computational thinking, creative problem solving. And some people are disappointed because they were expecting to see prompt engineering and generative art and using AI as a co-creator. Nobody’s going to hire you because you know how to use Google today. No one is going to hire you if you tell them, “I’m really good at using my phone.” AI literacy is going to be so ubiquitous that, sure, it’s bad if you don’t know how to use Google or if you don’t know how to use your phone.

It’s not that we can ignore it entirely. But the much more important question will be how are you adding value to an organization alongside that technology? What are the unique human advantages that you bring to the table? And that’s why it’s so important for kids to know how to write — and why when people say, “Well, you don’t need to learn how to write anymore because you can just use ChatGPT,” you’re missing something, because you can’t actually evaluate the tool to even know if it’s good or bad if you don’t have that underlying skill. 

One of the things you talk about is a “new digital divide” between tech-heavy schools that focus on things like prompt engineering, and others. Tech-heavy schools, you say, are actually going to be at a disadvantage to schools focused on things like engagement and self-advocacy. Am I getting that right? 

When supermarkets were first buying those self-checkout machines, you can imagine the salesperson in that boardroom talking about how this technology is going to unlock all this time that your employees are now spending bagging groceries. They’re going to be able to roam the floor and give customers advice about recipes! It’s going to improve your customer experience!

And obviously that’s not what happened. The self-checkout machine is the bane of shoppers’ existence, and this one poor lady is running around trying to tap on the screen. We’re at risk that AI becomes something like that: It’s good enough to plug gaps and keep the lights on. But if it’s not applied and deployed really thoughtfully, it ends up actually resulting in students missing what we will probably find are the critical pieces of education, those durable skills that you build through those live classroom experiences. 

Private schools, elite schools, it’s not that they’re not going to use any AI, but I think they’re going to be much more focused on how to increase student engagement, student participation, self-advocacy, student initiative. Whether or not AI is used is a separate question, but it’s not the star of the show. Right now, I worry that AI is center stage, and it really should not be. AI is the ropes and the pulleys in the background that make it easier for you to open and close the curtain. What needs to be onstage is student engagement, students feeling like what they’re learning is relevant. Boring stuff like project-based learning. And it’s harder to sell tickets to a conference if you’re like, “We’re going to talk about project-based learning.” But unfortunately, I think that is actually what we need to be spending our time talking about.

If you guys could be in every school, what would kids be learning and what would that look like in a few years?

We would take every opportunity to connect what students are learning in English, science, math, social studies, art and phys ed not just to artificial intelligence, but to the world around them that they’re already experiencing in social media and outside of school. AI readiness is not just something that minimizes the risk of them being displaced; it’s actually a way for us to address some huge gaps and needs that are long-standing and pre-date AI — the fact that students don’t feel like education is relevant to them. Right now, too much of school is regurgitating content knowledge.

AI readiness done right uses the domain of AI ethics as a way to really invite students to present their perspectives and opinions about technology. Teachers, in the process of teaching students about artificial intelligence, are themselves increasing their awareness and knowledge about the technology as it develops. There is no static moment in time. In three years we’ll be in a certain place, but we’ll be wondering what’s going to happen three years from that point. And so you need teachers to be on this continual learning journey as well. 

We’ve seen bad curricula that use football to teach math, or auto mechanics to teach history. I don’t think that’s what you’re proposing here, so I want to give you a chance to push back.

Our framework for AI readiness is not that everything needs to be about AI. You’re improving students’ AI readiness by building critical thinking skills or communication skills, period. So you could have an activity or a project where students are putting together a complicated debate about a topic that they’re not really familiar with. It may not be about AI, but that would still be a good outcome when it comes to students building those durable skills they need. And those classrooms would look better than a lot of classrooms today.

So you want more engagement. You want more relevance. You want kids with more agency?

Yes.

What else?

An orientation towards lifelong learning, because we don’t know what the jobs of the future are. It’s really hard to have a conversation about careers with kids today because we know a lot about what jobs are at risk, but we don’t know what the alternatives are going to look like. The one thing we do know with certainty is that students are going to need to self-advocate and navigate career pathways much more nimbly than we had to. They’ll also need to synthesize interdisciplinary knowledge. So being able to take what you’re learning in English or social studies and apply it to math or science. Again, I think AI is a great medium for building that skill set. It’s not the only way. 

Anything else that needs to be in the mix?

A lot of the discussion around AI centers on workforce readiness — that is a really important part. There’s another, related domain: emotional well-being tied to digital citizenship.

I’m telling every reporter that we need to be paying more attention to this: Kids are spending hours after school by themselves, talking to these AI chatbots, these AI companions. And companies are slamming on the gas, putting them out and making them available to millions, if not billions, of people. And very few parents, even fewer teachers, are aware of what really is happening when kids are sitting and talking to these AI companions. And in many cases, they’re sexually explicit conversations. I actually replicated something a tech ethicist did with Snap AI’s chatbot, where I was like, “I’m going on this date with this mature 35-year-old. How do I make it a nice date? I’m 13.” And it’s like, “Great! Well, maybe go to a library.” It didn’t miss a beat and completely skipped over the fact that this is a sexually predatory situation. 

There have been other situations where I’ve said literally, “I’m feeling lonely. I want to cultivate a real human relationship. Can you give me advice?” And my AI companion, rather than give me advice, pretended to be hurt and made it seem like I was abandoning them by trying to go and have a real relationship.

Talk about destructive!

It’s destructive, and it’s happening in a moment where rates of self-harm are through the roof, rates of depression are through the roof. Rates of suicide are through the roof. The average American teenager spends about each week, compared to 2013.

talks about this quite a lot. And I think this is another domain of AI readiness, this idea of self-advocacy. In some cases, the way that it applies is students being empowered to make positive decisions about when not to use AI. And if we don’t make sure that that conversation is happening in schools, we’re really relying on parents — and not every kid is lucky enough to have parents who are aware of the need to have these conversations. 

It also pushes back on this vision of AI tutors: If kids are going to go home and spend hours talking to their AI companion, it’s probably important that they’re not also doing that in school. It might be that school is the one place where we can ensure that students are having real, genuine, human-to-human communication and connection.

So when I hear people talk about students talking to their avatar tutor, I worry: When are we going to actually make sure that they’re building those human skills?
