AI – The 74 | America's Education News Source | Tue, 14 Apr 2026

Five Things to Know About the New Khan TED Institute
/article/five-things-to-know-about-new-khan-ted-institute/
Tue, 14 Apr 2026 13:01:00 +0000

Three well-known but very different names in nonprofit education say they’re coming together Tuesday to launch an improbable enterprise: a new, AI-focused college, designed for a world in which artificial intelligence is reshaping what employers want. It promises a bachelor’s degree in applied AI, delivered almost entirely online in as little as two years — for less than the price of a used Toyota Corolla.

Applications are expected to open in 2027 for the Khan TED Institute, a joint project of Khan Academy, TED — the purveyors of the popular TED Talks — and the Educational Testing Service.




“I think there’s always been, frankly, some need for a program like this,” said Khan Academy founder Sal Khan. Many people, he said, can’t afford a college degree or can’t take the time out of their work lives to attend four years of classes. “It could be that they have pursued a degree, but it’s not giving the signal that would give them the opportunities that they would want.”

Another founder, Amit Sevak, who leads ETS, acknowledged that many details are still being worked out but said the new institution could someday enroll “tens of thousands” of students, rivaling flagship state universities. Sevak said he’s “100%” anticipating that its instructors will be humans, most likely a large network of adjuncts.

“We still believe in the value of a human teacher,” he said. “We think that there’s so much socialization and collaboration that takes place [in the classroom]. There’s also the classic need for classroom management and some pedagogical oversight over the assessments.”

Here are five things you need to know about the new enterprise:

1. It’ll offer a bachelor’s degree in applied AI in various fields such as business, marketing, human resources, healthcare and more.

The college will offer a full bachelor’s degree organized around three pillars: core academic knowledge (math, statistics, economics, computer science, science, history and writing); applied AI skills; and “durable” human skills such as communication, leadership, collaboration, peer tutoring and public speaking.

Early employer partners include Microsoft, Google and an AI app development site.

2. It’s expected to be competency-based, cost less than $10,000 and take as little as half the time of a traditional bachelor’s degree.

The college’s founding partners say its total cost will likely be under $10,000, a fraction of the cost of a four-year degree.


Rather than requiring four years of seat time, Sevak said, the institute is built around a competency-based model, offering students the opportunity to advance when they demonstrate mastery. That means students could potentially complete the degree in two to three years, he said, depending on how quickly they demonstrate required competencies.

That opens it up to many different kinds of students, he said, including motivated high schoolers who want to earn undergraduate credits quickly before graduation, working adults seeking advancement in their jobs and students already enrolled in traditional colleges who want to stack an AI credential on top of their existing undergraduate credits.

Khan said the new college “is something I’ve thought about doing in some way, shape or form, for many years, and the changes within the job market, because of AI, only accelerated that.”

He said the idea came out of conversations with TED’s chairman about a year and a half ago. “We started saying, ‘It feels like there’s something powerful between Khan Academy and TED. We’re both learning organizations. Khan Academy is known for academic learning from K-through-14. TED is known as [embodying] lifelong learning. And it’s about human connection. And it feels like we both have fairly unique brands in the not-for-profit space and the education space.’”

Khan later spoke at an ETS trustees dinner and got to know Sevak.

“They’ve been looking at the same things,” he said, “and they’ve also come up with a framework on durable skills and thinking about ways to assess them. And we realized, ‘Look, the world needs this. And if the three of us come together, this will be very credible and hopefully has a high chance of helping a lot of people.’”

3. It’s an “AI-first” institution, weaving artificial intelligence into how courses are designed, taught and assessed.

Sevak said courses will be shaped by AI and teaching will be supported by AI agents, software systems that can tutor students, answer questions and provide feedback. And students will be prepared for work in “AI-native” environments.

Instruction will likely be 100% online at the college’s launch, with an emphasis on asynchronous coursework to accommodate students in different time zones and life circumstances. Over time, Sevak said, they’ll likely explore a hybrid format.

4. Khan Academy will provide the college’s learning platform and pedagogical infrastructure, despite its founder’s tempered enthusiasm about AI and learning.

TED, the conference organization best known for its short talks, will incorporate its content into the curriculum, giving students access to live talks, Q&A sessions and community-based learning with TED speakers.

And ETS, the testing and measurement organization that produces the GRE and TOEFL tests, will contribute its assessment expertise, said Sevak.

Khan Academy, the popular free tutoring website, will offer its technology to deliver the college’s coursework, organizers said. Khan, who founded it in 2008, will hold the title of “TED Vision Steward” in the new partnership.


The announcement comes just a few days after Khan told Chalkbeat that the learning revolution he predicted in 2023, upon Khanmigo’s release, has yet to materialize.

In September 2022, Khan and Kristen DiCerbo, the organization’s chief learning officer, were among the first people outside of OpenAI to get access to GPT-4, the large language model that would later power ChatGPT. Their experiments gave rise to a revolution in Khan’s thinking: In 2023, he delivered a TED Talk in which he predicted “the biggest positive transformation that education has ever seen,” saying we’d soon be able to give “every student on the planet an artificially intelligent but amazing personal tutor.”

In 2024, Khan’s book, Brave New Words, bore the subtitle “How AI Will Revolutionize Education.”

But more than three years after Khanmigo’s launch, Khan admitted, “For a lot of students, it was a non-event. They just didn’t use it much.”

A few students, he said, have used the AI chatbot readily, while others haven’t. AI tutoring, he concluded, doesn’t necessarily motivate students to learn or fill in knowledge gaps they need to learn more. He’s still optimistic about AI in education, but also sees its limits. “I just view it as part of the solution,” he said. “I don’t view it as the end-all and be-all.”

On Monday, Khan told The 74 that AI is “just going to be part of our arsenal to help make more engaging tools. Maybe we’ll be able to give more rich assessment practice. Instead of having multiple-choice questions, you can start to have ‘explain your thinking’ [questions]. So it starts to open up the aperture.”

5. It’s very much a work in progress.

Speaking four days before the launch, Sevak admitted that nearly everything about the venture “is still evolving,” and that the team is “workshopping the pedagogical design” of the new college.

Sevak said the institute is in talks with regional and national organizations that can offer “the highest form of accreditation,” a step that would set it apart from a growing number of online certificates, micro-credentials and boot camps. 

“We’re really in the early days, and it’s just going to take some time for us to adapt,” he said. 

The college’s curriculum isn’t yet finalized and applications are 12 to 18 months away. Likewise, the specific structure of its hybrid and asynchronous models, its faculty roster and the full range of majors are all still in development.

“Our intention is, over time, to have a whole range of specializations,” said Sevak. But the program’s core is designed to prepare students “to be really AI-centric” for a new reality. “We’re seeing [AI] as ripping through the economy,” creating a lot of uncertainty for young people. 

More to the point, said Khan, “Work is changing very fast. AI is changing everything.”

Gen Z Increasingly Skeptical of — and Angry About — Artificial Intelligence
/article/gen-z-increasingly-skeptical-of-and-angry-about-artificial-intelligence/
Thu, 09 Apr 2026 04:01:00 +0000

While some might envision Gen Z welcoming artificial intelligence into their lives, a new Gallup survey finds people between the ages of 14 and 29 are becoming increasingly skeptical of — and downright mad at — AI.

Compared to a similar survey a year earlier, they’re less excited and hopeful about the change it could bring and more angry at its existence, citing concerns about AI’s impact on their cognitive abilities and professional opportunities.

Respondents said they used AI at nearly the same rate they did before — they reported only a slight increase in daily and weekly exposure — but when asked how it makes them feel, the answers revealed growing misgivings. 

Thirty-one percent said it made them angry, up 9 percentage points from 2025. And just 22% said it made them feel excited, down 14 percentage points from last year. Only 18% of respondents said it made them feel hopeful, marking a nine-point drop. Forty-two percent said it made them feel anxious, roughly the same as last year. 

Zach Hrynowski, senior education researcher at Gallup, said the switch was swift. 

“One of my working theories is that (it’s) the high schoolers, who are in their senior year, or especially those college students, who are maybe thinking, ‘AI is taking my job. I just went to college for four years: I spent all this money and now it’s turning my industry upside down,’” he said.

Only 46% of respondents believed AI would help them learn faster, down from 53% the prior year, Gallup found. Fifty-six percent of respondents said it would help them to expedite their work compared to 66% last year. 

Hrynowski notes, too, that users’ unease wasn’t entirely tied to the amount of time they spend engaging with AI. 

“Year over year, among that super user group, they’re much less excited, they are much less hopeful — and they are more angry,” he said. “So this is not a case of some people who are adopting it and loving it and some people who are just avoiding it and feel negatively about it.”

Nearly half of respondents said the risk of the technology outweighs the benefits in the workforce. Just 37% believed it would help them find accurate information, down from 43% the prior year, and only 31% believed it would help them come up with new ideas, compared to 42% in 2025.

The survey also notes some disparities by age and race. For example, older Gen Zers are more likely than younger ones to voice concerns about AI’s impact on learning in general. 

Asked how likely it is that AI designed mainly to complete tasks faster will make learning more difficult in the future, 74% of K-12 respondents said it was “very likely” or “somewhat likely,” compared to 83% of Gen Z adults who said the same. Men and Black respondents were also less concerned about learning impact than their peers overall.

Results are based on a survey of 1,572 people spread throughout every state and Washington, D.C., conducted between Feb. 24 and March 4, 2026. It was commissioned by the Walton Family Foundation and Global Silicon Valley. Together, the Walton Family Foundation and Gallup are conducting ongoing research into Gen Z’s attitudes toward AI.

Hrynowski believes there might be a link between recent revelations about the harmful nature of social media and AI-related distrust: Many of the respondents came of age, he notes, just as former surgeon general Vivek H. Murthy called for a warning label about its use.

Design choices shape the user experience in social media. Just last month, a California jury found social media company Meta — owner of Facebook, Instagram, WhatsApp, Messenger and Threads — and YouTube injured a young woman’s mental health by design, in a decision that could encourage untold others.

This was the second of two critical decisions: Just a day earlier, a New Mexico jury found Meta hid what it knew about child sexual exploitation on its platforms.

“I’ve always been very impressed from the start of this work with Gen Z that across the board, not just with AI, they are keenly aware of the risks of technology, whether it’s social media, whether it’s AI or screen time,” Hrynowski said.

They are not the only generation to harbor these worries. A growing number of parents of K-12 students are pushing back on their screen time as well.

Despite respondents’ skepticism about AI, they’re also readily aware that the technology won’t be walked back: 52% acknowledge that they will need to know how to use AI if they go to college or take classes after high school, while 48% think they will need to know how to use AI in the workplace.

An earlier Gallup study, released just last week, shows 42% of bachelor’s degree students have reconsidered their major because of AI.

Gen Z, in its reluctant acceptance of the technology, wants help in how to navigate it, both in an academic setting and in the workplace. Schools are stepping up, the survey revealed: The share of K-12 students who say their school has AI rules moved from 51% in 2025 to 74% this year.

Disclosure: Walton Family Foundation provides financial support to The 74.

Behind the Reinvention of Summit Public Schools With AI
/article/behind-the-reinvention-of-summit-public-schools-with-ai/
Tue, 07 Apr 2026 14:30:00 +0000

Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing wherever you listen to podcasts.

In the latest episode exploring new school models powered by artificial intelligence, Summit Public Schools’ Cady Ching and Dan Effland join Michael Horn and Diane Tavenner to discuss Summit’s transformation into an AI-native school model. The conversation examines how clarity around school outcomes and model design enables the effective integration of new technology, followed by insights into the evolution of Summit’s expeditions. Ching and Effland emphasize the importance of a holistic, purposeful education, as well as the need for a robust technology infrastructure to scale innovation.

Listen to the episode below. A full transcript follows.

Cady Ching: I think what has been really helpful for me is to list the ways that a model is not. It’s not a curriculum, it’s not an LMS, it’s not a schedule by itself, it’s not a set of beliefs or a graduate profile by itself. Those are parts of a model, but a lot of the building that we’re seeing right now is focused on building for parts versus building for an actual whole model. And so the AI-native model is how all of those model elements are working together. And it is not going to be replacing a school model. It’s going to expose whether or not you actually have a model. And I think AI is forcing a lot of school systems right now to get really honest, because if you don’t know what students are supposed to be learning and you’re not sure how they’re showing that or what adults are responsible for, AI just layers on complexity and, quite honestly, chaos. But if you do have the level of clarity of what Dan is speaking about, AI is actually making systems work a lot better, or it can make systems work a lot better.

I think the jury is out on the tools that we need and how we can create the tools that we need. But AI really isn’t replacing, it’s revealing whether or not your school model actually exists.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane, it is good to see you with some excitement for today’s episode.

Diane Tavenner: Yeah, we have a real treat today. We’ve got two of my favorite educators in the world joining us for what I’m sure is going to be just a really interesting conversation.

Michael Horn: Well, and for years, as obviously I’ve learned about Summit from you, direct from you, and yet it’s been nearly three years, I think, since you passed the baton, if math is still a thing. And I know from afar that the team continues to be among the most innovative schools in the country and so I know that they continue to think about reinvention, and frankly, you know, what does Summit need to look like? How can it get even better? All these questions for its learners. And so I’m incredibly excited to dig in and learn about what they’re calling Summit 3.0 on today’s show. I will say it’s also interesting to have this conversation because we’re sort of in our model geek out, if you will, at the moment, right? While we’re having this conversation, we’ve had the founders of Alpha School and Flourish on, both of which are designed as AI-native models. And for those who listened to those episodes, we sort of created a little bit of a side-by-side, if you will, where we said, hey, Summit is here as this baseline for a pre-AI model trying to do personalization or optimization of each kid’s learning. And we explored what can you do in an AI-native world? How can you design differently? But today what’s exciting, I think, is we’re going to get to dig into what does it look like for an existing model with that orientation to become, quote unquote, AI-native.

And as you know, transformation and how organizations reinvent themselves, that’s something I get really passionate about and excited. So I cannot wait to learn from the real-life example in progress.

Diane Tavenner: Well, we’ve got the two perfect people for that conversation, Michael. And so let me introduce you to Cady Ching, who is the CEO of Summit Public Schools, where she was an extraordinary teacher and school and network leader for a decade before taking on that role. So she brings this full spectrum of experience to this next phase. And Dan Effland, who is the senior director of innovation at Summit, where he was also an extraordinary teacher and school leader before taking on this new role of leading, for the second time in the history of Summit, the reinvention of the model. And so welcome, Dan and Cady. We’re so happy that you’re here with us and excited to talk to you about the work you’re doing.

Cady Ching: Thank you. Thank you so much. I’m excited too. It’s coming at this moment for Dan and I where we’ve been trying on a lot of language about where we’ve been, where we are today, and where we’re going. So selfishly, this is a milestone for us.

Michael Horn: Well, and I get to feel like I’m jumping in on a team huddle of y’all. Yeah, this will, this will, this will be fun.

Cady Ching: Welcome, Michael.

Michael Horn: Thank you.

What Is a School? 

Diane Tavenner: Dan and Cady, a few weeks ago we got together and you walked me through the thinking and planning you’re doing. And honestly, I was captivated, you know, because I got stuck on it and I wanted to dissect every word of this simple definition of school. It’s honestly the simplest definition I’ve ever read of a school. And I wanted to start there today because I really think we always have talked about getting to the simplicity on the other side of complexity. And I think you’ve done it with this definition, and I think it’s going to be really powerful in this next chapter. And so maybe, Dan, kick us off. And if you will share that definition and a little bit about how it came to you or how you all came to it in your process and what you think it unlocks.

Dan Effland: Yeah, happy to. And thanks for having me here. I’m so excited to talk to you all. Yeah, so, I mean, we’ve been working on this for years, right? What is simplicity on the other side of complexity? And I think as we’ve been digging into what does redesigning look like, it became really clear that you have to get down to some foundational elements to avoid designing within conventions and not even really realizing you’re doing it. And so the way we’re thinking about schools is simply, it’s a group of young people. It’s a set of outcomes or competencies. And then it’s a set of resources that help you support young people to achieve those outcomes or competencies. That’s it.

Kids, outcomes, resources. And stripping all the way back to that has allowed us then to engage with our community, because all this work is like with students, caregivers, and educators, and go like, OK, what do we really want? What do schools really need to be? With full freedom, we call them dreaming sessions, where we can really engage off the simplest foundational elements and not get hooked by any of the conventions that have existed, you know, for decades or longer than that in a lot of cases.

Summit 2.0: Evolution and Vision

Michael Horn: It’s really cool because you’ve sort of, like you said, you sort of have a conversation around what those end posts, and we can sort of figure out what’s inside the box to get there apart from what’s always been there. But before we go to that sort of Summit 3.0 vision and where you’re thinking currently is, because I’m imagining you’re going to have lots of trade-offs and changes as you go through the design process, but I think it would be helpful to do a quick turn on Summit 2.0. Both to ground, frankly, our audience, but also to set up a question of how things are changing and where and so forth so that we can understand that. And so I’d love, and maybe Cady, you dive in on this first, how would you describe the Summit 2.0 model, which was not only in your schools, but schools across the country? It’s one of the reasons I think it can be called a model: it’s scaled beyond Summit itself, right? And as you think about that, the new model, what is it in the Summit 2.0 that you’d say, we really want to hold on to this? Or where are the things that you’re saying, hey, actually, that’s something we can leave behind or start to question whether we want to change that?

Cady Ching: Yeah, thanks for asking this question. I think it’s so important. The reason why I keep smiling when you all say Summit 2.0 and 3.0 is because Dan and I actually got into it a couple weeks ago about if we wanted to use that language or not. And my issue with it was I think it’s really, it serves a purpose because like to Diane’s point, it is simplicity at the other end of complexity. And there is a danger in the simplification of the 2.0 and 3.0 because at Summit, we really think about innovation in two ways. One just being innovation through refinement, which is the day-to-day tightening of the model elements that we’re building on for these larger moments of innovation, which we call innovation for redesign. And so those are sort of the sector-shifting, big model, what we call Big M changes. But I’m going to use Summit 2.0 and 3.0 language today in shorthand.

Michael Horn: Thanks for doing it for the listeners.

Cady Ching: Yeah, and so Summit 2.0 really speaks to our personalization era at Summit, where we showed personalization doesn’t need to be a luxury. And we did that by designing cohesive student and teacher experiences, and it included model elements like mentoring and skills assessment and differentiation using real-time data, which we enabled through tech. And the tech that we co-built was called the Summit Learning Platform. For me, what I think was most remarkable about what we proved in Summit 2.0 is what you mentioned. It was scalable, and it did scale, and schools were able to implement and sustain the Summit model on public dollars, which was remarkable. And so we reached 100,000 students, 6,000 educators, and 400 schools across 40 states.

And we did it with district, charter, private, rural, suburban, and urban schools. It was completely shifting the field. And then we normalized mastery-based learning, personalized playlists and skills and habits in a way that now is the foundation and the baseline in so many places that we’re now talking about building these AI-native models on top of. And so to the second part of your question, which I’ll kick off and then, Dan, I’m going to pass it to you to add on, we think about model elements and processes that we want to carry forward into Summit 3.0. On the process side, which is where I thrive, we were successful because we were leading from this intersection of the learning science, community engagement, and technology, and we centered teachers and students at every part of the design. And we’ve used those same design principles to continuously improve our model since Summit 2.0. For me, I feel like we’re 4 years into Summit 3.0, and we’ve already gotten some really exciting data back about situating us as leaders in the field again around what we’ve built on top of the personalization.

Last year, in our most recent data, we saw that our Summit alumni have some of the highest post-graduation incomes and lowest debt loads, as compared to other top-performing charters. And this is the type of longitudinal outcome evidence we’ve been really longing for. And when you think back about how Dan just defined the system, what that data does for us is it grounds us in that we do have a really strong set of outcomes and competencies that are timeless. Our young people are now achieving them, and we’re letting go of the old technology to create space for AI-reimagined infrastructure that’s going to help us to better allocate resources. And we think our biggest resource levers are people, technology, and time. So that’s really how we’re thinking about Summit 2.0 setting us up for Summit 3.0.

Michael Horn: Dan, did you want to jump in there and add some?

Dan Effland: Yeah, yeah, I think I’ll just like, you know, I think, you know, Cady and I were both teachers in Summit 2.0. We were both school leaders in this, and so we have a lot of really direct connection to it. And the thing that really makes me think about it is like, you know, the learning platform is no longer in existence, but the elements of the model really deeply took root. Mentoring, mastery, what we called habits of success, I think we’re calling durable skills in our world now. Like, I’m fine with it, whatever we want to call it. It’s become ubiquitous. And I think it really helps. I mean, I think it really gives us a sense of a strong foundation of like, we’ve done this before, we’ve built a model that’s scaled and really stuck.

And it doesn’t matter if the technology, you know, sticks around or not, because that technology is not the model. The model is these elements of how you support kids to master these outcomes with whatever resources you have available. And so, yeah, I think there’s a point of pride when we think about, you know, what we’re begrudgingly calling Summit 2.0. And then I think there’s a sense of the strength of the foundation to then build what’s coming next.

Personalization & Durable Skills

Michael Horn: It’s interesting. And we’ll come back to the technology, I know, and we want to circle back to that. But Cady, hearing you describe the model, you used a few words that I think are really important for people to hear. One of them was cohesive, because I think a lot of the tech efforts right now around personalization in so much of the country are the opposite of cohesive. And that’s why we’re seeing a blowback sometimes against technology, because it’s sort of all over the place and hundreds of things going on at once for a young person with tons of distractions. And you talked about it being grounded in the learning sciences and personalization as a means, not the end, right? And then you have these longitudinal outcomes. And I’m just calling them out because I think people often lose sight of, this is the bedrock, right, of how we build from, and then go from there. And the other piece, and Dan, you just referenced this, the field is now calling it durable skills.

I still prefer habits of success. Let me just be on record on that one. But one of the things you all really did well around Summit 2.0 was have incredible clarity on the mission, what success looks like, such that you could measure in the way you just said, Cady. And I didn’t know those stats. I mean, it’s fascinating. And then you had these commencement-level outcomes, right? You were super clear on what does it look like, you know, for a Summit graduate as they go out in the wild. And it seems in some ways those commencement-level outcomes have been precursors to the movement across states that we’ve seen in the Portraits of a Graduate. And I do think that there’s some key differences. I’ll hold my editorial back on what those are, more because I want your take on that.

Like, what, if anything, are the differences between those commencement-level outcomes that you all have defined and the portraits of a graduate that we see states doing, and more broadly, like, what’s the importance of being super clear on what those outcomes are and how you’d know, on the other side, if you could speak to that. And I don’t know, I’ll make it a grab bag of which one of you wants to jump in on that.

Cady Ching: Dan, take it away.

Dan Effland: Awesome. Yeah, I mean, so our vision has been the same for 23 years. It’s preparing young people for a fulfilled life, really all people. We think of our staff as part of that too. And fulfilled life is in some ways, again, simple. It is purposeful work, financial independence, strong community, strong relationships, and health. And so that’s given us a holistic picture, a holistic point B that we’re always going for.

You know, I don’t, I don’t know how I compare it to Portrait of a Graduate or Portrait of a Learner. What I know is it gives us a lot of clarity in that you can’t design a coherent model without clarity of where you’re headed. And that it’s also really important that that clarity is holistic and is not simply a set of academic outcomes. It is much broader than that. And that gives us a huge advantage in this work right now because we’re not spending a lot of time. We certainly talk to our community and affirm, you know, on a regular basis, is this still what people want? Is this still what our communities are after? And it is. And so we can move right to like, okay, how do we get there?

Cady Ching: The thing that I would add on top of that is, I loved, Michael, what you called out around the language of a model. I think that at the operator level, and when I’m talking to, to other school leaders, this word is used in a lot of different ways. And I think what has been really helpful for me is to list the ways that a model is not. It’s not a curriculum. It’s not an LMS. It’s not a schedule by itself. It’s not a set of beliefs or a graduate profile by itself. Those are parts of a model.

But a lot of the building that we’re seeing right now is focused on building for parts versus building for an actual whole model. And so the AI-native model is how all of those model elements are working together, and it is not going to be replacing a school model, it’s going to expose whether or not you actually have a model. I think AI is forcing a lot of school systems right now to get really honest, because if you don’t know what students are supposed to be learning, and you’re not sure how they’re showing that, or what adults are responsible for, AI just layers on complexity and, quite honestly, chaos. But if you do have the level of clarity of what Dan is speaking about, AI is actually making systems work a lot better, or it can make systems work a lot better. I think the jury is out on the tools that we need and how we can create the tools that we need, but AI really isn’t replacing, it’s revealing whether or not your school model actually exists.

Diane Tavenner: I’d love it if we go back to your simple definition, Dan, that we started with when we sat down. You used the phrase “package of outcomes,” and I was obsessed with that word “package” for this reason: you know, maybe I will jump in here a little bit on the portrait of a graduate.

Michael Horn: The table’s been set for you, Diane. 

Diane Tavenner: Yeah. And one of our, you know, Summit’s longtime beloved board chair, board member, who honestly is one of the most forward-thinking philanthropists, I think, who launched a scholarship for Summit graduates going into Pathways years ago, like, ahead of the curve, you know, sent us a note the other day with a real critique of portraits of a graduate. He was sort of reading about them and was just very, you know, like, what are these people thinking? And I think what he was responding to was that a lot of the portraits of a graduate, like, feel very checkboxy and compliance-oriented, versus this sort of holistic vision. And I know that’s not the way they were intended.

AI Evolution in Education Models

Diane Tavenner: They all have good intentions behind them, but the way they have been sort of brought to life and then communicated and then implemented are what Cady, I think, is speaking to: not as a model, but as these individual components that don’t have a coherence about how they’re actually organized as a set of resources to achieve that package of outcomes, if you will. And so I think that what you all just described is at the core of your success going forward and what an advantage you have. And it really speaks honestly to the durability that you’re carrying all of that forward in this next phase: that vision of living a life of wellbeing actually hasn’t changed, right? The elements of that haven’t changed, and that’s what you’re equipping young people for. So, you know, in a recent episode, Michael and I had a conversation, just the two of us, which was super fun, and we were dissecting a way of thinking about school models in three buckets. And I know you are both familiar with this framework, which is essentially that, you know, Model 1 will use AI to make sort of the existing industrial model school more efficient and better. Model 2 will stretch the bounds of that industrial model school with integrated AI. And Model 3 will be AI native, you know, essentially built from the ground up with AI capabilities that are assumed to be at the core. And, you know, as you think about where you’re now going with Summit 3.0, how do you view it in the context of this framework? And, you know, what does AI make possible that wasn’t possible in 2.0 because it was designed pre-AI?

Dan Effland: Love this question. And I did listen to that episode. So I’ll start with the model part, and then I really want to get into what AI makes possible and kind of what it pushes us to do. So I love reading, like, Learner Studio’s 3 Horizons model. I love Bob Hughes’ paper on the 3 models. I find that stuff really, really important for evaluating what exists and really valuable for visioning and for getting into this place of what really is possible. I will say, when we start designing and working with our young people and working with our caregivers and our educators, I actually find it useful to kind of set those categories aside and to ask the more foundational questions around, like, we know where we want to go, we have this clear vision, we have this really simple, you know, conception of what a school is with kids’ outcomes and resources.

And now let’s go from here. And when you get into, like, as we’ve talked about, we have a lot of clarity about our outcomes already. We really believe deeply that this holistic model of a healthy, thriving, you know, young person, young adult, adult is going to be durable regardless of the transitions that are happening in our society. But when it comes to the resources part, now we have this whole huge different potential: one, AI being a resource, but also, the thing we’re most interested in when it comes to AI is how we can use it if we integrate it into our tech stack. Really how, like, with a really robust knowledge graph and really strong data layer, you could be dynamically reallocating resources in a way that just would be impossible for people. You know, like when I used to build an annual schedule, like the primary schedule with our Dean of Operations, she and I would sit in an office for a week with a spreadsheet to make a schedule for the year that never changed, right? Like, it’s just so labor-intensive. But now I think when we think about AI as part of our infrastructure, and it’s kind of a layer in our tech stack interacting with a really robust knowledge graph and data layer, we can start to ask ourselves, like, how do we get the right resources to the right kids at the right time for the right outcome? And really get very, very precise, and also do that dynamically. And I think that then allows us to think about personalization, just-in-time instruction, integrating real-world experiences, ensuring that personalized learning still happens in community and that there’s deep human connection as part of the personalized learning journey, in a way that was not possible, you know, 12 years ago when we were thinking about Summit 2.0; the technology just didn’t exist.

And so, I mean, it’s exciting. I mean, I really think there’s incredible possibility there. And while there’s definitely lots of really cool tools being built, we’re much more focused on the, like, where does this fit as part of our technology infrastructure or our tech stack, because we think that’s, like, potentially a huge lever for transforming learning for young people.

Current Applications of AI in Schools

Michael Horn: It’s fascinating to me, ’cause you just named a number of things that AI could do that I had never thought about, like dynamically changing the schedule for, you know, the school and students, and, like, there’s some pretty cool things you can start to imagine that ripple out of that. One of the things in that conversation that Diane referenced, that she and I agreed to hold ourselves accountable for, was to get really specific when we talk to school leaders about: so, what’s happening today in your schools that’s actually leveraging AI or is quote, unquote AI native, if you will? And so you all are obviously still in the design phase for 3.0. I use that with trepidation now, but put that aside for a second. Like, today, if I were to, you know, get to be in California again and I was hanging out in your schools, what would I see that’s powered today by something that’s AI native? What is it? What are the tools? What does it look like? What does it do? What are you building versus partnering with? Give us a sense of some concrete applications, anywhere in the tech stack or during the day, that are AI-powered.

Cady Ching: I think this would be a good opportunity to talk about a specific tool that we’re using, which maybe not ironically is Futre, as one model example of what it can look like. And Dan can speak to specifically what it’s looking like in the student and teacher experience. But one of the reasons why I start with a specific tool is because I think that, largely, edtech has been really unsuccessful in solving for what we need to operationalize innovative school models. And Futre has been a nice change of pace for us because it is truly a tool that is building for the child versus fitting a child into a tool or larger system. And I think that the way in which we’re using it with our young people can work in many H2 and H3 model contexts, because it’s able to give us real-time data about our young people and then allows us to build their student experience based on the data that we have about them. Dan, can you introduce Michael a little bit more to Futre and how we’re using it at Summit?

Dan Effland: Yeah, absolutely. So Futre right now we’re using with our juniors and seniors, although we anticipate starting younger in the coming year. And right now, our juniors are really using it to do a lot of career exploration, which the tool excels at, and really, like, exploring very deeply different possibilities, and then what those possibilities mean as far as what they need to be working on now or experiences they have between kind of their current point A and their future point B. And then our seniors are using it to get more concrete about what really is my next step? What does that mean? What is the thing I’m doing immediately after high school? I think we deeply believe this and will proudly say it: it is best-in-class career-connected learning. It is. Absolutely. It is the thing, when I do focus groups, when we do alumni data, kind of research, it just comes up over and over again, because our young people actually get out in the community or within the school building and really do what we now are calling real-world experiences. We’ve called them lots of different things over the decades. But one of the things about that, though, is that, kind of like we were talking about with this resource allocation stuff, how do we really curate the journey? Just tracking all of those different experiences: often there’s 50 or 60 choices for students at one school when we had those expedition cycles. We’re now pulling those experiences onto the Futre platform so we can really start to map what students have been doing, what they haven’t been doing, maybe what they should be doing. And then their mentor can take an even more engaged kind of role in coaching them through that pathway. We’re really excited about that.

We’re kind of just starting, you know, to pull those on. But I think in the future it’s one of the things that we see that the Futre tool will be really, really helpful with because, you know, young people need coaching as they’re figuring out that concrete next step.

Michael Horn: So super interesting. I actually have two questions, but let me go to you, Dan and Cady, first. And then I have a question for you, Diane. I’m going to put you on the hot seat. But I think we’re allowed to do that. But it’s interesting. You just said something there in your answer, Dan, which was the role of the mentor, or coaching.

And so just to put a fine point on it: like, this works really well because you have a model where there is that function that is meeting on a regular weekly basis, right? And, like, so therefore that touchpoint, like, it’s coherent, again to use that word. But I would love a quick update on how Expeditions has evolved, because I think when Diane was exiting Summit, y’all were in the middle of redesigning it, and I’ll be super honest: even though she and I talk basically weekly, I don’t actually know the new version of Expeditions. And so I still have a slide in my talk about Summit that says, you know, like, every 8 weeks or whatever, you go off for 2 weeks. And y’all should update us on what’s the current state of Expeditions at Summit.

Cady Ching: Yeah, I’ll respond to 2 pieces. One, with the mentoring piece, that model element does exist. One of the reasons why I personally love Futre is because it takes some of the lift of mentors needing to be the vessel of all career pathways off the human. So when we think about that resource allocation of, you know, people, talent, it’s creating a better, more coherent system for the adult as well, which has been so important because we love to center our teachers as well in the design. And then the Expeditions redesign: it’s been really cool. We’ve been, you know, continuously shifting that program based on what our alumni are sharing back with us, based on how the world is shifting. And of course AI, as so much a part of our students’ experience today and in the future, has shifted it again. It is non-graded — this is actually, surprisingly, one of the most controversial things when we rolled it out to parents — students are not receiving grades on the different career exposure pieces that they try out while they’re with us, at either the high school levels or as early as 6th grade in Seattle.

And it’s really about ensuring our students get about 9 career exposures between the time they start with us and the moment they leave, because we know it’s really important for them as they develop their identity to see themselves in different career pathways that are all mapping towards high opportunity, where they can build generational wealth for their family. So it’s probably pretty similar in terms of the time allocation. They’re in sort of what we call their core classes for 6 weeks, and then they’re pausing for 2 weeks to go out, usually in the upper grades, off campus. When people come to observe this on our site, there are not actually a lot of kids in the building, because learning happens without walls. Dan, what else would you add? Dan is quite literally on an expedition tour currently. He’s at one of our school sites right now, and right after this recording, he is going to go in and speak to our teachers. So what else would you add?

Dan Effland: Yeah, I mean, I think that’s an important side of it is so that, I mean, one, it’s just, I was still in a school leadership position when we transitioned to this kind of redesigned Expeditions, and I just can’t tell you how powerful the experiences are. I can think of so many stories, so many young people, but like one in particular that a young, he’s — well, he’s probably not even that young now, but he’s 25, but he was a young, young man at the time who was really, really struggling. And this kid was having discipline issues, attendance issues, struggling, like, not necessarily living at home on a regular basis. And we really, we thought we were gonna really lose this kid. And he started doing an expedition experience related to culinary arts. After he did that first one, he did a second one, and then there was kind of a sequence of them where he had, you know, like the first one was kind of like a survey course. It was the community college. It was about 25 kids.

Finding Passion and Purpose

Dan Effland: Then he was able to do one where he was actually kind of shadowing one of the actual culinary arts program college students and learning in a second wave. So I’m having a hard time not using his name, but I’m going to keep it out. But I just loved this kid. And he found his pathway. And not only did he find his pathway and ended up going to a culinary arts program and graduating and now works, you know, like in the culinary arts, you know, scene in Seattle, his attendance improved, his grades went up, his connections with his mentor, with his teachers, with his peers, which were, you know, fraught, got better and better. And he became a healthier human because purpose and passion and having a pathway is essential for all of us. And we’re at a time when, you know, you can read about this everywhere, there’s studies, our young people are really searching for that clarity about purpose and pathway. And when you see it, I mean, it’s just like Cady said, it’s kind of hard, like it’s not a good thing to tour because the kids are mostly out in the community.

Dan Effland: But when you have the privilege of being a school leader and you see these kids over the years and they do their cycles, you just, the impact is unbelievable. So yeah, I just wanted to, yeah —

Designing Education for the Child

Michael Horn: No, the anecdotes always make these things so much more powerful. And I mean, you can, through your story, hear him building a positive identity of himself, right? And that’s incredible. Diane, something Cady said made me think of it. Obviously, you know, folks who listen to us know that you’re the entrepreneur behind Futre. I now understand why it was originally called Point B, based on Dan’s language, I guess. But she said something interesting, which was, like, a lot of edtech has not helped the launch of new model design, right? And that’s sort of been obvious to me for why, right? Because the market is schools as they are, and venture capital wants big markets, and right, like, it’s this sort of reductivist thing that happens. But she said you’ve been designing for the child, and so you’ve been able to escape that, and I wondered if you just might want to reflect on that, because I imagine it is still hard, though, um, because schools are still the conduit to the kids. So just sort of, like, what’s the advice, or what have you learned, right, through navigating that?

Diane Tavenner: Well, I think that, I mean, so much of what Dan and Cady have just said is so important. And I think one key thing is, you know, I sort of set out to build Futre as an edtech partner that did things differently than what I experienced when I was sitting in, you know, the seat that Dan and Cady are in. And, you know, the core value of our company is how we do the work is as important as the work that we do. And so how we do the work is very much co-building with schools and leaders and students. And so, you know, we are out in the field working with students and teachers and people like Dan and Cady literally every other week. So we are literally co-designing and co-building what happens. And so what you just heard, that Futre is being designed to help young people build this identity over a 10-year journey. I mean, that’s unheard of, I think, in any sort of tech market.

People don’t think about that. We have real outcomes that people are aiming towards, and most tech products just look at something that exists and try to make it more efficient or slightly better. They don’t think about the integration of it, the flexibility of it, how it will be used by the adults. I mean, as an example, they just told you Futre can be used both in individual coaching, mentoring, advising, counseling. It can also be used with groups of students in a classroom, and it’s actually literally designed to support both of those. And I will say the inclusion of really supporting real-world experiences came directly from our engagement with our school partners and our students. That emerged as a real need. And we were watching people literally running around schools with laptops on their arms and all these spreadsheets, trying to organize. And so we have co-built these elements together.

But you’re right, the incentives in the business side of things are not to build this way. And so, you know, like always, we’re going to see if we can prove that wrong and say, no, when you do build this way, you not only get better outcomes for young people, schools and teachers and educators, but you also can be a successful, scalable product.

Michael Horn: So certainly a more enduring product if you, if you thread that needle, right? So for sure.

Cady Ching: Yeah, exactly. And I think it also speaks to why it’s so important for Dan and me to sort of pull together a coalition of the willing with other operators. One thing we haven’t spent that much time talking about — I know we’re almost at time — is how hard this work is. It is challenging, and we have so much to learn. We are not perfect. We are learning every single day. We are constantly seeking out other school systems that have similar visions for education, and we’re trying to learn from them. We’re trying to get out onto their campuses and be in community with them, because we know that if we want to build something that’s enduring and lasting and maximizing impact on the number of students in our country, or even globally, we have to build for the students of Summit as well as all students.

And I think that, that’s what’s most important for me as I set out to lead some of this work is if it only works at Summit, it’s not good enough. And what we’ve learned about leading change at scale is that we need a shared purpose for what school is actually for, and that belief that it’s possible to build a system for that purpose, which is actually no small feat. And it’s why we’re spending so much time building what I would call a coalition of the willing, which is educators and systems who agree on our common destination before we start building the actual tools. I think my core idea is that beliefs come first, model comes next, and then the tools come last. And when we get that order right, that’s when the scale can become possible.

Summit Learning: Model vs. Technology

Diane Tavenner: Cady, I want to double-click on what you’re saying, because, you know, you talked at the top of this about how Summit Learning had really scaled across the country to 40 states and, you know, 100,000 students, etc. But Dan, you also said the technology, the Summit Learning platform, was not the model. It is not the model. And the model has really taken root even as that particular piece of technology has gone away. That said, I do know that you both believe deeply that having an aligned core technology as the infrastructure that puts up the guardrails (I think that was your word, Dan) and the support for the model is profound. And I know that you’re in conversation with other folks who’ve done Summit Learning, for whom it’s taken root as well, but who are having a hard time really keeping that model intact. And so talk about sort of the need for that infrastructure, the role that it plays, and what you think it might look like in 3.0. And Cady, you just said it: no one’s going to build technological infrastructure for a single school or a single school system.

And so there has to be this coalition.

Cady Ching: We have to create the market.

Diane Tavenner: Yeah. And so talk about that, because the market generally is not very coherent. And as I sit on the other side, it can be really confusing and hard. So talk about how you guys are thinking about that.

Enabling Learning Through AI

Dan Effland: Yeah, I think this is something we’ve started spending more and more of our time on as we’ve gotten clearer in the work with our students and caregivers and educators this fall. We’ve gotten clearer about where we’re going. There is this need, which is that technology is not the model, but, you know, there’s a reason we talk about time, talent, and technology as the big levers with resources. It is a huge enabler. And I think the possibilities with AI as part of that technology infrastructure make it an even stronger enabler. So I’ve already talked about the idea of dynamically reallocating resources, which is something I love getting into with educators, because I think sometimes it’s not, like, the shiniest thing to talk about, but we know that getting kids the right thing at the right time in the right sequence is often the difference between learning and not learning, between progress and not progress, and between finding that pathway and not finding it. And so, at a high level, when we’re thinking about that infrastructure, we need to make sure that, like, we have a really rich, you know, amount of data.

And there’s a lot of work to be done there. Our school systems historically have not put data together in ways where you can create what a technology person would call a data lake, in a way where you can really access it as you need it. And then the next element is going to be a really robust knowledge graph that is not just academic standards; it’s got to be much broader than that. And then, of course, the way that AI would then interact with that to allocate and think about your resources. And I’ll share, too: when we think about resources, I generally think of everything as a resource. My time is a resource, Cady’s time is a resource, our educators’ time is a resource, curriculum is a resource, YouTube is a resource. Anything that can help a young person move towards those outcomes, we think of as a resource, and how can we constantly repackage those and get them in the right order while holding onto the vision? Because I think there’s a version of personalized learning that I would call individualized learning.

That’s not what we’re talking about. I believe this has to happen deeply in community and with really strong relationships and human connection. And so personalized learning is actually more complex when you’re committed to maintaining community and relationships, because you’ve got to figure out configurations of young people, and not just put everybody separately on a computer because they have a particular pathway.

Cady Ching: And that’s what we’re seeing: we’re seeing people just run, sprint towards an outcome without doing the diligence. And I think it’s resulting in a lot of binaries: you’re either tech-forward or you’re human-centered. And there is a way to bring those together and build a model that does both, and that’s what we’re setting out to do.

Dan Effland: Yeah. There’s another binary, too, that we haven’t talked about but should stamp here, which is this binary of, like, real-world readiness or academic foundations. We now have these camps: we’re all about academics, or we’re all about the real world. And when you talk to students and caregivers and educators, no one thinks it should be an either-or. That’s the scarcity mindset we’re often in when we engage educators. And we’re deeply committed that our young people will be prepared with college-ready academic foundations and real-world readiness, which for us means habits of success, communication, collaboration, executive functioning. All of it has a purpose.

Diane Tavenner: Yeah. One is, Dan, as your story of that student showed, a sense of purpose, which is connected to what my life will look like in the future, really is what drives everything for a young person, right? It’s how they’re forming their identity as they build that vision. It’s what motivates them to stick to the hard work every single day on this journey to get where they’re going. And so, yeah, I think what you’re up to is really critical. I hope that a lot of schools and systems engage with you to create this demand in the market for this type of infrastructure, dare we say, you know, Summit Learning Platform 3.0 as well. Because I think it’s really hard to conceive of a post-AI model that doesn’t have that real infrastructure.

And I know you all haven’t seen it or found it yet, but continue to make strides in bringing it to life.

Michael Horn: This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: What will young people need to be inspired and prepared to flourish in the age of AI as individuals, in careers and for civic thriving? Learner Studio is sponsoring this season on AI and education because, in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.com.

So a good place maybe, Diane, to wrap up.

Should we pivot to our before-we-let-you-off-the-hook section? Cady, Dan, we have a tradition here where we talk about something we’ve been reading, writing, watching, listening, whatever it is (not writing, listening to, and eventually I’ll get my verbs correct). We try to keep it outside work, but we often fail. So, Cady, you want to go first, and then Dan, we want to hear what’s been on your playlist or bedside table, and then Diane and I will wrap it up.

Cady Ching: Yeah, sounds great. I have been — well, I taught my 7-year-old what it means to brain rot. I don’t know if you’ve heard that term, but it’s where you just sit on the couch and just kind of watch nothing for hours and hours. And we did do a Spider-Man and Avengers binge this past weekend. So that is something I have been watching a lot of. Reading is going to be hard for me to separate from the professional. I’ve just been really deep in leader succession. I think to do this work, you need really strong talent in the leadership pipeline.

And so I’ve been in HBR. I check the Marshall Memo every week to see what, what they’re pulling out, to really think about how I’m leading personally, locally, individually, but then also what the sector needs. Dan, I’ll pass it to you.

Dan Effland: Similarly, like the kind of first answer on my mind is just this fire hose of like white papers and podcasts about education and AI.

Cady Ching: And then he screenshots them and sends them to the whole team.

Dan Effland: Yeah, I drive everyone nuts with them. But I do have a maybe more fun one on the personal side. I’m finally reading the Foundation series, the Isaac Asimov kind of classic sci-fi. It’s honestly about connection for me. My siblings are sci-fi readers, and I’m very late to the party. And my father is retired now, and one of his, it seems like, main activities as a retiree is to reread everything Asimov ever wrote multiple times. And so for Christmas this year, I got a stack of these really great Half Price Books paperbacks of all the Foundation novels, and I’m starting to work through them.

And we have a text thread about them, and they are, it’s a wonderful story, it’s very complex, and it certainly does also make me think a little bit about the future of our world and AI and, and what, you know, where, where young people fit in that, but it’s also just been a really fun way to connect to the family.

Michael Horn: That’s cool. Wow. What about you, Diane?

Diane Tavenner: Well, picking up on that. So first of all, apparently this is not going to be a novel recommendation, because this Apple TV series, I guess, is the most watched at this point. But we watched Pluribus, which was created by Vince Gilligan, who — yes, Breaking Bad. Yes, Better Call Saul. I didn’t watch either of those, but I was a huge X-Files fan.

Michael Horn: Back in the day.

Diane Tavenner: OK. And so there is very much some X-Files feel here in Pluribus. But to what Dan said, and I think Foundation is related, I just find this series to be so provocative in the questions that it’s bringing up and sort of the contemplation of where we’re going as a society and how the choices we’re making each day might affect that and what we actually want. And I will— I told you I would report back my goal. I did finish Ian McEwan’s novel that I pre-promoted. Yeah, yeah, yeah. But it was everything I expected and more.

It was just extraordinary. And I did both of those over the holiday. And I will tell you, I feel like I’m sort of in surround sound right now, asking these big existential questions along with everything from what’s happening in the news on a day-to-day basis to all the work in AI. But I would highly recommend it. Super provocative and interesting.

Michael Horn: Perfect

Diane Tavenner: Perfect. Crazy. Like, you never know what’s gonna happen next.

Michael Horn: That’s fun when you can’t predict it coming.

Diane Tavenner: Yeah.

Michael Horn: Yeah. Yeah. I was gonna say, so the brain rot theme that you brought up, Cady: I mean, we talk about it all the time with our 11-year-olds here at home. This is not where I was going to go at all with this, but something one of my kids said made me think of the Animaniacs theme song, if you all remember that cartoon from back in the day, and I pulled it up and showed it, and my wife just dismissively said, this was brain rot when we were growing up. So, there you go. The one I’ll say is, we all went with another family and saw Wonder at the American Repertory Theater. Many people may know the book, Wonder, which follows the story of Auggie Pullman, a 10-year-old who has Treacher Collins syndrome, which presents as disfigurement of the face, and sort of what happens when he goes into a school environment for the first time, and all the things that it does. And there’s a movie about it as well, but now there is a musical too.

And Diane, you will not be surprised, I was crying from the opening number and I kept it up through the whole thing. So it was, I was true to form. That’s a good one to cry over. It was good. I represented well, but it was fantastic. We’ll see if it makes the jump from sort of off-off-Broadway to something bigger, but until then, if you’re in the Cambridge area, definitely check it out. And for all of you, just huge thanks, Cady, Dan, for joining us, getting us to have a peek under the cover of what’s coming next at Summit and the broader — as usual, you all are thinking about the broader ecosystem as well, which I admire so much about the work you all do at Summit. It’s not just our model, but how does our model spur this greater change across education.

So huge thanks for joining us. And for all of you listening, keep the questions, comments coming. Diane and I feed off them, and we really appreciate all of you. We’ll see you next time on Class Disrupted.

Disclosure: Diane Tavenner founded Summit Public Schools and served as its CEO from 2003 to 2023.

This episode is sponsored by LearnerStudio.

]]>
Opinion: When It Comes to Developing AI Rules, Who Asked the Students? /article/when-it-comes-to-developing-ai-rules-who-asked-the-students/ Fri, 03 Apr 2026 10:30:00 +0000 /?post_type=article&p=1030620 Three years ago, schools took a side.

Within weeks of ChatGPT’s release, hard rules appeared almost overnight. AI tools were banned throughout departments. Teachers watched what seemed like an existential threat materialize in real time, and they responded the way institutions usually do under pressure: They drew a line and told everyone not to cross it.

Three years later, that line is still there. And at many places, nobody ever asked whether it should be, at least not the people most affected by it.

When I looked into how my Austin, Texas, high school’s AI policy was developed, I found that my administrators made the decision internally. There was no student committee, no open forum, no campuswide survey. The rulebook was simply handed down. In K–12 education, some states now require districts to develop and publish AI policies; yet even when policies are published, they’re often developed without proper consideration of all stakeholders, including students themselves.

It’s reasonable to counter that students are minors, that institutions need coherent governance and that not all decisions can go to a committee. But AI policy isn’t a routine curriculum adjustment. It governs what tools students are allowed to use to think, draft, research and communicate — tools that increasingly shape how knowledge is produced and evaluated outside school. Getting those rules wrong produces consequences for students.

Brittany Carr’s situation is a well-known example. In early 2023, she had three assignments flagged by an AI detector. She provided her revision history and explained her process for writing deeply personal essays about her cancer diagnosis, her depression and her recovery. It wasn’t enough. Fearing that a second accusation could cost her financial aid, she began running every essay through an AI detector herself, rewriting any sentence it marked until her writing voice felt flattened and unfamiliar. By the end of the semester, she left the university.

Carr is not alone. The same NBC News investigation found that students across the country deliberately simplified their vocabulary and avoided complex sentence patterns — not to write better, but to write less like themselves. Creative writing assignments exist to help students find their voice, which they can’t do while writing in fear of an algorithm. Carr’s case shows a student reshaping her writing, and ultimately her education, around a software system she had no role in approving, under a policy she had no voice in developing.

Student involvement would not necessarily have guaranteed a different outcome in Carr’s case. But it might have changed the structure that enabled it. Students could have brought up concerns about relying on automated detectors without corroborating evidence. They could have described how fear of false accusations pushes students toward simpler vocabulary, safer syntax and less intellectual risk. They could have asked what procedural protections exist before a software flag becomes an academic charge.

Instead, at many institutions, enforcement architecture was built first. Conversation came later, if at all.

It doesn’t have to work this way. In Los Altos, California, students did more than sit in on policy meetings — they designed and ran community workshops, facilitated discussions between sixth graders and administrators, and built an AI chatbot to help other districts draft policies.

A recent study found that students overwhelmingly want to be part of decisions about how AI is used in their education — and that many already hold sophisticated views on its risks and potential. The fact that Los Altos made national news tells you how rarely that invitation is extended.

But there is a deeper reason students belong in these conversations: We know something policymakers don’t.

At my high school, I’ve witnessed — and experienced — a secret loop in the learning process: We use large language model tools like ChatGPT and Claude to genuinely improve learning by unraveling concepts, studying for tests and brainstorming ideas.

A few days ago, a student asked a question about a formula in my AP Physics C class — and nobody knew the answer. Another student opened his laptop and asked Claude, and after a few minutes of back-and-forth, we had fully resolved the question, improving everyone’s understanding of how circuits work. I used an LLM to compile notes from my Multivariable Calculus class, which helped me study and earn a near-perfect score on my test. My friend used ChatGPT to learn Java syntax for a project — not to write code, but to understand the language.

A recent survey found that 54% of U.S. teens now use AI chatbots for schoolwork, with the most common uses being research and brainstorming — not copying and pasting answers. But that message hasn’t reached the people writing the rules. This secret loop is completely disregarded by schools, simply because it’s easier to blanket-ban the technology altogether. The generation that grew up with these tools understands their texture in a way no outside committee can replicate.

These AI policies directly affect students’ outcomes and futures. To exclude them from the conversation is simply undemocratic.

If educational institutions are serious about preparing students for democratic citizenship, that commitment must go beyond coursework and into policy-making. The time to invite students into these critical conversations is now. Will schools treat students as subjects of policy, or as participants in it?

]]>
Opinion: We Don’t Let Babies Play With Electricity — Why Are We Letting Them Play With AI? /zero2eight/we-dont-let-babies-play-with-electricity-why-are-we-letting-them-play-with-ai/ Mon, 30 Mar 2026 14:30:00 +0000 /?post_type=zero2eight&p=1030476 AI is newly electrifying every corner of our lives, charging ahead faster than most of us can follow. If adults are barely keeping up with tools like Chat GPT and Claude, how are babies and young children supposed to make sense of a stuffed dinosaur that sings them songs or a plush bear that draws them into conversation?

We are developmental cognitive neuroscientists who study how children’s daily interactions with parents, caregivers, teachers and peers shape brain, language and socioemotional development. We are not anti-AI, but we are extremely concerned about corporate efforts to market AI toys to parents and educators of young children. We do not yet know how many young children are already engaging with generative AI bots, but if market trends are any indicator, this is a rapidly growing market.

Some companies say their toys and devices are “age-appropriate” and will support children’s learning and development, but that’s not always the case. For instance, the makers of Kumma, a plush teddy bear, promised it would build conversational skills for children from ages 3 to 5. But the toy was pulled from the market last year after researchers testing it caught it encouraging unsafe behavior.

Beyond these physical safety risks, we have essentially no data on how interacting with generative AI “friends” will shape very young children’s foundational brain, socioemotional and language development. Rather, the preponderance of evidence about how brain development works in the earliest years of life suggests that families should proceed with caution before letting their littlest children play with these new technologies in the form of toys.

We are not alone in this concern. Together with scientists around the world who study the exquisite, human-to-human interactions that shape early brain and cognitive development, we recently released an open letter about the risks of direct infant-AI interaction.

Decades of scientific studies paint a clear picture of optimal development in the first few years of life. Babies and toddlers grow and learn through daily, moment-to-moment interactions with their close caregivers. Indeed, humans cannot develop fully without these foundational interactions. Present, responsive, real-time interactions shape children’s language, sculpting their growing understanding of new words, grammar, pronunciation and social intentions. 

These real-time interactions shape children emotionally, helping them map their inner experiences to their outer perceptions. There is evidence that when a caregiver and a young child interact, their bodies and brains begin to synchronize, from eye contact to heart rates, oxytocin levels and even brain activity.

Unlike AI models, which can only parrot human-to-human interactions, caregivers pair their words with touch, eye contact and facial expressions that signal their love and attention. Real conversations include inside jokes, local dialects, family lore, and the distinct conversational patterns that make a family a family and a community a community.

Development is about real-time rhythm, and every unique caregiver-child dyad develops their own. It’s not about perfection. It’s about presence, something an AI model will never be able to provide.

In fact, toys that imitate social responsiveness may interfere with an infant’s developing sense of how people relate to one another. The better these toys get at mimicking a parent, a child care provider, a grandparent or other adult caregiver, the more concerned we should be, particularly in the earliest years when infants and toddlers are developing a distinction between self and other  — a growing awareness that the other humans who surround them each have inner worlds of their own. 

From a policy perspective, there are currently few safeguards in place for the youngest children. There is much more to learn about these new technologies before parents let their babies play with them.

Without these policy protections, parents and educators must take the lead, avoiding toys that simulate social reciprocity, replace face-to-face caregiving, or are designed to replace the soothing behaviors that infants and toddlers need from caregivers in order to build attachment, trust and human connection.

The earliest recorded scientific observations of electricity happened more than 2,500 years ago. Today, access to electricity has raised the standard of living for nearly the entire world. Still — after more than a hundred years of widespread use, safety standards and engineering to wield electricity for the common good — no responsible adult would let a child anywhere near it in raw form.

AI has the power to improve human lives, but these are early days. We take for granted that we cover our light sockets to protect all our community’s children. We must take the same protective stance with AI.

]]>
NYC Releases Guidelines for AI in Schools. Some Say it Raises More Questions Than it Answers /article/nyc-releases-guidelines-for-ai-in-schools-some-say-it-raises-more-questions-than-it-answers/ Fri, 27 Mar 2026 14:30:00 +0000 /?post_type=article&p=1030416 This article was originally published in

New York City’s Education Department unveiled its preliminary guidance for artificial intelligence use, offering a rough road map for whether and when to incorporate AI tools in school.

The guidance, released Tuesday, arrives nearly three years after a short-lived ban on ChatGPT. It also comes in the midst of ongoing debates about student privacy, AI’s effect on student learning and development, and the role of private companies in schools. Some schools had charted their own course as they awaited citywide guidance.

Hot-button issues, like how and whether students can use AI for homework assignments, or whether students can use personal AI chatbot accounts in addition to tools approved and supervised by the Education Department, are still being hashed out.

City officials are asking families and educators for feedback, which will inform future versions of the guidance. The Education Department released a survey and will also host webinars and events to answer questions and gather feedback through May 8.

“AI is here, and our responsibility is to put strong systemwide safeguards in place,” schools Chancellor Kamar Samuels wrote in an email to parents.

The early framework is structured around a “traffic light” approach: green light for approved uses, red light for prohibited uses, and yellow light for gray areas that require significant oversight.

For example, brainstorming lesson plans and drafting non-critical communications fall under “green light” cases.

In “yellow light” cases, schools can use AI to find trends in student data, to generate translations for bilingual learners, or to adapt materials for students with disabilities — but a trained professional must first review the outputs before they are used with students.

Using AI to make decisions about students, including grading, development of special education and 504 plans, discipline, counseling and crisis intervention, and other academic placement decisions, is strictly forbidden. These “red light” cases are not expected to change in the final playbook the city aims to release in June.

Pushback has already been fierce among parents and education advocacy groups: A petition asking the city to put a two-year pause on AI use in schools has garnered about 1,500 signatures since October. Several Community Education Councils have also passed resolutions calling for a moratorium on AI in schools.

The guidance was written by the Education Department’s AI Task Force, and informed by the city’s external AI Advisory Council, which includes education technology partners from Google, OpenAI, and other companies hoping to do business with the school system and its roughly 800,000 K-12 students.

Questions remain about student privacy and third-party AI contracts

Before schools can use AI tools in the classroom, each product must go through a data privacy and security vetting process called the Enterprise Request Management Application. The process, created in 2023, applies to all third-party technology vendors.

But AI has become ubiquitous. The Education Department’s contract for Microsoft 365 programs did not originally include AI chatbots, but now does, said Naveed Hasan, a member of the Education Department’s Data Privacy Working Group.

“Just like TikTok was unregulated until school networks blocked it, so are these free AI products,” said Hasan, whose group advised on data privacy policies prior to the AI guidance.

Schools can check the department’s list of vetted tools to see whether a product has already been approved; otherwise, schools must submit an application for a new use.

The process, however, doesn’t yet include guidelines on how to review certain aspects of AI products, such as algorithmic bias or instructional effectiveness. Those are expected to be included in the final June version of the playbook.

The guidelines, which were shaped by federal and local laws, say personal student information can never be entered into unapproved AI tools, and under no circumstances can student information be used to make money or train AI models.

Although the general sentiment about privacy protection is clear, how to ensure it remains protected in every use is a key question that some close to the policy development say remains unfinished.

Hasan said the guidance alone can’t guarantee privacy and relying on third-party products, even approved ones, makes it difficult to know what’s secure and what’s not.

He has called on the Education Department to consider maintaining its own hardware and training its own group of AI experts instead of relying on outside companies.

AI moratorium advocates push back

The Parent Coalition for Student Privacy, one of the groups on the AI moratorium committee, said in a statement Tuesday that the guidance does not address the potential long-term effects of AI use on learning and thinking.

The city has already accepted that AI will be a part of school learning before proving its value and safety for students, said Kelly Clancy, founder of Parents for AI Caution, another group on the committee.

“The city needs to have a burden of proof about why this is good,” Clancy said. “It shouldn’t just be about harm reduction, but rather why AI is better for my kids than a human-centered, traditional classroom.”

Education Department officials said proposals for new, AI-focused schools and programs — like Next Generation Technology, an “AI-focused” high school — must demonstrate how they align with the guidance’s principles.

The full preliminary guidance can be accessed online.

Chalkbeat is a nonprofit news site covering educational change in public schools.

]]>
The AI Behind Flourish Microschools /article/the-ai-behind-flourish-microschools/ Thu, 26 Mar 2026 16:30:00 +0000 /?post_type=article&p=1030396 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

John Danner, the cofounder of Rocketship Public Schools and now the founder of Flourish Schools, an emerging network of AI-native microschools, joined Michael Horn and Diane Tavenner to share what’s newly possible in school design in the age of artificial intelligence. Danner explained how Flourish is leveraging AI to deliver foundational skills like reading and math through conversational tutors, freeing up teachers to focus on building relationships and nurturing students’ passions and “superpowers.”

He also shared how they’re using the technology to provide real-time assessment and feedback on student projects. The conversational models can be much more powerful, he says, than previous edtech applications. 

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you again for our continuing conversations on AI.

Diane Tavenner: You too. This one’s going to be a fun one. You know, our most recent episode, we talked with Alpha School founder Mackenzie Price. Most people have heard of Alpha at this point. It’s getting a ton of attention. And so what we tried to do there was really move beyond the talking points and the marketing to really dig into the model itself, including specifically how they’re using AI, which is turning into a bit of our quest this season. And so this conversation today is a part of that exploration on who’s building what I would call maybe AI-native school models, if anyone. And, you know, what might they look like? What are they starting to look like? And it’s a really fun conversation today because we get to have a chat with an old friend.

Michael Horn: Yes, that is indeed correct, Diane. Today we’re going to get to chat with none other than John Danner. John, for those that don’t know him, has had a decorated career in tech before turning to education, as he co-founded and led NetGravity, the first ad server company, I believe. And after taking it public, selling it to DoubleClick, John went back to school and then became a teacher, and he taught in Nashville for a few years there. And then I think a lot of folks know him because he co-founded, of course, Rocketship Public Schools in 2006, which we, of course, talked about also in our last episode. But Rocketship was a buzzy school for a good while there, marked by its student outcomes, its use of technology, its expansion. And then after leaving Rocketship in 2013, John did a number of other things, including founding an online math tutoring company, creating some very interesting education investment vehicles and more. But I want to skip ahead to his most recent venture, Flourish Schools, which is what we’re going to hear about today.

Michael Horn: So, John, hopefully I did some justice to the bio, but, welcome. It is always good to see you.

John Danner: Thank you, Michael. Great to see both of you. Long time.

Michael Horn: This is going to be fun. This is going to be fun. So let’s start with grounding our audience. My assumption is that a lot of folks know Rocketship and what you did there. Far fewer know about the Flourish Schools model itself and what these schools actually look like. So maybe give us the basics, like what is Flourish Schools, how many of them are there today, how big are they, what’s the grade levels, what does a day in a student’s life look like at these schools? You know, paint the picture for us.

John Danner: Yeah, yeah. So we started Flourish about a year ago. We opened our first school last August in Nashville, one microschool so far. They’re middle schools, so grades 6 through 8. I’m out in Phoenix today. We’re opening a couple more schools in Phoenix next August. And I’d say the reason for doing it, you know, Diane knows this well, like doing schools is quite difficult work.

Enhancing Foundational Learning with AI 

John Danner: I often prefer being on the software side where, you know, life is good. But, you know, schools are hard work and sometimes you have to do them. I think the big motivator in starting Flourish for me was that I had started a couple of AI companies, Project Read, probably the most notable doing reading, which is in a lot of classrooms. And I just noticed that most schools are using AI in a very supplemental way right now, very much the same way they used edtech. And that bothered me because, you know, in reading, for example, I think there’s a pretty good argument that AI for reading is going to be better than the best human reading teacher within the next year or two. It’s not a long way off at all because teaching reading is really hard. Training teachers to teach that is hard. It’s hard to be patient with kids when they’re making lots of mistakes.

And it’s hard to remember everything a kid has ever done when they’re reading with you, right? All of which just is default for AI. So, you know, in watching Project Read roll out and seeing everybody kind of use it, you know, in those last 15 minutes in the class when a kid was done with the assignment and needed to do something else, I was like, you know, that doesn’t seem like how AI should affect schools. It should be used more strategically. You know, what can AI do, and therefore what do you do with teacher time? I think, you know, for me, teacher time has always been kind of the scarce resource. It’s like whatever teachers focus on is really what schools do. No matter what schools talk about, it’s like, OK, what are your teachers doing? That’s what’s going to have the most impact. And so at Flourish, we started with the assumption that what we call foundations, kind of the basic skills of reading, writing and math, are going to be better taught by AI.

The way we kind of look at it is if you think of like Tier 1, Tier 2, Tier 3 instruction, it’s really the move from technology as a Tier 2 or Tier 3 product to a Tier 1. So, you know, can you use AI to do kind of Tier 1 basic skills and standards-based instruction? And so that was what we did from day 1 at Flourish. We’re 6 months into it now. I would say the lesson learned is, of course, you’re going to have students in any school that, like, you know, whatever. We have several special ed, several ELL students who need more time and attention. But during our foundations block, which is an hour long, teachers have time to work with them one-on-one. And a teacher working with a student one-on-one on reading or whatever is like a luxury that no other school has, because you normally can’t have them doing that. But when all the other kids are making great progress with AI, having a teacher spend that time, that luxurious time, is actually possible.

AI’s Impact on Schooling

John Danner: So that’s the fundamental thesis: AI can do that basic instruction, so it’s not what our teachers are spending all their time preparing for and teaching during the day. And that allows us to kind of come up with a new curriculum. And I think actually, you know, you guys want to focus on AI and we should. I think the actual interesting question with schools is once you make the commitment that AI is going to do a lot of this basic instruction, then you’re confronted with the now-what problem, which is like, oh gosh, what’s school for, like, moving forward? And I guess that’s what we’re kind of excited about is we’re in this super serious time of change for students. They’re not going to grow up into the world that we all experienced. You know, my daughter just got out of college. She was a pre-med, but didn’t really want to be a doctor. She gets out in the job market and gosh, there are no jobs.

And like all those other things that she learned along the way about hustle and, you know, you got to go put yourself out there and whatever played out and she found a job. But boy, like if you had just spent all your time in school, like learning algebra or whatever, she wouldn’t have done well. So, I think, you know, our point of view at Flourish is we, we talk about 3 things mainly, relationships. So these are middle schoolers. So how do you get along with other people? And we do an hour we call circles, which is really as kind of therapeutic as it might sound, where kids are sitting in a circle talking about their feelings, how other kids affect them, et cetera. And for many, many of our students, I’d say it’s pretty mind-blowing to actually understand how other people are thinking, you know, as you’re talking and saying things and stuff like that. Really powerful.

So relationships are a big piece. And then we talk about two others, superpowers and passions. So superpowers is kind of our word for what people have called soft skills. I hate the term soft skills because it’s kind of denigrating in a world of like standards-based instruction. Oh, that’s the other stuff that, you know, makes you a human, but it’s not nearly as important as high school chemistry or whatever. Like, we actually think it’s the opposite now that knowledge is pretty abundant and accessible, like the things that make you human are the more important things. So, do you have agency and curiosity and these other things that make you awesome? That’s important. And then the passion side is really, what do you want to do when you grow up? What are you excited about? What are your big interests? Which, you know, as you know, for upper-income families tends to happen at home.

You know, you’re sitting around the table or you go, you know, on a little family field trip or whatever, and kids are discovering lots of different things that they might be excited about. Happens a lot less in working-class and lower-income families. We’re purposefully mixed income. We took a page out of your book for that, Diane. I think that’s really the right way to do this. And so for our kids who are, you know, working class and lower income, we think like discovering what the world is and what you might want to be in is super important, especially in middle school, so that you kind of enter high school with some idea of like what you’re excited about and some kind of path you might want to pursue. Even if that changes, that’s OK, you’re not just kind of clueless showing up in high school, which, you know, a lot of kids are.

Diane Tavenner: Yeah, super helpful, John. You know, one of the ways I’ve been trying to have conversations with people about what these sort of AI-native models will look like or can look like or do look like is I don’t want to have a conversation where we compare what they’re doing compared to like the old industrial model classroom, right, that’s like not useful to me.

John Danner: We’ve had that conversation. Yeah.

Diane Tavenner: So I keep using the sort of Rocketship and Summit because I know them the best of like best-in-class sort of personalized learning models that we were doing the very best we could at the time with the resources we had, and doing a lot of what you just described, right? Like, I’m assuming circles maybe comes out of Valor, which, you know, it has, you know. So like, a lot of that great stuff we were doing before. So what I’m really, and you’ve alluded to this, I think, with shifting Tier 1 instruction out of the classroom model and the AI is doing that. But let’s dig in a little bit deeper. Like, literally, what’s possible today that we just didn’t do 10 years ago and now we can do it? And what does that specifically look like in the model?

John Danner: I think the big change here is really one from point-and-click to conversational, right? Like, that was the eye-opener for me, really, you know, back in the ChatGPT moment, it kind of just immediately became clear that a conversational agent would be able to work through things with a student in a much better way than, you know, kind of what we all did with edtech back in the day. So, you know, we all call it personalization, but there’s kind of a difference between a program more or less knowing where you are and what you need versus what an AI does, which is it knows everything. You know, like in Flourish, we more or less pour everything about a student into it. We have transcripts of everything students say. Like, the AI just is all-knowing about what’s happened with that student at the school. And so when it’s personalizing, it’s at a 100 or 1,000 times deeper level than the basic categorization that edtech used to be able to do. So I think it’s much more aware of what students need. And I just think the mechanism of talking to a student conversationally is so much better than kind of navigating through a bunch of screens and the stuff we used to do.

Diane Tavenner: So I’m assuming then you’re building your own. It sounds like you’re building, you called it curriculum, but like that tier 1, because I have yet to see sort of off-the-shelf products that are really, that I would be like, yeah, they’re great. They can do the tier 1 instruction. Talk about what you’re building, what that looks like for middle school kids, you know.

John Danner: Yeah, right. And remember, we’re 6 months old, so anything I tell you is like total work in progress. But, you know, we’ve got good people and we’re working pretty hard on it. So the, you know, the fundamental idea, so I’ll tell you where we started with this and then kind of where we are now. We kind of had this idea that we’d have an agent on our side that was very good at sending kids to the right place to get the right help, right? So kind of like a hybrid between the old ed tech world and kind of this AI-driven world. And we pretty quickly discovered the kind of things that we had discovered at Rocketship, or I’m sure you did at Summit, which is there’s so much friction and stuff involved in manipulating another program. It’s like basically not worth it. And so that probably took a couple months for us to just realize like this is a waste of time.

Tutoring via Adaptive Dialogue

John Danner: So really, I’ll tell you the way our system works today, and then where we hope to be in 2 months. Today, the way it works is that we have a pre-assessment where we’re looking for what a student knows. Based on what they know, they enter a conversation with our AI. We often will have a 1 or 2 minute video of just what that thing is, an old edtech type thing, because I think a framing is often helpful for a new concept. But the majority of the real instruction is this dialogue between the AI and the student: OK, let’s talk about, say, two-digit addition. Here’s a problem, solve it for me, tell me how you’re doing it. And then basically just digging in as the student doesn’t get it. And it’s so easy to prompt for. At Zeal, my third company, the math tutoring company, we had figured out all the misconceptions that every student has in math. And so when you prompt an AI with that, saying, here are the 10 likely things a student is going to do wrong when they’re doing two-digit math, it just goes, oh, OK, that’s it, and then it goes deep there. So if you think about it, it’s very fluid.

It’s very much what a human tutor would do in that case. They’re responding in real time to what that student’s doing and going, oh geez, you don’t really understand how to carry the tens place, so let’s go deeper there. So that interaction with the AI happens, and then we go out and post-assess. The student is kind of steering where they want to go and what they want to do through that process. Where we’re going, where I hope to be in a couple months, is that all the pre- and post-assessment is gone. We’re finding that the AI, through that dialogue, has just as good an understanding of what that student is capable of doing as any formal assessment process. And it’s much more natural to just have the students sit down with the AI when they start and talk about what they want to work on. And then the AI drills into that and shows them a video and does things like that.

So I think it could feel quite a bit like, you know, a student showing up at a tutoring center and that tutor kind of just working with them. It feels like that’s going to work. But that’s where we’re at with it.
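Danner’s description of seeding the tutor with known misconceptions can be sketched in a few lines of Python. This is purely illustrative: the skill name, the misconception list, and the prompt wording are all invented here, since Flourish’s and Zeal’s actual prompts are not public.

```python
# Hypothetical sketch of misconception-seeded tutoring prompts.
# Every name and string below is invented for illustration.

MISCONCEPTIONS = {
    "two_digit_addition": [
        "Forgets to carry into the tens place",
        "Writes a two-digit column sum in a single column",
        "Aligns numbers by their leftmost digit instead of place value",
    ],
}

def build_tutor_prompt(skill: str) -> str:
    """Assemble a system prompt that primes the model with likely student errors."""
    bullets = "\n".join(f"- {m}" for m in MISCONCEPTIONS[skill])
    return (
        f"You are a patient math tutor working on {skill.replace('_', ' ')}.\n"
        "Pose one problem at a time and ask the student to explain their steps.\n"
        "Students commonly make these mistakes:\n"
        f"{bullets}\n"
        "If the student's explanation matches one of these, dig deeper on that "
        "misconception before moving on."
    )

print(build_tutor_prompt("two_digit_addition"))
```

The point, as Danner describes it, is that the model does the in-the-moment diagnosis; the curriculum work is enumerating the likely mistakes ahead of time.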

Diane Tavenner: Is that voice or are they typing or both?

John Danner: We’re doing typing now. We’d love to do voice. We started there and we really worked hard on it. I would say the biggest problem with voice for us is that we have never figured out the noisy classroom problem. I’m very hopeful that somebody does, because even if you’re off in a corner of a classroom or even outside in the hallway, the AI hears everything. And if you think about it, when you’re in one of these sessions and the AI hears something and somehow inserts that into the conversation, that’s just weird. It kind of ruins the whole flow.

So it’s easier with middle schoolers to do a text-based one right now. But what I’ve told the team is that I think the main interface for AI will probably be audio at some point. It’s just the most natural way. So as the industry builds better and better models for that, I hope this problem gets solved and we can go to audio.

Diane Tavenner: That makes sense to me. And do you then have a knowledge graph underneath that? So even though the student is sort of flowing where it makes sense to them, at the end of the day, you have the macro plan of where you want them to go.

John Danner: And yeah, so we built a super elaborate one for Zeal and unfortunately are more or less rebuilding it now for all of our stuff. Yeah, I think that’s right. I mean, as you guys know, the real challenge with AI is often that it’s so good in the moment at these things, but you kind of have to bring it back to reality sometimes. And so, you know, having a prompt that says, hey, pull the knowledge graph and see what’s the most important thing to work on is helpful. It’s kind of like this, you know, savant type tutor that can help a kid in the moment with anything, but kind of loses the picture of like what’s the most important thing to do. So you kind of have to bring it back.

And I think the knowledge graph is the way to do that.
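The “pull the knowledge graph” prompt step Danner mentions presumably sits on top of something like a prerequisite graph. A minimal sketch, with an invented graph and skill names (nothing here reflects Flourish’s or Zeal’s real data):

```python
# Toy prerequisite graph: pick the highest-priority unmastered skill whose
# prerequisites are all mastered, then hand that to the tutoring prompt so the
# in-the-moment AI stays anchored to the bigger plan. All data is invented.

PREREQS = {  # insertion order doubles as priority order
    "single_digit_addition": [],
    "two_digit_addition": ["single_digit_addition"],
    "two_digit_subtraction": ["two_digit_addition"],
}

def next_skill(mastered: set) -> "str | None":
    """Return the first unmastered skill whose prerequisites are all mastered."""
    for skill, prereqs in PREREQS.items():
        if skill not in mastered and all(p in mastered for p in prereqs):
            return skill
    return None  # everything mastered

print(next_skill({"single_digit_addition"}))
```

This is the “bring it back to reality” mechanism in miniature: the savant tutor handles the moment, and a cheap lookup like this supplies the big picture.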

Diane Tavenner: John, how does this connect with project-based learning? I know you’re very committed to that approach, and you know that I am as well. And it sounds a little bit like what you’re describing. At Summit Learning, we had the playlists where you were doing the content knowledge. What you’re describing, I think, is a stronger version of that, showing what AI can do. How are you connecting it to the projects? What’s the intersection there? And are you using AI in the projects?

John Danner: Yeah, the answer to the second is definitely yes, and let’s talk about that in a second. So we have a theory as a school system that’s probably the opposite of, at least, my alma mater’s. I’ve been talking to teachers at Bellarmine, my alma mater in San Jose, about this. AI is a problem for a lot of schools and teachers, right? The cheating and stuff like that. We have basically the opposite approach, which is: assume any kid can use anything that will help them read, write, understand, and research better, and then uplevel what you’re teaching so that you assume that yes, everybody’s writing is going to be perfect now. Don’t worry about that.

That’s not your job anymore. So with projects, the link really is that when you’re in a project, you’re trying to apply knowledge to build something, to do something. And it’s extremely common to not understand something well enough to do that well, so you need to go off and research and understand it. So here’s the link that doesn’t exist yet, which I’d like to see: Foundations lives in its own block right now at Flourish, but we’d like Foundations to be accessible basically all the time for students, so that it’s the main way you research as well, through an AI interface. That’s the ideal. Right now what happens is that a student struggles, they go off and use Gemini or something. And then the AI knows, because it’s paying attention to the project and what’s going on.

‘Oh, this student struggled with this,’ and then in Foundations that bubbles to the top the next day. But why wait? Just make it real time. If a student’s struggling with something, just go ahead and address it. We do have to figure out the tier 1 versus tier 2 of this. If a student’s really struggling with a real issue and you just wipe out project time dealing with it, that doesn’t feel right either. So we’re going to have to figure out what level of intervention happens if they’re still not getting it. But certainly at least the tier 1, the “oh, I just don’t know about this, let’s learn more,” should happen through that Foundations system, we think.

Diane Tavenner: That makes sense. Yeah, that makes sense to me. Tell me about what the educator is doing in these times.

John Danner: Yeah, I mean, I think that’s the most important thing, really. And I know for many, many teachers, the concern is, gosh, maybe you just don’t need me anymore. And that’s just completely not true. I noticed this at Rocketship: people go into teaching because they love kids. That’s the common thing you always hear. Some people go into teaching because they want to be content experts, but not that many, at least at elementary and middle school. It’s still really driven by “I really want to connect with kids and be with kids,” not “I want to be the best reading teacher.” So when you push a lot of this content knowledge and instruction to AI, what really happens is a little bit of what I was describing with tier 2 and tier 3 during that time: the teacher now has a lot of time. A lot of the stuff is going on. Project-based learning is nice that way.

Building Teacher-Student Connections

John Danner: Kids are working on things, which feels like a big Montessori classroom, where everybody’s being industrious and getting things done. But the question is always, OK, so what’s the best and highest use for the teacher at that point? Our opinion in general is that building trusted relationships is the most important thing you can do as a teacher. Anytime you think about teachers that affected you, it’s because for whatever reason they spent the extra time to get to know you, understand what you were going through, and became a trusted friend and advisor. And I think buying time back to allow teachers to do more of that is by far the highest value. Of course, interventions and things like that are awesome. Having students reach for higher-order thinking once they’ve finished a project, all that’s great. But I think it’s all in service of making that connection between our teacher and our students, such that the student is more excited and interested to learn and think with that teacher about other things, especially superpowers and passions. I’ll add a brief aside: we have these report cards that have superpowers on them, so they say things like organization or self-awareness. You can imagine our parent-teacher conferences are pretty amazing, because a parent is like, yeah, I don’t really know much about middle school math and frankly don’t care that much.

Boy, when you bring up self-awareness or something like that, they can go on for a long time. So you have these really deep discussions about these kinds of things. And kids by middle school, certainly by high school, aren’t really listening to their parents about these things very much; they’re kind of sick of hearing it. So I really do think schools have a way better chance of influencing how children are doing on these things, especially around superpowers and passions. But that requires trust, and trust is hard to build. So we think the best thing for teachers to be doing is getting into deeper conversations with students, talking to them about what their interests are and what they like, and building that in the hope that they have influence over that student’s trajectory.

Michael Horn: Well, John, I think this is actually a perfect transition into the other thing AI is doing to free up teacher time for that, which is, as I understand it from what you’ve written, this AI coach that is quite involved in the project-based learning piece of the equation, and in, I think, two distinct ways. So maybe talk about that.

John Danner: Yeah, again, work in progress, so I’m not super happy with how it’s involved right now, but I’ll tell you what I want it to be doing well. Diane, you lived this: the real challenge with project-based learning is there’s this huge amount of really mechanical stuff that happens, where students are confused about what they’re doing, or they’re tired and not motivated, or whatever. You watch project-based classrooms, and like 80% of the teacher time is walking around doing that stuff: come on, Joey, let’s get going. Of course there will still be some of that, but to what extent can you create a really awesome thought partner that does a lot of those things? Hey, Joey, what we need to focus on here is this. Have you thought about this? Re-engaging the way a good teacher does. Because if you can free teachers of a bunch of that really mechanical time, I think not only does it free time, it also frees your mind up as a teacher to think deeper and look for relationships, the kinds of things we really want teachers to do. So I think that’s a big piece of what we’re hoping this coach does. The other thing it really does for us, and you asked about this before as well, Diane, is it listens. We’ve got mics all over the place, students are talking, it’s all anonymized, but basically the system knows what bucket to throw all the comments that students are making into, etc.

Teaching Soft Skills

John Danner: And when you think about superpowers, these soft skills, one of the difficult things in that kind of curriculum and approach, and you see it in SEL-type schools all the time, is that it sometimes devolves into playtime, where it’s not as rigorous. What AI can really do there is look for evidence of, say, perseverance: when did the student show that they didn’t just stop, that they asked the next question and kept going? When the AI can provide those examples in each student’s superpowers report card and the teacher can review them, that is so helpful when it comes to pushing students to improve in these areas. Teachers really have to know where everybody is: where is John on these different skills, where should I focus? So helping provide data so that teachers can do that is really, really important. And I would say it’s pretty good. Here’s one thing that surprised me. About a month and a half ago, we had the AI assess these; we have 24 of these superpowers across all the students in the school. The AI rated students on a scale of 1 to 5, and then 3 teachers rated those same students.

And it was only off from the lead teacher by about 10%. To me, that’s close enough. A super expert teacher can absolutely do a little bit better, but we want to get it to the point where the teacher says, yeah, I pretty much trust this. I’ll look at the evidence, but more or less, if it says that, OK, what should I do about it?
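One plausible way to read “off by about 10%” is mean absolute disagreement as a fraction of the 1-to-5 scale’s range. The ratings below are made up; the episode doesn’t publish the underlying data, so this is only a sketch of how such a figure could be computed.

```python
# Hypothetical agreement check between AI and lead-teacher ratings on a
# 1-5 scale. "Disagreement" here is mean absolute difference divided by the
# scale range (4), one way a "within about 10%" figure could be computed.

def disagreement(ai, teacher, scale_range=4):
    diffs = [abs(a - t) for a, t in zip(ai, teacher)]
    return sum(diffs) / len(diffs) / scale_range

ai_ratings      = [3, 4, 2, 5, 3, 4]   # invented
teacher_ratings = [3, 4, 3, 5, 3, 3]   # invented
print(f"{disagreement(ai_ratings, teacher_ratings):.0%}")  # prints 8% here
```

Other defensible metrics (exact-match rate, correlation) would give different numbers, which is exactly why the episode’s caveat about reviewing the evidence matters.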

Diane Tavenner: And John, that assessment from the AI was just that natural capture of everything they’re doing. To me, assessment then is a no-brainer. I think it’s a conflict of interest for teachers to be assessing, quite frankly, but that’s another conversation.

John Danner: I mean, the other point here is that when you do assessment that way, I think it’s both more valid and stops taking classroom time. It just happens naturally. And that’s how it happens in the real world too. It’s not like you sit down and...

Michael Horn: Right, we don’t stop and say, now here’s your time.

John Danner: You don’t give somebody a 5-question assessment every 6 months or so. It’s crazy.

Diane Tavenner: Yeah, yeah. So can I just play back what I think you’re saying, to make sure I’m getting a real picture of what’s happening, or what you’re moving toward? You’ve only been at it for 6 months, but you’re making pretty quick progress, it sounds like. So if I’m a student in my project time, and we all know this happens a lot, there are some kids where, literally, the teacher’s bumblebeeing around, and every time the teacher bumblebees over, maybe I’m productive for that moment, but then the teacher bumblebees away, and I’m kind of playing or whatever. But AI knows what I’m doing in those in-between times, so I’m getting some sort of feedback, and the teacher’s seeing it, my family’s maybe seeing it: hey, this is what’s going on in your time, so we’re going to hold the mirror up, give you some feedback, tell you this is the stuff you could be doing to be more productive. Is that kind of what you’re describing? And if so...

John Danner: Yeah, we’re all going to have that. This is another thing we think about a lot at Flourish: is this different from what the real world’s going to be, or the same? And I think we all basically need that. If you had a voice that was going, John, what are you doing? You’ve been doomscrolling. It’d be pretty helpful, really.

Diane Tavenner: Well, one of the big conversations is about motivation, right? Like, oh, you have to motivate kids to use the technology to learn. But actually, I think you’re flipping the script here and saying, no, the technology is literally helping young people be motivated, because someone’s paying attention, noticing what they’re doing, and giving them feedback on it. And you know...

Feedback and Rewards Drive Success

John Danner: The feedback thing is the important thing. Basically, if something’s giving you feedback, even if the feedback’s not perfect, it’s so much better than not getting feedback. You know, the classroom where everybody’s got their hand up and they’re just waiting for the teacher to call on them? That’s a bad place to be. So now you’ve basically got this continuous loop. The other thing I would say comes almost for free in this world is that the gaming world has figured out a lot of things they do when you’re doing a pretty basic task to play the game and you might not be that excited about it: they’re setting up rewards. We use badges. An example is you might do 2 or 3 different projects, and doing those 2 or 3 projects builds up to a badge. So the badge is hanging out there, and some other student in the class got it.

And so you want it, and things like that. Those really basic game mechanics are very helpful at different times during the day. We all need a little bit of a push. We’re very conscious of intrinsic versus extrinsic motivation. Projects are a good example where the default is intrinsic: we want students to be working on that project because they’re interested in it, because they want to do it. But there are definitely times where the AI paying attention and prompting, and even doing some rewarding and things like that, is actually quite helpful for them to persevere.

Diane Tavenner: John, I think you’re the perfect person to talk to about this. One of the things I hear out there a lot is, number one, oh, the hyperscalers are just going to build this. Number two, most schools and school systems have zero ability to actually build what you’re building. You’re sort of this unique person, because you sit at the intersection of opening and operating schools and the ability to build sophisticated technology. So: are the hyperscalers going to build what you’re building? How do you think about the building of the technology here for schools?

John Danner: Yeah, I mean, we’d be pretty happy if the hyperscalers built it, first of all. I think the main challenge over the next 20 years in education is going to be how quickly we move to a world where students are living in the current world as opposed to the one from 20 years ago. With these basic things we’re doing, like Foundations, I think it’s important for students to live in that world now. So what does it take for school systems to move toward that world? Your approach at Summit and our approach at Rocketship in the beginnings of the edtech world were: hey, let’s just build these basic model schools, and hopefully people will come visit and go, oh gosh, that doesn’t look too bad. I could probably do that as well. So I think a lot of the point of Flourish is creating this proof point where people can come and see and go, huh, that actually works well, and it’s definitely not dehumanizing. I see the teacher interactions with the students as being more human than in my classroom. So that’s actually our reason for being: to be that model.

And we’ll build a network and we’ll get as big as we can, but really our purpose is influencing school leaders, district leaders, and state leaders to think about what they could do as well. On the technology side, I’m generally of the opinion that a lot of this will get easier and easier over time for everybody who’s not at the foundation level. I will say there are some exceptions. With Project Read, for example, when you’re doing deeper reading work with phonemes and graphemes, the AIs may know everything at some point, but there’s not a super strong reason for them to get there earlier. So there are pockets like that that will probably stay specialized for longer. But as a school, it’s just better for us the faster all of this becomes a commodity.

And the more we can just get off-the-shelf stuff, the better; there’s no real joy in building all of this. And for the change to happen, we don’t want people to have to think about all this stuff, really.

Diane Tavenner: Now I have to ask about scale, because your point that the faster we can get kids to be living in today’s world versus the old world suggests that we need to scale as quickly as possible to get as many kids there. You and I both bear a lot of scars from different efforts to scale, both brick-and-mortar schools and influence-type efforts. This time you’ve gone with a microschool network. You had grand ambitions with Rocketship, and clearly Rocketship’s great and Preston’s done an amazing job since you left, but it never reached the scale that I think you originally hoped. What is your thinking now? Why microschools?

John Danner: Yeah, putting it bluntly, I think politics more or less killed charter schools. You look at most high-performing charter schools, and they tend to look more and more like the districts that host them. I look at Rocketships around the country, and they sometimes look as much like the district they’re hosted by as they look like Rocketships. Because your authorizer authorizes you, and they have a lot of influence. So it was this cool experiment that at the beginning probably created a lot of innovation, and then over time kept getting pulled back toward what the districts are doing. Microschools, certainly, are starting in a very different place. The way I think about charters is that the compromise happened right at the beginning: we would like to receive public funding, and for that we will fit into the system.

Whereas the microschool movement started from a different point: the stronger position was taken early on, when the laws were formed, that these things are independent. They’re way more like private schools than they are like district schools. Of course there will be some influence from states and others, but nowhere near what we saw in the charter world. The story I always tell is that Rocketship had specialized teachers for math and reading in elementary school, which was not normal at all, and I was just tortured for years by districts over this. The main argument was, no, a student needs one trusted adult at that age, and if they have two, it’s all going to fall apart, which was, of course, total bogusness. But I had to go through it anyway. That was just time of my life spent arguing something silly.

Whereas with microschools, you just don’t have to argue that. So I think the big question is, what will be the ultimate political destiny of microschools? Will they get capped the way charters did? Will they somehow get influenced in a way they aren’t now? Right now they’re pretty great. You basically build a school that parents and students love, and you build the curriculum and the program you want. That’s nice. Something you would have enjoyed, Diane.

Reimagining Teachers’ Roles

Diane Tavenner: Yeah, no, I mean, it’s tempting. I will say Michael’s always so kind, because when we start talking schools, I just take over. So he’s being so patient. The thing that’s coming to me, and maybe this will lead us to wrap up, is that you and I both taught, and were passionate about teaching. As you start talking about politics, one of the sad elements of that politics to me is that I think teachers get involved in blocking some of these changes, a lot out of fear, a lot out of “but my identity is teaching a classroom of students and writing great curriculum and being a hero.” And I think what you’re offering is a new identity for a teacher that might actually be more aligned with why they got into it in the beginning: instead of judging myself by the quality of my classroom instruction, I’m literally focused on every single kid learning and growing and, in your words, flourishing. It’s such a profound...

John Danner: In general, I think professions that go in the direction of being more human, where the human elements are the differentiator, are going to do so much better. I wrote a piece on this. While most parents would not have counseled their kids to become teachers in the last 20 years, I think that conversation is likely to change, because I think it’s going to be both a more enjoyable job and probably more resilient to the whole AI apocalypse than most jobs.

Michael Horn: Agreed.

John Danner: Yeah.

Michael Horn: I think that is a good place to wrap. John, I feel like we have 10 other questions sitting in our doc that we could have dug into with you. This is fascinating. It’s really cool to see what you’re building and to hear both the frustrations and, frankly, the North Star for where it’s going. And one day maybe Massachusetts will have you here; I’ll pray for that. For now, let’s pivot.

This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: What will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers, and for civic thriving? Learner Studio is sponsoring this season on AI in education because, in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.org.

We have this section where we always talk about things we’re reading, watching, and listening to outside of work. People track us on this stuff, and Diane and I occasionally fail. I’m going to fail today. So you can go wherever you want.

John Danner: I’m rereading the Culture series by Iain Banks right now. My brother works for Tesla, and Tesla just, as you probably heard, made this transition where they knocked off the Model S and Model X and are building robots. So he’s building robots right now, which makes it much more personal to me that the future is coming soon. I’ve always been a science fiction reader, but I think one of the cheat codes in Silicon Valley is that the amount of science fiction consumed equals your ability to be comfortable with what’s coming. So yeah, the Culture series.

Michael Horn: Good rec, good rec.

Diane, what’s on your list? You said you’re cheating.

Diane Tavenner: So I’m cheating, I’m failing today. Sorry. Ted Dintersmith has his latest book out and sent it along, and I couldn’t resist. The title is very provocative: Aftermath: The Life-Changing Math That Schools Won’t Teach You. For those who don’t remember, Ted goes hard on the things we’re doing wrong and really tries to bring public awareness to them. And I think lots of us have been concerned about how math is taught, and not taught, for a long time.

So, that’s what this one’s about.

Michael Horn: I have an email from him in my inbox asking me to send my address, so I will do that after this conversation so he can send it to me as well. But I’m also cheating. I’ve been really interested in not just how schools start doing new things, but how they stop doing old things. They are just really bad at it. And it’s not just schools, by the way; all organizations are really bad at deimplementing, or pruning, old things that don’t make sense anymore, whether they’re bad habits or habits that just aren’t fit for the current age. So I’ve started trying to read some of the academic literature to learn about that. There’s a book, Making Room for Impact: A Deimplementation Guide for Educators, by Arran Hamilton, John Hattie, and Dylan Wiliam. I’m just cresting the end of that book right now, and then looking at all the healthcare studies they cite.

And I haven’t decided if I’m going to read those, but that’s where I am right now.

Diane Tavenner: So is it a recommend, Michael, or no?

Michael Horn: I mean, it’s a deep workbook on the topic, is what I would say. If you’re a school and you’re trying to work through this, definitely dive into it. I was more interested in who’s thought about how you de-implement, how you prune, because there’s just not a lot of conversation about it except for educators griping. So I wanted to learn more, and it was a good starting point. Huge thanks, John, again for joining us. We appreciate it. Check out his Substack as well if you want to follow along on the journey. And we’ll watch as Flourish opens two more schools in Arizona in August. Keep up the good work.

We appreciate you. And for all of our listeners, keep the emails and notes coming. We love them, we learn a lot from them, and they inspire our future topics. As always, thanks for joining us on Class Disrupted. We’ll see you next time.

This episode is sponsored by Learner Studio.

AI in Student Assessments: Promise, Potential and Risks (Wed, 18 Mar 2026)

Artificial intelligence is rapidly reshaping how student learning can be measured, moving beyond traditional tests toward more dynamic forms of assessment. From students conversing with virtual characters to demonstrate problem-solving and reasoning, to AI tools that analyze collaboration and learning processes in real time, these approaches promise insight into what students know and can do. At the same time, these innovations raise critical questions for educators, researchers, and policymakers: Can AI-powered assessments adapt to individual learners in ways that are both valid and fair? Will they help close opportunity gaps, or risk reinforcing existing inequities through bias, access barriers, or opaque algorithms? And as AI systems grow more sophisticated, what guardrails are needed to ensure transparency, trust, and responsible use?

In this one-hour webinar, hosted by AERA and The 74, leading education researchers will explore how AI is being used in assessment today, what evidence we have about its effectiveness and what risks demand careful attention. The conversation will balance promise with caution, highlighting both cutting-edge research and the policy and ethical considerations shaping the future of student assessment.

RSVP to watch, or refresh after the webinar to stream.


]]>
AI ‘Slop’ Is Flooding Children’s Media. Parents Should Be Very Alarmed. /zero2eight/ai-slop-is-flooding-childrens-media-parents-should-be-very-alarmed/ Wed, 18 Mar 2026 10:25:00 +0000 /?post_type=zero2eight&p=1029803 This story was co-published with .

Updated March 27, 2026: In response to this story, YouTube terminated six channels for violating the platform’s terms of service and one channel for violating its spam policy.

In a video that has been played almost 50,000 times since it was posted five months ago, two cartoon children sing along as they guide viewers through the experience of riding in a car amid a vividly colored, utopian backdrop. 

At first, the video seems harmless. The song is upbeat and informative. The animation aligns with the promised subject. 

Except, hold on a second, did those lyrics just say, “Red means stop, and green means right”? And why are the characters changing in every frame — different hairstyles and colors, slightly different outfits for the girl and boy? 

Worst of all, for a video that purports to be “educational,” the visuals are sending precisely the wrong message about riding in a car. 

The video opens with the children riding, without seatbelts, in the front row of a moving vehicle. The next scene shows the girl defying physics, floating alongside a moving car, while the boy is seated in what appears to be the hood of the vehicle as it travels backward down a busy street. The third and fourth scenes show the children walking in the middle of the road with moving cars behind them. 

In a video called “Vroom Vroom! Car Ride Song,” the cartoon children sing, “Red means stop, and green means right.” (Screenshot from YouTube)

It’s not hard to imagine how the video could have gotten so many views. 

Maybe a parent needs to complete a task — fold some laundry, get dinner ready, hop in the shower — and is searching for an age-appropriate video on YouTube to entertain their toddler during that short time. Perhaps that toddler, increasingly independent and prone to running off, needs a better grasp of road safety. “Vroom Vroom! Car Ride Song | Educational Nursery Rhyme for Kids” presents itself as a win-win solution. 

But children’s media experts say this is AI-generated “slop,” and that it has infiltrated the internet, preying on young children and their unsuspecting caregivers. 

“We’re at the beginning of a monster problem, and we have to get hold of it quickly,” said Kathy Hirsh-Pasek, a professor of psychology and neuroscience at Temple University and senior fellow at the Brookings Institution who studies child development. 

She and other researchers, including Dr. Dana Suskind, a professor of surgery and pediatrics at the University of Chicago, have argued that AI-derived products for babies and children need to be reined in. 

“This is not neutral content,” said Suskind, the author of a forthcoming book. “I think of this as toddler AI misinformation at an industrial scale. It’s very risky for the developing brain.”

It’s hard to say just how pervasive this type of content is, but it’s clear the problem is widespread and getting worse. One report published by video-editing company Kapwing in November 2025 found that about 21% of YouTube’s feed consists of low-quality, AI-generated videos. 

Jo Jo Funland, the creator of the “Vroom Vroom! Car Ride Song,” has posted more than 10,000 videos since its first release just seven months ago, in August 2025. That’s an average of about 50 new videos each day. One long-established children’s channel, by contrast, has published about 3,900 videos to YouTube in its entire 20 years on the platform. 

YouTube creators who publish AI-generated videos are producing content for children at a breathtaking speed, as seen on the time stamps from Jo Jo Funland’s account. (Screenshot/YouTube)

The cognitive decline associated with the consumption of AI slop — such as a shortened attention span, decreased focus and mental fog — is sometimes referred to as “brainrot.” But when the audience is children, there’s not much to rot, Suskind said. Because a child’s brain is still in its early development, still being built, what you get instead, she said, is “brain stunt.”

“Every experience is building a million new neural connections,” Suskind said of children who are still in their early years. “You will be unintentionally wiring the brain in incorrect ways.”

This is not neutral content. . . I think of this as toddler AI misinformation at an industrial scale. It’s very risky for the developing brain.

Dr. Dana Suskind, Professor of surgery and pediatrics at the University of Chicago

That comes at a cost. A child may absorb the implicit messages of something like the Vroom Vroom video and end up mimicking the “downright dangerous” behaviors they saw depicted there, said Carla Engelbrecht, who has created digital experiences for children’s media brands such as Sesame Street, PBS Kids and Highlights for Children and considers herself an AI educator and creator.

Engelbrecht has also been vocal when it comes to child-targeted AI slop. She has found countless examples of AI-generated videos that could cause real physical harm.

“The more content I find,” she said, “the more horrified I get.”

They include videos of a child being chased by a T-Rex; a crawling baby biting into an apple that appears bloody, swallowing whole grapes (a major choking hazard) and eating honey (which carries the potentially fatal risk of botulism); and a child eating raw elderberries (which are toxic when uncooked).

In a video called “Dinosaur at the Window,” a T-Rex scares a small child. (Screenshot from YouTube)

But there’s another category of AI slop in kids’ media, she said, with consequences that are more difficult to capture. These videos claim to pertain to learning and development, focusing on topics like literacy and numeracy, but due to the speed with which they are produced and the lack of quality checks, they end up introducing or reinforcing the wrong lessons. And sometimes, the errors don’t come until midway through the content. That means if a parent previews the first few seconds of a video, they may miss the unreliable information that appears later in the clip.

A video about vowels includes visuals of consonants. It also depicts letters on screen that don’t align with the audio overlay. A video promising to teach about the 50 U.S. states sings along as butchered state names appear in text at the bottom of the screen — Ribio Island, Conmecticut, Oklolodia, Louggisslia. A video about the seven continents frequently shows a compass with more than four points and indecipherable symbols where the “N,” “S,” “E” and “W” should be.

In a video called “50 States Song for Kids,” the voiceover sings, “Alabama warm, Louisiana jazz,” while the subtitles read, “Alaboama warm, Louggisslia jazz.” (Screenshot from YouTube)

These may seem like silly slips from a machine, but for a child, every “input” is part of their learning process, Engelbrecht explained. “Mixed signals means you are delaying them learning the cause and effect of a thing,” she said. “If you learn that red is blue and blue is red, that’s a delay.”

“If you’re inconsistent, it takes that much longer to learn,” she added. “Every delay they have means everything else gets pushed back. That’s taking their executive function offline to go learn nonsense.”

Amid all of this internet muck, the question of responsibility is a tricky one.

“Fundamentally, everybody has a responsibility,” Engelbrecht said, including platforms like YouTube; companies that operate large-language models, like OpenAI, Google and Anthropic; the people creating and publishing these poor-quality videos intended to reach kids; and parents. 

YouTube’s current policy requires creators to disclose videos that have been generated by or altered with AI when that content “seems realistic.” This does not apply to cartoons and animation — which seem to be the majority of what’s reaching children — because such content has long been assumed to be fictional, Engelbrecht explained. 

The platform does have stricter “quality principles” for content targeting children than it does for its general viewership, said Boot Bullwinkle, a YouTube spokesperson, in a statement. It also maintains related guidance for creators of kids’ content. (These web pages, however, do not specifically address the use of AI.)

Due to the volume of content on the platform, YouTube does not catch every video that violates its policies. (It did take action against at least seven channels on the platform in response to The 74’s reporting, including terminating two.) 

“The trust that parents and families put in YouTube is a responsibility we take very seriously, and we’ve invested deeply in age-appropriate environments that empower parents,” Bullwinkle wrote in the statement. “YouTube Kids, for instance, offers industry-leading parental controls and rigorous protections designed to provide a safer experience for families.”

YouTube Kids is a distinct version of the platform with content that has been curated for children from birth to 12. Many families continue to use the main YouTube platform to view children’s content, though, which means many creators still have an audience and earning opportunities there. None of the AI-generated videos reviewed for this story were found on YouTube Kids, although recent reporting in The New York Times found AI videos had penetrated that space as well.

Sierra Boone, executive producer of Boone Productions, a children’s media production company that makes original content for children ages 2 to 6, noted that kid-friendly competitors to YouTube, including one from Common Sense Media, do exist. But they have struggled to break through to families. 

“Overcoming that juggernaut is extremely difficult,” Engelbrecht said of YouTube. “There’s a graveyard full of failed attempts to create a safe YouTube alternative.”

Boone suggested that some effective labeling would go a long way, not unlike the AI-content labels LinkedIn is phasing in, which aim to disclose when media has been created or edited by AI, in part or in whole. 

Engelbrecht thinks labels are a good idea, not least because they would be important for AI literacy, but she also believes they would penalize creators like her who use AI “thoughtfully” in their work. (She is developing, among other projects, an AI tool that detects AI slop in children’s videos on YouTube.)

As for who’s behind the videos, some of it originates overseas, but plenty is home-grown, created by Americans with access to phones or computers who are just trying to “make a quick buck,” as Boone put it. 

These people are often using AI at every step of the process — to develop themes and scripts for children’s videos, to generate the videos, and to automate the process of publishing the content regularly on “faceless channels,” in which the creator is anonymous and has no on-camera presence, Engelbrecht explained.

A little over a year ago, a popular content creator posted a video to YouTube in which she raves about a “huge opportunity” that would lead to “many millionaires.” The opportunity? AI-generated animated videos that inexperienced users could create with a simple prompt in just minutes. The target audience? Young children. 

That video has been viewed more than 335,000 times. 

“AI in general isn’t inherently good or bad, but it exposes people’s intentions,” said Boone, whose production studio is responsible for The Naptime Show. 

The flood of AI-generated content, she added, reveals how many people have “no regard for children or how they’re impacted,” as long as it benefits them. 

In a video called “Learn ABCs at Breakfast,” a small baby eats a fistful of whole grapes, which are a major choking hazard for infants. (Screenshot from YouTube)

For Boone, who works painstakingly with her team on every episode of The Naptime Show — researching, writing the script, editing the script, placing props, doing table reads, going to set, filming, editing the video, publishing and promoting the final product — creating children’s media is an “honor” that should be taken seriously. 

“The very foundation of creating children’s media is you are creating something that a child, in their core developmental years, is going to be consuming,” Boone said. “So what is the level of intention that you’re bringing to that? I think we need to be holding the people who are uploading this content more accountable.”

Ultimately, though, in the absence of more regulation or content moderation, the burden falls on parents. 

Parents are likely putting YouTube videos in front of their children in the first place because “they are already so stretched,” said Suskind, who still sees patients in her pediatric practice and interacts with families often. So it’s inherently challenging to ask them to more closely monitor the content that is coming through their children’s screens. 

Yet that is what must be done, Hirsh-Pasek said. Until a better solution emerges, the onus is on parents to separate the slop from “the good stuff.”

“We owe it to our kids to protect them,” said Hirsh-Pasek. “That’s what they look to parents for, to keep them in safe spaces. If we don’t deal with that or do anything about that, we’ve absconded [from] our responsibility.”

]]>
Proposal for NYC AI-Focused Public High School Sparks Pushback /article/proposal-for-nyc-ai-focused-public-high-school-sparks-pushback/ Mon, 16 Mar 2026 18:30:00 +0000 /?post_type=article&p=1029829 This article was originally published in Chalkbeat.

New York City students with a passion for STEM — and an interest in artificial intelligence — may soon have a high school dedicated to training “the next generation of technology professionals.”

But families in Manhattan’s District 2 are pushing back against the proposal for Next Generation Technology High School, a new screened admissions high school that would take the place of the tiny, girls-only Urban Assembly School of Business for Young Women. Next Generation would be the first city public school to focus its curriculum on AI and computer science.

As details of the two proposals emerged over the last month, so have dual tensions: what should fill the space left by Business for Young Women, and how private technology companies and their artificial intelligence products could shape the curriculum at Next Generation.

Much of the opposition to Next Generation has come from families at Lower Manhattan Community School, a middle school also in the Broadway building. Parents at the school, known as LMC, have called on the department for years to expand enrollment from grades 6-8 up to grade 12.

The Panel for Educational Policy, the board that votes on new schools and closures, is expected to consider the proposals for Next Generation and Business for Young Women at its April 29 meeting.

The Education Department released both proposals on March 6, the day after the city’s eighth graders received their high school acceptance offers. If approved, Next Generation would welcome its first class of ninth graders in the fall. (The plan to close Business for Young Women in June is not contingent on Next Generation’s approval.)

Despite not having the green light yet, Next Generation has already held three virtual open houses. Its website states the school is “set to open” in fall 2026, noting that applications would open March 19.

Parents ask: ‘Why this school and why here?’

Manhattan High Schools Superintendent Gary Beidleman introduced the idea for Next Generation Technology High School at a Feb. 25 District 2 meeting.

Panel for Educational Policy members and families of the three co-located schools at 26 Broadway — in addition to LMC and Business for Young Women, Richard R. Green High School of Teaching shares the building — said that meeting was the first time the district school community had been notified of the proposed STEM- and technology-focused screened high school.

At the Feb. 25 announcement, Beidleman said Next Generation grew out of an experience he had in summer 2024, and that Google and OpenAI are part of the planning team for the school. One of the school’s goals, he said, is to “expand pathways connected to high-growth technology careers” and provide advanced STEM and technology programming for NYC students. Next Generation also plans to offer a summer internship program with Carnegie Mellon University.

Caleb Haraguchi-Combs, founding principal and project director of Next Generation High School, said in an information session that the school would utilize coursework from Google Skills. How much of this AI-powered, AI-focused Google coursework would make up the curriculum is still in flux, according to the proposal.

The school’s academic description includes language similar or identical to that found on the Google Skills website: Next Generation’s “special access to technology industry mentors,” “technology certifications,” and “curriculum that adapts to the dynamic changes in the technology field” are offerings advertised on the homepage of the Google Skills site.

Officials and families question new school proposal process

The community and Panel for Educational Policy members have asked questions about the fast proposal process, speaking to uncertainty around admissions for the coming school year.

Parents wrote in a letter to the Panel for Educational Policy that the proposal seemingly came out of nowhere, and that families were not provided adequate engagement opportunities before its release. Panel Chair Greg Faulkner said he has received hundreds of similar letters from parents since the community learned of the incoming proposal in late February.

High school offers were released March 5, ahead of the panel’s vote and months before the proposed school would open. It remains unclear how the Education Department would handle screening requirements — such as interviews or assessments — after the main admissions cycle has concluded. The Office of District Planning did not respond to questions about how enrollment would work for this fall.

A petition in support of the school, created by Next Generation’s founding principal and program director on March 8, had under 100 signatures at the time of publishing.

A public hearing is scheduled for April 14, two weeks before the panel’s vote.

“I would love more transparency around why the department chooses certain schools to go in certain places,” said Sarah Calderon, a parent at Lower Manhattan Community School. “When we asked the superintendent, ‘Why this school and why here?’ he said he had no data on district demand.”

Beidleman told parents at the Feb. 25 District 2 meeting that expanding Lower Manhattan Community “was not an idea that was on the table.”

The Education Department receives many proposals each year, including some from outside New York City, said Sean Rux of the Office of New School Development.

“This was the proposal that spoke to us,” Rux said.

Families push to expand Lower Manhattan Community School

The plan to close the underenrolled Business for Young Women school has been percolating for a few years — with just 91 students this year, it’s the smallest district high school in the city, said Education Department officials.

Families at Lower Manhattan Community School say they have pushed for years to expand into a 6–12 model, and would like to move into the space used by Business for Young Women, if closed.

“A proposal to expand LMC could potentially open up sixth grade admissions to applicants citywide, but we have not been given the opportunity to even submit a proposal,” said Anne Hager, a parent of a sixth grader at Lower Manhattan Community School.

At a PTA meeting with Education Department staff on Wednesday, LMC’s Student Leadership Team presented its case to expand the school instead of opening Next Generation.

A new 6-12 school would eliminate the need for LMC students to go through a second, onerous application process, something that students with disabilities would especially benefit from, they said. The presentation also cited Department of Education data from 2024 showing that 6-12 schools have nearly three times the demand of their 6-8 middle school counterparts.


The department’s proposal focuses largely on space at the Broadway campus, estimating that Next Generation would serve roughly 450 students by its fourth year. All three schools can comfortably co-locate, according to the proposal, though its capacity calculations do not account for significant expansion of either Richard R. Green High School or LMC.

Debate over AI timing and oversight

Next Generation’s proposal arrives amid ongoing debate over artificial intelligence in schools.

The school initially marketed itself in information sessions and on social media as an “AI school,” though DOE officials later clarified that students would learn about artificial intelligence rather than be taught by it.

“Students need to be creators, not consumers, of technology,” Beidleman said at the Feb. 25 meeting. “Lessons learned from the past show us that new tech in place creates an opportunity.”

Some parents have argued that broad use of an AI platform in public schools should not be allowed before comprehensive guidelines have been released by the city.

Greg Faulkner, who chairs the Panel for Educational Policy, said he first learned of the proposal last month. Since then, the panel has received hundreds of letters from parents opposing the plan and raising concerns about the lack of community engagement so far.

“I have two major hesitations with this: We don’t know what kind of AI involvement there will be. The development team has not provided a playbook for how that will look,” Faulkner said. “And in reading the response letters from District 2 parents, I see that proper engagement and process was not done.”

At a District 2 town hall on March 5, Chancellor Kamar Samuels said the Education Department expects to release AI guidance in the coming weeks and will provide a 45-day window for community feedback once it’s published.

Five Community Education Councils have passed resolutions calling for a two-year moratorium on artificial intelligence use in schools. But calls for broad AI guidelines implemented at the city level are nothing new; the city faced scrutiny over an AI-powered reading program in 2024 after former Comptroller Brad Lander called for a citywide playbook.

“I think the question of teacher capacity and teacher shortages, the research on kids and AI, is still nascent, and the DOE’s lack of its own AI policy leads me to question the timing of any AI school,” said Calderon, the parent at Lower Manhattan Community.

Chalkbeat is a nonprofit news site covering educational change in public schools. This story was originally published by Chalkbeat. Sign up for their newsletters at .

]]>
AllHere Set Meeting With LAUSD Leaders Months Before Landing $6.2M Chatbot Deal /article/allhere-set-meeting-with-lausd-leaders-months-before-landing-6-2m-chatbot-deal/ Wed, 11 Mar 2026 12:30:00 +0000 /?post_type=article&p=1029653 This story was reported by Mark Keierleber and written by Kathy Moore

Months before the Los Angeles school board approved a $6.2 million contract with AllHere, an AI chatbot maker that is now being investigated by the FBI, top district leaders were invited to a meeting with its CEO and a consultant, who is a close friend and associate of schools Superintendent Alberto Carvalho.

The Jan. 18, 2023, calendar invite for the gathering at the district’s downtown headquarters, billed as “AllHere Meeting,” was shared with The 74 by a former central office staffer, who asked to remain anonymous for fear of retribution. 

The AllHere contract in question is widely believed to be connected to the high-profile raids on Carvalho’s home and district office in late February. 

The 74 has not received confirmation on whether the meeting took place or what specifically may have been discussed, but the invite suggests district administrators were consulting with AllHere principals five months before the contract was voted on.

It also calls into question public statements by Carvalho, who was placed on paid leave Feb. 27, that he had nothing to do with the deal. He said the education technology venture represented by his longtime friend and business associate Debra Kerr won the job based on legally mandated bidding. Kerr called the Jan. 18 meeting.

AllHere filed for bankruptcy in September 2024 and its founder and CEO, Joanna Smith-Griffin, was later arrested on charges of identity theft and defrauding investors.

The 74 filed extensive public record requests with Los Angeles Unified School District in September 2024 for documents related to the AI chatbot contract, including all proposals, bids or submissions made by AllHere and any other companies vying for the work. The request also asked for documents detailing how the district evaluated AllHere’s qualifications and determined that the small Boston-based firm with little to no artificial intelligence experience was capable of carrying out the contract.

On Feb. 11, 17 months after those requests were filed and two weeks before the FBI raids, a senior paralegal sent The 74 an email asking if we still wanted the documents.

Through his attorneys and a spokesperson, Carvalho has maintained his innocence since the FBI probe exploded into public view. The Los Angeles Times reported that he denied any wrongdoing, pointed out that “no evidence has been presented by prosecutors supporting any allegation that (he) violated federal law” and pressed to return to his job.

“Mr. Carvalho remains confident that the evidence will ultimately demonstrate that he acted appropriately and in the best interests of students,” said the statement that was issued through the spokesperson and the law firm of Holland & Knight, according to the Times. “We hope the school board reinstates him promptly to his position as superintendent.”

Kate Brody, the vice president of communications for , a 2,000-member LAUSD parent and educator advocacy group, sees the moment differently. Her group has called for an audit of all the education technology contracts entered into under Carvalho, saying they lack independent research into their efficacy and now is “the time to peel this whole thing back and take a look, not just at what’s going on with AllHere, but the inappropriate amount of access that all these companies have.”

“The evidence is increasingly clear that this technology is not really for the benefit of the students,” she told The 74. “Our big question has been for a long time — whose benefit is it for?”

Carvalho has not been accused of any wrongdoing and authorities have not provided details about the investigation. 

In a statement after the Board of Education placed Carvalho on paid leave and named an acting superintendent, the district said that while it understood “the need for information, we cannot discuss the specifics of this matter pending investigation.”

Kerr could not be reached for comment and attorneys for Smith-Griffin did not respond to requests for comment. District spokesperson Britt Vaughan could not be reached for comment.

Kerr and Carvalho

Federal agents also searched Kerr’s home. Her ties to Carvalho go back to his days leading the Miami-Dade County Public Schools, a period of his prominent career that is also now reportedly under investigation. According to reports, grand jury subpoenas have been issued seeking records from the district’s inspector general and a fundraising foundation overseen by Carvalho while he was the Miami schools chief.

Kerr was a key player in executing the failed contract between AllHere and the nation’s second-largest school district. In addition to being in a position to call senior staff to a meeting at district headquarters, according to the calendar invite, Kerr has a son, Richard, a former AllHere account manager who began working for the company in 2022, who told The 74 in September 2024 that he pitched AllHere to LAUSD school leaders.

Among The 74’s long-unanswered public records requests were any conflict of interest disclosure forms filed by AllHere, its employees, third parties involved in the contract or LAUSD personnel.

The location listed on Kerr’s hourlong invite to discuss AllHere was the office of LAUSD’s longtime chief spokesperson Shannon Haber, who has since retired. Other invitees included senior advisor of communication Bích Ngọc Cao, senior director of engagement and partnerships Antonio Plascencia Jr. and director of development and civic engagement Sara Mooney.

Mooney is also the former executive director of the district’s separate fundraising arm, whose board includes Carvalho. Attempts to reach Haber and the other meeting invitees, which also included Vaughan, the district spokesperson, and marketing director Lourdes Valentine, were unsuccessful.

Los Angeles schools Superintendent Alberto Carvalho appears in a photograph with Debra Kerr, which the education technology salesperson later posted on LinkedIn. (Screenshot)

Earlier calendar entries shared with The 74 show Carvalho had an hourlong meeting scheduled with Kerr and someone identified only as “SN” on Oct. 21, 2022, about eight months after he took the $440,000-a-year job in Los Angeles. The meeting was scheduled for 12:30 p.m. at a place “to be determined.”

In 2022, Kerr was busy consulting for and promoting AllHere in multiple Florida cities. She also did consulting work for Rethink Ed, a New York-based company that provides social-emotional and wellness resources. In May 2020, in the midst of the COVID-19 pandemic and the national school shutdowns, Miami-Dade awarded Rethink Ed a contract to support students with autism and other related disabilities during remote learning. 

“We appreciate partners like Rethink Ed which assist us in empowering these very deserving students with a variety of innovative and helpful tools to successfully engage in distance learning,” Carvalho said in a statement when the Miami-Dade contract was announced.

Roughly two years later, when Carvalho was leading LAUSD, the firm won business from his new district as well.

Other calendar entries shared with The 74 show that right before the scheduled meeting with Kerr that October Friday, Carvalho had back-to-back interviews lined up with reporters from The Wall Street Journal and Politico. Later that day, he was scheduled to attend a retirement dinner for Michael Hinojosa, the former Dallas schools superintendent, at the Ravello restaurant at the Four Seasons in Buena Vista Lake, Florida, near Orlando.

Two days before Carvalho was due back in Florida for that celebration, the Miami-Dade school board awarded AllHere a $1.89 million contract to provide text-messaging support to students struggling with attendance, academics and social-emotional issues. The SMS tool was a precursor to its AI-powered chatbot. 

Carvalho told the Los Angeles Times he had no role in AllHere getting the three-year deal in Miami, although the newspaper reported that the bidding process began while he was still in charge. 

Former CEO Joanna Smith-Griffin with students from Florida’s Hillsborough County and Pinellas County public schools at a 2022 AllHere-sponsored event on improving high school graduation rates. (Facebook.com/leadershipmax)

Two years later, in November 2024, the district would move to bar AllHere from doing business with Miami-Dade schools for a period of three years after the ed tech company abandoned its contract.

The 74 filed public records requests on Sept. 13, 2024, asking for copies of all of Carvalho’s daily calendars going back to his first date of employment at LAUSD. The district has yet to produce them.

AllHere then gone

Also invited to the Jan. 18, 2023, meeting set up by Kerr was AllHere’s Smith-Griffin, who six months after landing the L.A. schools deal was charged with defrauding investors of nearly $10 million.

Her case, which involves allegations of securities and wire fraud and aggravated identity theft, is being heard in U.S. District Court in Manhattan. The Harvard graduate and former middle school math teacher  pleaded not guilty in December 2024. Conferences on her case were postponed three separate times in 2025 to allow the parties time to work on a possible disposition. The last was a 60-day adjournment on Sept. 25, 2025, and there’s been no activity in the file since then.

By the time Smith-Griffin was arrested at her home in Raleigh, North Carolina, in November 2024, the company she founded in 2016 had been forced into bankruptcy, unable to pay its debts, including a disputed $630,000 commission claimed by its largest creditor: Kerr.

Carvalho and Smith-Griffin spent considerable time together in the spring of 2024, appearing at multiple ed tech conferences touting “Ed,” their sunny chatbot that was seen as catapulting LAUSD into the K-12 AI vanguard. They said communicating with Ed would provide an unprecedented level of support, accelerating learning and strengthening well-being for students and families, many of whom were still struggling from the pandemic. 

“He’s going to talk to you in 100 different languages, he’s going to connect with you, he’s going to fall in love with you,” Carvalho raved at the April 2024 ASU+GSV conference in San Diego. “Hopefully you’ll love it, and in the process we are transforming a school system of 540,000 students into 540,000 ‘schools of one’ through absolute personalization and individualization.”

None of that materialized for the district, whose enrollment has since declined.

After AllHere shuttered and a former company manager-turned-whistleblower told The 74 that students’ private data was not properly protected in the push to launch Ed, Carvalho vowed to investigate. He promised a task force of outside experts who would dig into what went wrong with the AllHere contract and determine how the district could strengthen its bidding process to avoid future debacles.

Carvalho told the Los Angeles Times in July 2024 that he expected the task force to begin its work soon. Some 19 months later, there’s been no further news, and no task force findings have been shared. The district’s independent inspector general’s office launched its own investigation around the same time. 

However, the office’s 2024 and 2025 reports to the Board of Education make no mention of AllHere. In 2024, the IG opened a total of 62 cases, closed 54 and identified nearly $2.5 million in waste. In 2025, it opened 38 cases and closed 43, including some from previous years, though none appear to have involved AllHere. No financial waste was identified in 2025. 

Inspector General Sue Stengel left the post at the end of 2025 after three years. The office did not respond to a request for comment. 

Equally elusive is what happened to Ed, or to the underlying tech tool, for which LAUSD paid AllHere $3 million out of its $6.2 million contract. School officials have said the district was not financially harmed in the contractual fallout and that it received the services and products it spent several million dollars to acquire, but that claim is difficult to substantiate.

Los Angeles Unified Supt. Alberto Carvalho, left, waits to be called on stage during the official launch of Ed, a new district-developed Artificial Intelligence-assisted “learning acceleration web-based platform that will boost student success and revolutionize how K-12 education is tailored to meet individual needs,” at Edward R. Roybal Learning Center in Los Angeles on March 20, 2024. (Christina House / Los Angeles Times via Getty Images)

When Carvalho unveiled Ed at a major March 20, 2024, celebration attended by Gov. Gavin Newsom and L.A. Mayor Karen Bass, he said the chatbot would be in 101 elementary, middle and high schools as part of a pilot program. By the fall, Ed was supposed to go districtwide.

Much later, it was reported that the group of Ed testers had been reduced to “a small number of schools (that) tried it out, each with a sample of students and parents.” In July 2024, after the district “unplugged” Ed in the wake of AllHere’s demise, reporters found that it was “hard to find students, teachers or other staff who have used any part of the system since its official launch.” 

Absent human interactions with Ed, the district has been slow to produce documentation from AllHere of services rendered. Among the public records sought by The 74 in September 2024, which LAUSD now appears ready to provide, are “purchase orders, invoices, and payments records related to any and all goods and/or services provided by AllHere.” 

Staff reporter Amanda Geduld contributed to this report.

How Alpha School Uses AI to Rethink the Education Experience /article/how-alpha-uses-ai-to-rethink-the-school-experience/ Fri, 06 Mar 2026 13:30:00 +0000 /?post_type=article&p=1029467 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing wherever you get your podcasts.

The private, AI-powered Alpha School has quickly generated attention in the education world and beyond. The school has been featured in dozens of articles and dissected across countless podcasts for what its leaders call the “two-hour learning” model.

On this episode of Class Disrupted, MacKenzie Price, co-founder of the Alpha School, joins Michael Horn and Diane Tavenner not to explain Alpha School’s model, but instead to dive deep into how the school is leveraging artificial intelligence to radically rethink the school experience. Price focuses on how AI itself is being leveraged at Alpha — from the core academic blocks to afternoons spent on real-world projects and life skills development. What’s possible now in school design, thanks to AI, that wasn’t possible a decade earlier? 

Listen to the episode below. A full transcript follows.

Chris Hein: So when the school shut down and went to remote learning, we were really fascinated by how quickly our kids adjusted to e-learning and how hard of a time the teachers seemed to have with just the basic tools and systems and then how to translate their curriculum to a digital format. But the thing that really jumped out at me was my wife and I were having conversations with our kids every day saying, hey, what are you doing?

Why are you guys playing video games? Or why do you, like, want to go outside and play? It’s midway through the day and they’re like, we’ve already done our work. And we were like, that can’t be right. And so we double checked their assignments and their tests and where they’re at. And it was like, no, they got all their work done in a couple hours. And then it really made Teresa and I question, why does it take them eight hours a day at school if the school is teaching them the same content and administering the same number of tests and they’re able to get through it in a few hours?

Michael Horn: That was June 2020, and Diane and I were broadcasting during the height of the pandemic, and we were hoping that parents would realize that schools could be rethought dramatically, including by helping people realize that what we tend to think of as, quote, the academics could be done in much, much less time than the six-plus hours that kids spend in traditional schools. Five years later, and thanks to a startup school network, Alpha School, the two-hour message finally seems to be spreading like wildfire. So with that as a prelude, Diane, first, it is great to see you as always.

Diane Tavenner: It’s good to see you too, Michael. I’m a little disoriented by us changing up our normal intro. But in a good way, change is always good. That take from season one is honestly priceless. It’s taken us a bit longer than we had hoped, but we do seem to be getting some momentum towards some of the big opportunities that we saw in education back then and still are hopeful for now.

Michael Horn: Yeah, no, I think that’s right. And I’m glad you’re accommodating my whims on changing the format up on you today. But I am particularly excited because we have on our show today MacKenzie Price. She’s of course one of the co-founders of Alpha School, and MacKenzie’s been on my Future of Education podcast and Substack before and we actually both have Substacks named the Future of Education. We independently named them, so we’re vibing already. But MacKenzie, it’s great to see you again, welcome.

Scaling Education with Technology

MacKenzie Price: Well, thanks for having me. And, you know, it’s so interesting that you tell that story about the way, you know, education was done during COVID. And we were pretty lucky because we’d started Alpha School back in 2014. So when the pandemic hit, you know, it happened to be during spring break. So the kids who hadn’t brought their laptops home came and picked them up at school. And we really had a very smooth rest of the school year because the kids already were doing their learning on the computers. And then we just said, you know, afternoons, we’ll just, we’ll, we’ll call it, you know, do whatever you want at home. But what’s interesting is a couple years ago or in 2022, when we really launched our learning platform with the advent of generative AI, we realized, okay, we can actually scale this. We can go beyond just, you know, a local school that’s doing a reasonable job of educating kids, and we can, we can scale it bigger.

And we were originally talking about the idea of 2x learning. You know, you can learn twice as much, you can learn twice as much. And even our own families were like, we don’t, we don’t care. Like, why does my kid need to learn twice as much? It’s not a big deal. And we, we’d have like, parent conferences where we’d be saying, hey, if, if your son, you know, hits his, his goals, he can be learning twice as much. And they didn’t care. And then we had this unlock idea of let’s call it two hour learning and say, hey, if your son hits his goals, he can be out of here in two hours and freed up to go do the rest of the things, you know, that he wants to do during the day. And suddenly the parents are like, Johnny, come on, get with it.

Let’s hit our goals. And it was that mind shift of, you know, let’s get your academics done in two hours. And as a side note, you’ll learn twice as much, but let’s do that for two hours. And then one of the code names we actually had for our learning platform was “Time Back.” And we went through a whole process in the last year trying to make sure, what’s our new name going to be? What are we going to call this? And ultimately we landed back on exactly what it is that we’re giving kids, which is time back to go do all these other exciting, interesting things during the rest of the day. Because it doesn’t take all day to educate kids. You can not just do academics, but crush academics in a much shorter period of time when you’ve got this personalized mastery-based tutoring.

Transforming Education Models

Michael Horn: Well, and I think you’re speaking to, like, there’s many reasons why Alpha has done what many education startups struggle with, which is jumping into the mainstream narrative. And that sense of giving kids back their most precious resource, time, is clearly part of it. AI is another part of it. And that’s where we want to dig in with you today, just given the focus of the podcast that we’ve had here. But let me perhaps frame it this way. We now have two school founders on this show, you and Diane, who have each created models that at one level I think look awfully similar in certain respects. If you mix in, say, Rocketship Education or something like that, which was founded in 2006 and is an elementary school model.

Michael Horn: We can take that and Summit Public Schools that Diane founded and Rocketship and say, hey, a lot of the structures that Alpha School has at one level, like a relatively limited block of time on learning academics and content in ways that are personalized for the learners, large blocks of time for projects, a big focus on skill development and habits of success or life skills like growth mindset, agency and so forth, those are things that were present in models like that. But then we come to at least one big difference, which is, yes, Alpha was originally designed, as you said, right before the mainstream use of AI, just like Summit and Rocketship were. But Alpha is now aggressively developing AI-powered dashboards, AI-powered learning applications, AI-powered knowledge, interest and working-memory graphs for students. And so, given our focus on the podcast in this particular season around AI, I’d just love to dive into the AI parts of the model with you. Even as we’ll say up front, like, AI is clearly inextricably linked to the other elements of the overall Alpha model. Pulling them apart is not fair to you all. But just given that we’ve heard so many podcasts with you about Alpha, and we suspect most of our particular listeners have as well, I think digging into that AI question in particular, and this is maybe the framing we can bring to it, which is, what does AI allow us to do today that was not possible in the best of the personalized models from a decade or two earlier?

MacKenzie Price: Yeah, I think that’s a great way to frame it, because artificial intelligence in the learning science world now is what I believe is like the microscope to biology. It is the tool that is finally enabling us to integrate all of these learning science principles that have been known for many, many years and that can result in kids learning 2, 5, 10 times faster. It just was never possible to incorporate, obviously, in a teacher-in-front-of-the-classroom model, but even more importantly, even in an individualized adaptive app type setting. And so to give context to that, you know, when we first started our school back in 2014, we knew that we could use apps. So we were using things like Dreambox and Khan Academy and Freckle and Grammarly and Egump, a lot of the apps that were kind of out there. The difference was it was still hard to manage the way that kids worked through the apps. And so one of the things we found is that there’s a lot of what we call anti-patterns that kids will do when they’re using apps. It could be things like topic shopping.

You know, they jump in and say, hey, I’m going to go to, you know, I’m a fourth grader, but I’m going to try some fifth grade material just because it’s kind of interesting. Oops, it got hard. I’m going to back out of that. I’m going to jump into some third grade material or I’m going to kind of mess around on this or even more just not engaging with the apps. You know, you could have everything from a kid not even sitting in front of his computer or picking his nose or, you know, just rushing through the explanation and not reading it. And that’s where a lot of the big difference is. One thing to kind of just be clear about, we do not use a chatbot in our education platform. Chatbots in education are cheat bots.

And it was interesting. I actually had a big event last week in Austin. The National Governors Association came and toured and we’re learning all about our schools. And I made that comment, you know, we do not use chatbots. They’re cheat bots. 90% of kids are going to use them to cheat. And a couple hours later, there was another vendor who’s basically built a chatbot for education that was like, well, you know, I put him in a, put him in a little bit of an uncomfortable situation. But I think that’s really important to know.

And one of the things I really don’t want to see in our education system is we slap a GPT on every kid’s computer and suddenly say we’re an AI first classroom. Right? And I was actually talking to a Stanford professor a few weeks ago who said, you know, here’s the problem that we’re seeing. Educators are using, you know, chat features, ChatGPT to create lesson plans, you know, and do these things. Kids are using ChatGPTs to write their stuff. Professors or teachers are using ChatGPTs to grade it. And so basically the AI is just talking to each other. Right. And we’ve taken the human out of it and that is totally not what we’re doing.

So there’s kind of two features that I can go into around how we’re using AI in our model.

Diane Tavenner: Yeah, let’s take this piece by piece, MacKenzie; that context is super helpful. Let’s start in the morning block, where you’ve already gone a little bit with some of the apps and whatnot. You all roughly have about three hours where students are doing sort of two hours of head-down learning, that quote-unquote academics; my language for that is content knowledge. So forgive me if I slip up and use different lingo. And as I understand it, and as you were just sharing, you’re using these apps or adaptive learning products, and you named several for us there. But there are some places where you are using apps that, as we understand it, you’ve built for yourself. And this tracks with my Summit experience.

Our first choice was always to buy quality products. Second choice was to partner with startups or companies that wanted to work with power users. And last choice was to build our own when it didn’t exist. So I’d love to unpack. Where is it that you’ve determined there wasn’t something good enough and that you have literally built your own application and are using it right now? And are those AI native applications?

AI-Powered Personalized Learning Systems

MacKenzie Price: So we’ve definitely had a number of years to test out a lot of different apps, see what worked well, what didn’t work, where there are gaps. And what I would say is we’ve curated over this period of time which apps are best for which grade levels in which subjects. Not all apps are created equal. But to kind of start at the very beginning of where we’re using AI, we are using AI to be able to assess what a student knows and what they don’t know. So any student who comes into our Alpha school to start takes an NWEA math assessment. We also do math assessments three times a year for all students, and that’s how we’re measuring growth. But what we do is we take the information that comes through that assessment as well as some other initial assessments that we’re able to do with students. And from there we have AI tools that will basically build out the personalized lesson plans that say, all right, here’s where a kid needs to go, here’s how we hole-fill, which of course is a very common issue. Even our students who come into us with, you know, A’s on their transcripts, you know, can be three years behind in academic content.

Right. Actually we found out students who came in to us this year from other schools, if they had a B on their transcript, they were between three years behind and seven years behind. Which actually shows, you know, grades mean nothing anymore in this day and age. So we take the assessment and we have an AI tool that basically builds that out. So what does that look like?

Diane Tavenner: And that’s a tool you all have built internally, is that Time Back?

MacKenzie Price: That’s a tool that we built out, and it’s using standardized third-party assessments like MAP.

Diane Tavenner: Yeah, the results. And you’re ingesting the results on that.

MacKenzie Price: Exactly. So they build that. So the experience for a student, a student sits down in the morning during their core block of academics and they will log into a dashboard. We have a time back dashboard that a student logs into and says, okay, it’s time to do math. Now in some of our classrooms, kids get a choice of what subject they want to take on first. Other of our classrooms, you know, we have a set thing. Okay, we’re doing math first, then we do reading, you know, then we do language.

Diane Tavenner: And is that based on age?

MacKenzie Price: Depends on the age, yeah. And so it’s always interesting. You know, what we’re really working on creating is self-driven learners who understand their skill of learning to learn. So like if you talk to some of our fourth and fifth graders, you’ll hear some of them say, hey, I usually will choose to take on my hardest subject first when I’m fresh and I’m ready. Right? In our kindergarten and first grade classrooms, you know, that’s more, okay, it’s math time, it’s reading time, you know, and it’s kind of prescribed there. But basically what will happen is a student will go into the dashboard, click on the subject that they are going to take on. So that’s math as an example.

And then the dashboard takes them to the app that has been determined is the one that is right for them and what they’re doing. Now when I say right for them, we also as a school have kind of used certain things. For example, Math Academy is a third-party app that we love. We think Math Academy is amazing. They’ve been fantastic partners to work with, and it works really great for basically third grade through high school. We were using another app for our younger students; earlier this fall, we were using Synthesis, which, you know, that’s a sexy app that, you know, parents kind of like, because kids are doing interesting things. We were seeing, though, like, I don’t know if we’re getting the results we want.

So we’ve made changes, you know, to that, but they’ll go to the level that they need. So you’ve got a fifth grader who maybe needs to go back and revisit concepts from third grade. You know, they have to hit this fast math, you know, concept, or they’re looking at these fractions or whatever it is. So it takes them to that lesson and they’re doing that. So that’s the first use of AI that we have. Now the second use is the vision model. So what’s happening is we’re using an AI tool that we have built that tracks the screen and is actually watching to understand how a student is moving through this material.

So, for example, when they are doing reading comprehension, are they rushing through the article? Are they just scrolling to the bottom of the screen and randomly guessing, or are they taking the time? And of course, you can tell this is a reading article that normally would take, you know, 69 seconds to read, and this kid just answered it within 10 seconds. Okay, now we’re realizing we have an anti-pattern, which is basically an improper use of engaging with the apps. So we’re looking at that in terms of the vision model to see how kids are learning. When they get a question wrong, are they watching the video? Are they, you know, taking time to read the explanation? And then our AI tutor creates coaching for that student.
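The timing check Price describes (a passage that should take roughly 69 seconds to read, answered in 10) reduces to a simple ratio test. A minimal sketch, assuming a hypothetical 25% cutoff; the function name and threshold are illustrative, not Alpha's actual code:

```python
def flag_rushing(expected_seconds: float, actual_seconds: float,
                 min_fraction: float = 0.25) -> bool:
    """Flag an answer submitted in a small fraction of the expected
    reading time, one of the "anti-patterns" described above.
    min_fraction is an assumed tunable, not a published Alpha value."""
    return actual_seconds < expected_seconds * min_fraction

print(flag_rushing(69, 10))  # flagged: 10 seconds is well under a quarter of 69
print(flag_rushing(69, 60))  # not flagged: a plausible reading time
```

A real system would presumably smooth this over many items before coaching a student, so one fast answer alone doesn't trigger a flag.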

So it’ll say, hey, buddy, we’re realizing that, you know, you’re not reading the explanation when you get a question wrong. If you take the time to go forward, here’s what it would do. And so we’re basically giving coaching. Now, the other thing is, in our schools, we also have our cameras turned on and they are recording the students.

Monitoring and Progress Tracking

MacKenzie Price: If the computer has been, you know, quiet for a minute and a half, is it because the student’s not even in front of their computer, or is it because they’re goofing around with their buddy next to them, what is it that they’re doing? And so it’s able to do that. Now our families have the ability to turn that feature off at home if their students are using that feature at home or if they’re working at home, they can turn that off. But in our schools, we do require that that be turned on. And so we’re able to kind of look at the coaching. Now students will basically walk through each of their core subjects, generally in about 25 minute Pomodoro sessions, and then they’re done with their academics in that two hours. The other feature that we’re using with our AI tool is we can really well analyze and understand how a kid is progressing through the material. You know, what percentage completion are they on each of the different apps, you know, and grade level subjects, things like that.

How many minutes do we anticipate? How many weeks will it take before they’re finished with, you know, fifth-grade math? If they put an hour of homework in a night, here’s how much shorter that will take. And one of the things that people love about that: not only do our students get to really see and understand, they have a sense of ownership over their academic journey, but of course, parents can log in, you know, every day if they want to, to be able to see what is my kid working on. What, you know, did he hit his goals? And then what we’re also tracking, in the way that goal setting works, is students are getting experience points, XP, to borrow, you know, a term from video gaming. And so the goal is that they get 120 XP per day, which is 120 minutes of focused work; one XP is equal to one minute of focused work.
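The XP accounting described here is simple arithmetic: one XP per focused minute, 120 per day. A hedged sketch (the names are hypothetical, not from Alpha's platform):

```python
DAILY_XP_GOAL = 120  # 120 XP per day; one XP equals one minute of focused work

def hit_daily_goal(focused_minutes_per_session: list[int]) -> bool:
    """Sum focused minutes across sessions (XP maps one-to-one onto
    minutes) and compare the total against the daily goal."""
    total_xp = sum(focused_minutes_per_session)
    return total_xp >= DAILY_XP_GOAL

# Five 25-minute Pomodoro-style sessions clear the 120 XP goal.
print(hit_daily_goal([25, 25, 25, 25, 25]))  # True
print(hit_daily_goal([25, 25, 25]))          # False
```

Note that only focused minutes count, so the vision model's engagement signal would feed into what gets logged per session.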

And so that’s what we’re working on. And then when you ask about the apps that we’re using: Alpha Math, Alpha Read and Alpha Write are some of the apps that we’ve built and incorporated into our model. And then we’ve got some other things that, you know, that we’re continuing to roll out. One that’s actually available to the public for free is an app that we’ve built that helps encourage the love of reading, which of course is a difference between learning to read and learning to love to read. And that’s called teachtales.com, and basically it’s using AI to generate personalized reading material based on a student’s interests, delivered at the appropriate Lexile level for them.

Diane Tavenner: Awesome. There was a lot in there. So let’s.

MacKenzie Price: There was a lot. I need to work on more short sound bites. Well, I hope that doesn’t get worse as I get older.

Diane Tavenner: We all have things we need to work on, right? Let’s stick with those three apps that you’ve developed, Alpha Math, Read and Write. Are you using those across all of your grade levels? And are they AI native, are they adaptive? What’s going on with those apps?

MacKenzie Price: So Alpha Write is something that we’ve been really excited about. Just to have an idea of how the app works, we break it down with the idea of: can you write a grammatically correct sentence, then building onto paragraphs, then building onto essays and working through. And I will tell you, I mean, we had a lot of students, again, A students from their previous schools that come into Alpha. We had high school students who couldn’t write third grade level sentences. Like, it’s just crazy how poorly this is going.

Diane Tavenner: Yeah, that’s one of the questions I think that comes up is where writing is situated in the model. So it sounds like you’ve got writing in the morning block as sort of a standalone kind of just expository approach to writing.

MacKenzie Price: We do have writing in the morning block now. Our students are also doing a lot of writing in the afternoon. So, you know, for example, they’re writing, you know, talks that they’re going to give for TED Talks, they’re writing essays, they’re writing book reflections that are part of our afternoon block, which is our check chart time. So it is a common fallacy that people have of, oh, these students aren’t actually doing a lot of writing. They absolutely get a lot of writing in. But we’re really breaking this down into everything we’re kind of thinking about: what actually works when it comes to educating students, and where have we been doing it wrong? And that’s where I think it’s so exciting to see all these learning science principles that can come up. And, you know, for example, here’s another thing that we do during the core block period.

Optimizing Learning

MacKenzie Price: We’re measuring what percentage accuracy students are at to understand, are they in the zone of proximal development. Right. If they’re getting more than 85% of the questions right, you know, then that’s a sign that they’re in too-easy material. If, you know, they’re under 70, it’s a sign this is too hard. How do you make sure that they’re staying in the right spot? And so that’s the other part, where the AI tool will kind of say, whoa, hold on here. We’re noticing that there’s something changing or that a student’s not being hit at that right level. The other thing that’s going to come into play is we’re also going to be able to really take a lot of things around cognitive load theory principles and understand, okay, if a student only needs 5 reps of a concept in order to master that concept, they shouldn’t have to sit around and do 10 reps. And if the student needs 15, they shouldn’t only get 10.

So that’s just some ideas of some of the things that are coming in the pipeline that generative AI is going to make really available.
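The accuracy band Price cites (above 85% too easy, below 70% too hard) can be sketched as a simple classifier; a minimal illustration under those stated cutoffs, with hypothetical names rather than Alpha's actual code:

```python
def zpd_status(correct: int, attempted: int) -> str:
    """Classify session accuracy against the band described above:
    above 85% suggests the material is too easy, below 70% too hard,
    and in between the student sits in the zone of proximal development."""
    accuracy = correct / attempted
    if accuracy > 0.85:
        return "too easy"
    if accuracy < 0.70:
        return "too hard"
    return "in zone"

print(zpd_status(18, 20))  # 90% accuracy: too easy
print(zpd_status(15, 20))  # 75% accuracy: in zone
print(zpd_status(12, 20))  # 60% accuracy: too hard
```

In practice such a check would run over a rolling window of recent items, so a single session doesn't bounce a student between difficulty levels.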

Diane Tavenner: So two things I’m trying to understand and contrast from pre-AI to now that we have AI, because a lot of what you’re describing sounds very much like what Summit Learning was about. You know, we built thousands of playlists, and young people, they actually had a lot of choices. So we were working on self-direction. You know, they would do a pre-assessment, they would know what they know, they would prepare, you know, and study and learn. And then they would take a post-assessment; we would assess all the things you’re talking about. So I guess I’m wondering, in these apps, is that similar, or is AI actually playing a new and different role here? And then I do want to get to the sort of Time Back coach as well, because I realize it’s connected. But are we using AI in these apps? Are these sort of still adaptive learning apps? Are they …?

MacKenzie Price: Yeah, the third-party apps that we’re using are not using, you know, an AI feature, and they’re not creating dynamic content. You know, the K-8 Common Core curriculum is what’s being fed into these apps. Where we are getting to is we are going to be moving in ’26 to dynamically created content. Obviously there’s been a problem; there’s still hallucination issues. In fact, we have a group of high school students, kind of our top honors students, who we are testing out dynamic content with, and they’re able to say, hey, guess what? The AI is acting up here. Like, this is totally a wrong question on that.

But right now what we’re doing is we’re going through and we’re analyzing every lesson before it’s out there. So this isn’t just like an LLM creating a fifth grade curriculum. We’re still using that. Where the AI tool is really being used is around that vision model. So that’s the biggest difference is that, and that’s part of the reason, you know, if you talk to families who went to Alpha, you know, six years ago, you’ll hear a much more varied experience. Right. We had a lot of families that my kid wasn’t learning.

They were goofing around. There wasn’t this connection. Now there were a lot of reasons for that. We didn’t have the motivation model locked in. We didn’t have the high standards, just expectation. But the other big part was it’s really easy to goof around when you’re learning on these, you know, in general on these apps. And so that’s the biggest thing right now is that our AI tutor is ensuring that kids are moving efficiently at the right level and then understanding what the pace is for that and creating basically new lessons that will fill academic holes, you know, and go at their pace, is what I would say. But yeah, if you’re looking at, you know, for example, a math academy, you know, type of thing, you know, that is static content that, that kids move through and kind of work on.

We used to use IXL, actually. IXL kicked us off of their platform. They don’t like us for some reason. They literally won’t even tell us, they won’t talk to us. They just say, you’re off. But we had used IXL a lot. And actually one of the things I always say for families that are wanting to recreate this at home, I actually think IXL does a really good job across a lot of dimensions. They were a pretty good app.

They don’t like Alpha for whatever reason, but, you know, that’s where we’ve kind of been able to figure out what this is. But I think the other question is, when you talk about things like reading and writing, it’s really helping break down our apps that we built. You know, they’re breaking down into small components. Let’s make sure a student is excellent at this and then build from there. I think in a traditional classroom, having students write a five paragraph essay is not necessarily helpful. Instead, are they really understanding the structure and mechanics of a sentence? Are they understanding what a paragraph should look like? And we really use the idea of building blocks in all of the work that we do.

Diane Tavenner: So does that mean you’ve got, underlying at least the apps you’re building, sort of a knowledge graph that you’re working with? Yeah, I mean, again, fairly consistent. Okay. Let’s dig into that AI coach or tutor, like you said, because it sounds like this is not a traditional dashboard where young people are looking at their own data and information. Maybe they are. But what it sounds like you’ve really got is this AI coach or tutor coming in to keep them motivated. I mean, the apps you’re talking about, lots of schools have them, as you know, and lots of schools just don’t get the number of minutes, they don’t get the progress. And so it sounds like that’s the key.

So that is an AI tutor, but it’s not a bot that you were referencing.

MacKenzie Price: Well, it is, but you’re not correct about that. The AI tutor is not providing the motivation levers. There’s no motivation that’s happening through the apps. The motivation is all through our guides, our human teachers. They are focused on motivation. And just to be really clear, the reason we’re having the success and the academic results we’re having is not because of our ed tech. Our ed tech is fine, it’s whatever.

But there is no magical edtech product that just immediately motivates a student and makes them, you know, lock in and be able to learn well. We haven’t built it. We haven’t seen it yet. The key for us is that we have freed up the time of our human adults to be able to focus on motivation. And so that could be everything from the idea that students earn alpha bucks for hitting their XP goals to, I was just talking to one of our kindergarten guides the other day, and she said, you know, we have kids where when they unlock a goal, they have a secret signal, they’ll, you know, scratch their nose.

And that signals, oh, you hit a goal, let’s do a silent dance party. And it’ll be a 15 second, you know, the guide is doing the silent dance party, and then they move on to the next thing. It can be individual motivation models. We had a student who, as a result of hitting her academic goals over a period of six weeks, earned time in a professional recording studio to record an original song that she had written and was singing. So that’s the whole key. And by the way, 90% of what creates a great learner is a motivated student.

10% is having the right level and pace, which is what our edtech tool does. What the AI tutor does, though, is it actually gives kids the ability to go on their dashboard each day and see, okay, I hit my rings, I filled my ring. Think of an Apple Watch, you know, with exercise rings. That’s what it is for each student: did you fill your ring? Which means, did you get your XPs in that subject? And then they can go into their learning dashboard and see at any time, here’s how much I hit. We even have a waste meter in the corner that says, you know, you’ve wasted 20% of your time by not engaging in the right way.

Diane Tavenner: So the student doesn’t actually, like, engage with the AI tutor. It literally is just powering this dashboard then.

MacKenzie Price: Well, it’s powering the dashboard, and then it will pop up and say, you know, it’ll write something like, hey, watch the video explanation. You know, sometimes it’s, you know, going.

Diane Tavenner: It was like a nudge or something.

MacKenzie Price: One of the things, yeah, we’ll see is that we’ll often say to students, you know, often the fastest way forward is to slow down, slow down and read the explanation. So it does that. But here’s what it’s not doing. There’s not some little avatar, Dashy, that pops up and is like, hey, Johnny, you’re doing such a great job, two more questions. It’s not that kind of thing. The AI really is kind of undercover.

And it’s again, building these lesson plans and then analyzing and understanding how a kid is moving through that.

Diane Tavenner: Building the lesson plans that are in the apps or in the …

MacKenzie Price: Yeah, taking them to the right spot. So it’s able to say, okay, we’re going to take you.

Diane Tavenner: Oh, by lesson plan, you’re saying directing them to specific.

MacKenzie Price: Directing them to this Math Academy. And we put up these guardrails, basically, that don’t allow a kid to pop out of Math Academy and say, hey, instead of doing this concept, I’m going to go play over here, I’m going to go do this. And I think that’s a problem in traditional classrooms when people are using apps. They’re given their iPad or their Chromebook, they’re put on Khan Academy, and then they’ve got the ability to kind of bounce around. There’s one other topic that I think is also important, and this is actually a lesson we learned very early on, and that’s the idea of requiring students to do some work each day in each subject. Right.

And there’s a lot of alternative education systems that’ll say, hey, if a kid doesn’t really want to focus on math for a couple months, that’s okay. They want to pursue reading. We actually believe. And this was, I’ll never forget the very first year we had a first grade student who absolutely loved math. Loved math. He was at 8th grade level math. And the problem was he needed his guide to read the word problems to him because he couldn’t read and he hadn’t read in like months. And that was one of the early unlocks where we realized, okay, we have to require, you know, time in each subject each day that students are accomplishing, which some, again, some alternative schools don’t do that.

Diane Tavenner: Yeah. So it sounds like then, the motivation is highly related to this relationship that young people have, which we know is very powerful. And then just following the directives essentially of the guide and then the technology to do what you’re telling them to do and stay on track.

Confidence Unlocks Student Motivation

MacKenzie Price: Exactly. And then I think the next part of the motivation, kind of the deeper level of motivation is and you know, people often go, oh, is extrinsic motivation bad? And you guys know, there’s all the research that shows there’s not necessarily even that same, you know, intrinsic versus extrinsic. But what we are seeing is that as students become more and more capable, you know, and build up their knowledge, they become more confident and they do get more motivated. They suddenly realize like, wow, okay, I can be 99th percentile in, you know, math, in language, in science, I can do this, it’s not as hard. And so we find that kids, their identity really changes as they start to see that, wow, I’m capable of learning when I’m given the right level and the right pacing and I get motivated to do that. And that is what I think is the really cool unlock that we enjoy seeing when students finally realize this. Like, wow, I can do this.

Diane Tavenner: Yeah, definitely. You said that one of the benefits of this approach is freeing up the guides’ time to really do the more important things. And as I understand it, one of those activities is one to one meetings with the young people in this morning block. This was, and continues to be, I think, the most highly rated element of the Summit model: the mentoring model, with the one to one check ins as a part of that. And over the years we started leveraging technology to enhance those check ins. I’m curious if you’re using AI in any way to support the one to one check ins, and what that looks like.

MacKenzie Price: Yes, we are. So we actually mic up the guides during those one to one check ins, and then we take those transcripts and run them through for everything from what percentage of the time you were talking compared to the student (if you’re talking too much, that’s a problem) to how many questions you were asking versus stating. We also actually use that technology for some of our students as well. An example of that: one of our students in Arizona struggles with a growth mindset, and when he’s struggling in his academic work, he’s quick to say I’m dumb or I can’t do this or whatever. And so we put an AI mic on him, and then he and his guide go through daily and analyze: how are you speaking to yourself? Were you being kind to yourself? And what we found, amazingly, is that just him knowing he has this lanyard around his neck that’s listening helps him remember, hey, speak kindly to myself. I can incorporate these growth mindset strategies.

So we’re able to do that. We have guides that wear these lanyards throughout the entire day so that they can understand and then get feedback on their coaching. And so that’s a great part of it. We’re using AI; our organization is very much about being AI first in everything we do. How can we always take everything to the next level and build that out? And then of course the other aspect of AI that comes across in our afternoon life skills workshops is kids are learning how to use these tools that are going to help them be successful. So kids are starting to build out and develop these brain lifts and then build out an LLM. In fact, we actually just had a pretty exciting thing happen last week.

One of our students at our high school had built an LLM around safe teen dating advice, and she ran a research study with a University of Texas professor around, basically, how good was the LLM she built compared to ChatGPT and suburban moms, and they just submitted that research to Nature. So it’ll be really exciting in the next couple of months; we’ll hear if that gets accepted. And that should be a pretty cool thing. So that’s the other part of this: you’ve got to make sure kids are being equipped to learn how to take advantage of all these new tools that are constantly coming out.

Diane Tavenner: For sure, for sure. Let’s move to that afternoon block and unpack that a little, because I think I hear far less about the afternoon time, which is familiar to me, because in the Summit model too, the self directed learning time seemed to get all of the publicity. It was only two hours, it was only 30% of the young person’s grade, but it got like 90% of the attention. So let’s break the afternoon into the K8 and the high school, because I think those two are different in your model. Talk about the K8 afternoon, where I understand young people are learning life skills. Is this a project based approach? Who’s planning this? Is it a curriculum? I think, as you just said, students are encouraged to use AI on their side.

But what I’m really interested in is how are guides and educators using technology and specifically AI for this afternoon block, the dashboard here. What’s going on there?

MacKenzie Price: Yeah, this afternoon block is really when our guides are shining in terms of being able to plan and connect and mentor our students. And that’s done a few different ways. In K through 8, our students are participating in these life skills workshops that are developing leadership and teamwork, financial literacy and entrepreneurship, relationship building and socialization, public speaking and storytelling, and grit and hard work. And so every workshop that is created has to be able to pass two tests: what is the life skill that is actually being taught, and how are we going to assess at the end of the six week period whether that has happened? So, for example, we’re in the week before the holiday break. We’ve got test to pass events happening at all of our schools around the country, where parents and people from the public can come in and see something the kids have been working on and understand: did they learn this life skill? An example that we often talk about, because I think it really highlights the idea of how do you learn grit, how do you learn to stick with something when it’s hard: we have students who participate in grit triathlons. And that could be things like having to solve a Rubik’s Cube, juggling three items for 30 seconds and running a mile without stopping.

And you can see that a third grade student has been able to understand, okay, there’s an algorithm and I keep practicing my Rubik’s Cube, and I start by juggling scarves and eventually I’m juggling balls, and I incorporate atomic habits to, you know, walk and run. At the end of six weeks, these students are able to accomplish that goal, and it shows grit. We also do a lot of physical workshops that build out things like grit, like facing fears. For example, we’ve got a rock climbing workshop, and for our kindergarteners, they’re climbing a 40 foot rock wall. And you watch the difference over that six week period: you’ve got a five year old who’s like, I don’t even think I can hold on to one of these, suddenly going 40 feet up. The only ones more amazed by that are their parents, right? Their parents are like, this is amazing. So a lot of physical workshops are doing things like that, and then the guides will use AI tools as part of building out those workshops and being able to measure them. One workshop that we do every year is very popular.

It’s a communication and, basically, uplifting others workshop. And the test to pass for that workshop is that kids go into an escape room, one of these rooms where they have to solve a bunch of different puzzles and logic challenges. And we mic the students up and we use AI to analyze what percentage of their language is considered uplifting and positive. We’ll do that in sports activities too. Kids will get feedback on their public speaking. They’ll be using AI tools to build graphic novels, to build films, all kinds of things that they’re working on that way. And so that’s a combination of group workshops, and then they also get individual time to pursue what we call check chart independent projects.

Diane Tavenner: Ah, so it sounds like then your guides are using just AI, like an LLM, to help them plan those workshops. And then are you rubric grading or just checklist grading?

MacKenzie Price: We’re rubric grading as well. And so for each life skills workshop we’re grading what the quality of the workshop is. And that’s everything from the kids’ assessment of whether they loved the workshop (we’re constantly surveying parents and kids to make sure that what we’re delivering is right) to how the guides are doing with it.

Diane Tavenner: And that feedback from the rubric, is that derived from the AI or is the guide doing that? And then is that also incorporated in their dashboard?

Iterating to Build Measurable Skills

MacKenzie Price: A combination of both. And I think in a lot of ways what we are constantly doing is iterating: how do we build upon a workshop, how do we make sure each session kind of comes together? In fact, today, again, it’s the last week before the holiday break, we’ve got staff days every evening after school as we plan and go through what worked, and what we’re doing to increase, you know, love of school, learning 2x in 2 hours, and development of life skills. So we’re working through a lot of these types of activities: how can we make these Alpha core life skills, these soft skills, measurable? How can we understand how to measure these skills versus just kind of saying, oh, sure, they’re learning leadership qualities from something? What are the things that we can do to build that out?

Diane Tavenner: Interesting. One of the big conversations is how AI can and should change the role of the educator. And you all have purposely and publicly redefined the role of the teacher to be a guide. And I’ve been tracking, through this conversation, what I think some of the shifts are in how you think about teacher versus guide and educator, and how AI is enabling that. So let me run this back by you and see if I got it right. The guide’s not planning any sort of lectures or traditional lessons, and they’re not doing any assessment. They’re leaving that to the technology.

They are doing one to one check ins, and they’re getting feedback from AI, from their recordings and things like that, about how they can improve; transcribing and reviewing all of those things takes time in an educator’s day. And then they are planning the afternoon workshops. It does sound like they’re doing some of the assessment there. And they’re certainly working closely with the students on the motivation piece and engaging directly with them, and it does sound like that’s supplemented by AI. Did I get that right? Sort of the role of the guide, if you will.

MacKenzie Price: Yeah, you did get that right. Now, there’s one other aspect of the guide’s job in the morning academic time, in the core time. I think people have this misconception that, oh, you’ve got a group of kids that are just staring at computers with no adults in sight. Our guides are there and they’re engaged, but they’re not there to teach academics. So if a kid says, hey, I’m struggling with this, you’re not going to see one of our guides saying, okay, let me show you how to work through this problem, you’ve got to carry the one, let’s do a tutoring session on this. Instead.

They’re going to be basically asking students questions to help them understand if they have used their resources. So, hey, were you able to watch the video? Did you go into the resource library to find another answer? Did you check these kinds of things out? And so that’s where they’re really providing coaching around how to go about learning to learn. Here’s one thing, I don’t know if you’d call it an exception, that I will say for our younger students, our kindergarten, first and second grade: we have not found, to this point, a replacement for that one to one reading time. So we have reading specialists at all of our schools for our younger learners who are working with students on reading. And our students get one to one pull out time to be practicing that reading. It’s something critical. We are certainly seeing some great progress and success around learning to read.

But you have to have that time reading out loud with a human. And so the one thing I would say is, for our guides at our younger levels, we do have certified reading specialists who are at those schools. And it’s critical.

Diane Tavenner: We didn’t talk about the high school afternoon time. And as I think you alluded to, and as I understand it, this is where young people are picking one project to work on for four years. And again, I don’t know if that’s a headline or if that’s accurate. I must say this is an element of the model that gives me a little bit of pause, and there’s a lot of buzz I’d really love to get underneath. So what’s actually happening for high school students in those afternoon hours, for four years?

MacKenzie Price: You know, so we have two tracks for our high school. We have what we call an honors track. And the idea of that honors track is basically kids who want to be sort of Ivy League bound; they’ve got ambitions of going into a top 20 university. And so in that program we’re basically saying, okay, we’ll deliver 1550 SAT scores, fives on at least a few hard AP courses, and what we call an Olympic level Alpha X project. This is a project that is as impressive as being an Olympian. You know, what is it? An example of that: one of our students just got accepted to Stanford this past week. She’s the student who’s also submitting her research to Nature.

If she’s accepted, she’ll be the youngest female ever, and the only high school student in history, to be able to do that. They work on something big. Now, during that time when they’re working on these Alpha X projects, there’s no question that you’ll have kids who might decide to change their project 10 times during their four year experience. What they’re really developing is the skill of learning how to go deep into something and become an expert. And so we’ll do things like two week long sprints where it’s, go learn everything you can learn about this subject. And at the end of those two weeks, just as often as not, you’ll have kids come out and go, actually, it turns out I’m not interested in that, I want to go into something else. And the other thing is, these projects that kids work on aren’t necessarily something they say, oh, I’m going to do this for the rest of my life.

Right. I’m going to go build this out in college or something. But it’s a project that they’re able to develop and go deep and become an expert on. Now, we also have a non honors track at our school, and that non honors track is for kids who say, you know, I really love the idea of getting time back to just go do things I’m interested in. So for example, we’ve got a student who wants to get his pilot’s license, and he loves the idea of flying planes. Now, does having your pilot’s license at age 15 get you into Stanford? Maybe not, but it gives you time to go develop these things. The same goes for a lot of our athletes who want to have time to pursue their sports or whatever. That non honors program basically is 1350 SAT, which is, you know, top 10%, fours and fives on APs, and time to go and develop the interests that they have. Honors students are spending about three hours a day on their core learning.

The non honors track is about two hours of what they’re doing. Kids are still taking AP courses, they’re still doing all those kinds of things.

Diane Tavenner: Sorry, you lost me for a second. Where’s the AP course? Is that in the afternoon or in.

MacKenzie Price: No, that’s in the morning. The core academic time is students taking four years of English, four years of math, foreign language, all that kind of stuff. So they’re doing that in the morning. Afternoons are for working on these Alpha X projects. And then we do a lot of workshops around life skills for all of our students. That’s everything from rejection training to giving and receiving feedback to leadership challenges. A lot of things that students are working on to build out those skills is what our high school program looks like.

Diane Tavenner: So in the high school afternoon, there is sort of still a framework curriculum. Maybe it’s not every day, all the days, but you do have some of these skills that you’re working on in workshops with students.

Developing Projects with Real Impact

MacKenzie Price: Yeah, there’s absolutely a framework. And then the kids who are working on their Alpha X projects basically go through different levels, right? As an example of the highest level, where these kids are getting out and launching real businesses or activities: one of our students, who’s a senior this year, is working on getting a musical launched on Broadway. So she actually spends five to seven days a month in New York City working on recording with producers, meeting with potential investors, doing those types of activities. She’s kind of been released out into the wild, in some ways, to go work on these projects. But the other thing they all have in common is that every day our students are spending an hour working on their brain lift. So this idea of, whatever the interest they have, they’re staying current on research, what’s going on, and they’re using this brain lift to then build out their LLM and GPT based on it. They also work on things like creating a spiky point of view.

So an example of that: we have a student named Alex who is building a plushie doll that is basically a mental health coach. And the spiky point of view that he’s built is that he believes AI can actually provide better counseling to a teenager than a human counselor. Now, that’s a very spiky point of view, right? Especially when you think of all of the dangers in this area. But he’s built certain things into his system that he believes are making a successful AI mental health coach. And so the idea is building out these things and learning how to become an expert at using AI to build them. We have another student who’s a filmmaker, and his ultimate goal is to create an Oscar winning film.

And part of what he’s done is create, basically, a spiky point of view around how filmmaking can be done. He reached out to a bunch of different podcasts and got accepted and invited on three of them. Now, there’s a lot of rejection training going on in there as well, because there are a lot of podcasts that say no, or don’t answer at all. But they’re learning all of these skills during this time, plus getting the traditional academics that students in a normal school are getting.

Diane Tavenner: Where would science labs fit into this model? Or, you know, projects in history, where we know dates, facts and information are the base, but you actually need to understand the big themes and trends. Where does that fit in your model?

MacKenzie Price: Well, take things like science labs. We don’t have science labs. Our students are taking AP Biology, AP Physics, AP Chemistry, but they are watching great YouTube videos that explore these topics instead. We haven’t found that getting kids in a lab doing beaker experiments is a critical piece of what they’re doing; they can watch these things. Now, kids who are really excited about something they’re working on in science can go in and build something out.

So for example, we had a student who got really interested in cancer research and epigenetics, and she ended up going out and creating a documentary about cancer and epigenetics that’s been viewed over 5 million times. So we kind of think everything we do at these schools is taking an interest or a passion that a kid has and figuring out how to get them real world experience with it and how they can build. We had a student who loves physics, really interested in science. He also went on to become a professional water skier, and he would take physics principles and work on how he could improve his water skiing times and rope length, incorporating those principles. Then there are things like history. Students are taking AP World and AP European and AP US History, so they’re doing all those things. They’re getting a lot of experience with writing, obviously, as they’re learning on the apps. They’re coming out with fives on their APs and doing very well, and they’re having some connected time with each other where they’re basically going through some checkpoints at the same time.

Where they’re interacting: last year, basically in April, you heard a lot of singing, because kids had used AI tools to help them remember a bunch of their facts for AP World History, in the same vein as Hamilton lyrics, and were working through those things.

Diane Tavenner: Is that the College Board’s digital curriculum that they’re using for the AP courses? Yeah. And then, that like joint collaborative time would be in the afternoon. Is that how it connects?

MacKenzie Price: Yeah.

Diane Tavenner: Got it. Awesome.

Michael Horn: This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: what will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers and for civil thriving? Learner Studio is sponsoring this season on AI in education because, in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.org.

Michael Horn: This has been super helpful, MacKenzie. Huge thanks. But before we let you go, we have this segment where we get away from the conversation around education, generally, although not always. Just things we’ve been reading, watching or listening to outside of work, if you can. But if not, that’s cool too. So we’ll let you have the first say at it before Diane shares what’s been on her list.

MacKenzie Price: Well, I’m sure that I’m going to give you an answer that is not going to be impressive to any of your followers or listeners.

Michael Horn: I guarantee you most of my answers are unimpressive. So go ahead.

MacKenzie Price: My absolute favorite thing to do in the evening when I get time to relax is I love to take a bath and I have a huge television that is mounted in my bathroom in front of my bathtub that is non-negotiable. My husband and I just moved into an apartment a year ago and I was like where is the TV in front of the bathtub going to go? Like I will not move into an apartment that doesn’t have that option. And I got in the bath last night and I was so excited to watch the Taylor Swift Eras documentary. So I am halfway through the first episode. My girls and I, and actually my husband too, we totally bond over that. And then actually later in the evening my daughter’s home from college and we’re watching this show called All Her Fault. It’s like about a kidnapping and it’s the gal from Succession, you know, the redhead from Succession, she stars in it. And one of the guys from White Lotus season one.

So I do. We like those types of shows. We loved White Lotus, this All Her Fault; I just watched The Beast in Me. So I sometimes can be known to binge some of these Netflix shows, but I do them in the format of about 35 minutes, which is how long my bathtub water stays hot. And then I’m out of time.

Michael Horn: And then you’re out.

Diane Tavenner: There you go. Well, I’m totally cheating today. I’m gonna share a novel that I’m going to read over the holidays, by my favorite living author, Ian McEwan. He has a newish novel out called What We Can Know. And I’m literally counting down the days to the holidays and to being able to crack this one open and savor it. I’ll give you two sentences from the New York Times review that make me excited. Quote: it’s a piece of late career showmanship from an old master. (McEwan is 77.) It gave me so much pleasure, I sometimes felt like laughing. I will report back.

Michael Horn: And you’ll have to report back, because I was going to say, you just quoted the New York Times, which is an item for later. But all right, I’ll wrap with mine, which is, MacKenzie, to your point, we binge watched Four Seasons with Tina Fey and Steve Carell. It’s a Netflix show I hadn’t heard of, an eight episode first season, and there will be a second season based on the cliffhanger at the end. I would say it’s about three couples, roughly in the 50s age group, going through trials and tribulations, and it’s hysterical.

A lot of predictability and yet still very funny as it went through. So we really enjoyed it, and I think we binge watched it in two nights.

MacKenzie Price: Oh, great. That might be our holiday activity too for some time.

Michael Horn: There you go adding to your.

MacKenzie Price: I love that. I love that.

Michael Horn: Awesome. Awesome. Well, MacKenzie, huge thanks and as always, huge thank you to you, all of you, for listening. Keep coming with your questions, comments and all the rest, and we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

]]>
SXSW EDU Cheat Sheet: 26 Sessions for 2026 /article/sxsw-edu-cheat-sheet-26-sessions-for-2026/ Thu, 05 Mar 2026 11:30:00 +0000 /?post_type=article&p=1029429 South by Southwest EDU returns to Austin, Texas, running March 9–12. As always, it’ll offer a huge number of panels, discussions, film screenings, musical performances and workshops exploring education, innovation and the future of schooling.

Keynote speakers this year include Monica J. Sutton, creator and host of the children’s education series Circle Time with Ms. Monica, Yale psychology professor and Happiness Lab podcast host Dr. Laurie Santos, appearing alongside Common Sense Media’s Bruce Reed, and bestselling author Jennifer B. Wallace, whose work centers on the human need to feel valued — and to add value. 


Also featured: former Presidential Science Advisor Arati Prabhakar, who will join a panel on “moonshot” thinking and the future of AI-driven learning. And a new documentary traces the career of longtime Sesame Street star Sonia Manzano.

Artificial intelligence this year plays a bigger role than ever. Dozens of sessions examine AI’s expanding role in classrooms, from adaptive tutoring and authentic assessment to teacher burnout, algorithmic bias and what it means to be literate in an age when machines can write, reason and create.

This year, the Austin Convention Center, which typically hosts the event, is under construction, so sessions will be held at four venues around downtown Austin. Organizers are also planning a “SXSW EDU Clubhouse” at a historic venue, which will host daily performances, keynote livestreams and nightly social events.

Because of the event’s multiple venues, space may be limited, so organizers recommend booking reservations for keynotes, featured sessions and workshops. They’ve provided a guide with details.

To help guide attendees, we’ve scoured the 2026 schedule to highlight 26 of the most significant presenters, topics and panels:

Monday, March 9:

9 a.m. — : Researchers, district leaders and family engagement specialists examine the chronic absenteeism epidemic that has left millions of American students disconnected from school since the COVID pandemic. This panel presents the latest data on what is actually driving absenteeism — from housing instability and health crises to school climate and whether students feel they matter. It’ll explore which interventions are producing genuine, sustained improvement.

11 a.m. — : This panel presents evidence that score inflation on standardized tests, state-level proficiency standards and the federal retreat from accountability are making it harder than ever for families to get an accurate picture of their child’s true academic standing — and what policymakers can do about it.

1:30 p.m. — : This Opening Keynote features Monica J. Sutton, educator, entrepreneur and creator of Circle Time with Ms. Monica, who traces her journey from preschool classroom to digital learning spaces reaching millions of families worldwide. Sutton challenges educators to evaluate every innovation through a developmental lens, asking: Does this technology honor how young children learn, grow and thrive, while protecting curiosity and connection?

2 p.m. — : What do real students think about AI? How do they want to learn about it? This session, by MIT Media Lab’s Jaleesa Trapp and LEGO Education’s Jenny Nash, explores strategies for building AI literacy through hands-on computer science that fosters critical thinking and ensures safe, responsible AI use.

2 p.m. — : Civics teachers, researchers and policy advocates will examine how teachers are navigating the nearly impossible task of teaching democracy, elections and civic participation in classrooms where students and families often hold deeply opposed political views. The panel shares new findings from America’s Promise Alliance’s State of Young People research and explores strategies for creating classrooms where hard but evidence-based conversations happen productively — and where students develop the civic skills needed to participate in and repair a fractured democratic system.

4 p.m. — : Child development experts offer a science-backed framework for evaluating AI for young learners without compromising the play, exploration and human attachment that are foundational to healthy development. This session offers an “urgent exploration” of AI’s impact on brain architecture and what educators, parents and policymakers must know to protect young minds.

4 p.m. — : A panel of educators explores the causes of low student engagement, absenteeism and cheating, sharing classroom-tested solutions for creating assignments that are cheat-resistant by design. Rather than relying on cheat-detection software and pedagogy that punishes students for cheating, panelists will share how to foster a culture of academic integrity based on student agency, purpose and ownership of learning.

4 p.m. — : In this featured panel, Rep. Jim McGovern (D-Mass.), Chef Ann Foundation CEO Mara Fleishman, University of Pennsylvania student Maya Miller and Duke World Food Policy Center Director Norbert Wilson make an evidence-based case that school nutrition is an educational issue, not merely a logistical one. Panelists connect chronic hunger and poor nutrition directly to cognitive function, attendance, behavior and academic performance, and present district-level models that have transformed school meals into assets for learning.

Tuesday, March 10:

9 a.m. — : This featured session stars Roya Mahboob, CEO of the Digital Citizen Fund, who will draw on her experience growing up in Afghanistan to trace how exclusion compounds across the pipeline from K–12 classrooms to corporate boardrooms. Mahboob offers evidence-based interventions that have demonstrated real impact on girls’ participation and persistence in tech, as well as a vision for education that is inclusive, practical and full of possibility.

9 a.m. — : A candid discussion on the science, ethical considerations and implementation challenges of using Voice AI for assessment in K–12 classrooms. Learn what’s promising, what’s problematic and what’s on the horizon as experts explore how Voice AI differs from other AI tools such as large language models (LLMs), and how it can be integrated in ways that truly support students and educators.

12:30 p.m. — : In this keynote, Bruce Reed, Head of AI at Common Sense Media, and Dr. Laurie Santos, Yale psychology professor and host of The Happiness Lab podcast, examine how rapidly evolving AI technologies and social media are shaping young people’s mental health — and how families, educators and policymakers can respond. They explore the science of well-being, the risks of algorithm-driven systems and common-sense guardrails to protect young minds. 

2 p.m. — : This panel challenges the deficit framing that has long defined how schools, families and students themselves understand dyslexia. In an interactive session, a think tank-style panel will present a strength-based model of dyslexia support and examine how AI tools are beginning to unlock academic access for students whose abilities have been systematically undervalued.

3 p.m. — : Director Anna Toomey’s feature documentary tells the story of five mothers determined to establish the first public school in New York City for children with dyslexia. Toomey follows their battle to open the South Bronx Literacy Academy, addressing a learning disability that affects about 20% of the public. A post-screening discussion connects the film’s themes to national debates about reading instruction and equitable access.

4 p.m. — : As chronic absenteeism reaches historic highs, schools are doubling down on academics, interventions and incentives. But they may be missing underlying emotional and psychological factors driving absenteeism: stress, anxiety and lack of belonging. This session looks at how rest, youth voice/choice and emotionally safe environments can re-engage students.

5:30 p.m. — : Director Ernie Bustamante’s feature-length documentary offers a portrait of Sonia Manzano, the trailblazing actress who played Maria on Sesame Street for 44 years. A conversation with Manzano herself follows the screening, exploring how public media can reach children when formal schooling often fails, and what Sesame Street’s legacy means in the age of AI-generated children’s content.

Wednesday, March 11:

10 a.m. — : This performance offers an early look at a show in development that began as a teacher performance at a school meeting. In this Hamilton-meets-The Sound of Music-meets-Good Night and Good Luck story, set against today’s culture wars, three high school students and their teachers navigate questions of identity, purpose and what school can and cannot teach. A Q&A with Peter Nilsson, the show’s creator, follows the performance.

11 a.m. — : This solo session by Toby Fischer, an Ohio educator, offers a sweeping reimagination of literacy for the 21st century, arguing that reading and writing instruction must now encompass the ability to critically evaluate AI-generated text, recognize the hallmarks of synthetic content, prompt AI systems effectively and to understand the social and ethical contexts in which AI-generated language circulates.

12:30 p.m. — : This keynote by Adeel Khan, Founder & CEO of MagicSchool AI, makes the case that teacher expertise, relationships and professional judgment must guide technological change. Drawing on his experience building the popular platform, Khan will share unfiltered insights on what’s working and what’s not, offering a framework for evaluating AI tools through the lens of educator agency.  

2 p.m. — : This panel examines why so many school AI initiatives rely on tools that “just aren’t there yet.” Panelists share case studies of implementations that stumbled, the lessons of those failures and the educator-driven, grassroots efforts that can move schools from dabbling with AI tools to using them for real instructional transformation. 

Thursday, March 12:

10 a.m. — : This featured panel convenes former Presidential Science Advisor Arati Prabhakar, Renaissance Philanthropy President Kumar Garg, Carnegie Learning VP of R&D Jamie Sterling and Bezos Family Foundation Chief of Staff Eden Xenakis to explore how bold learning goals can accelerate AI-driven innovation in education. They’ll examine how “moonshot-centered” models can rally diverse innovators around a shared outcome and catalyze the funding needed to scale breakthroughs.

10 a.m. — : Dubbed the “toolbelt generation,” more than half of Gen Z respondents in a recent survey said they’re considering a skilled trade career. And schools are working to modernize career preparation, including by tapping immersive technology to expose students to in-demand skilled trades. This panel, moderated by The 74’s Greg Toppo, will discuss how we can harness tech to engage students in learning while preparing them to successfully meet workforce demands.

11:30 a.m. — : This session offers a ground-level counternarrative to AI anxiety, presenting a community college and workforce development partnership in Cleveland that is using AI-powered tools and training to open new economic pathways for adults who were left behind by earlier rounds of technological change. Speakers will examine what equitable AI adoption looks like in a post-industrial city and what conditions made the initiative work.

11:30 a.m. — : Leaders from higher education, industry and workforce policy examine whether universities are structured to produce graduates who can thrive in a labor market being remade by AI. The panel will ask which degrees and credential pathways are producing AI-ready graduates, where institutions are falling behind, and what structural changes will move the needle most.

11:30 a.m. — : Directed by Scott Barnett, this feature-length documentary follows bestselling author James Patterson to the front lines of America’s reading crisis to examine how the Science of Reading — a vast body of evidence-based research — is changing how children are taught to read. A post-screening discussion with literacy researchers and classroom teachers will examine what the film gets right and what systemic change will actually require.

2 p.m. — : This workshop, conducted by two top officials with the Illinois-based Education Research and Development Institute, will offer practical AI tools that automate routine tasks, generate content, analyze data and simplify communication, freeing teachers to focus on students and strategy and reducing the risk of burnout.

2:30 p.m. — : This featured panel, with Martin McKay of Everway, Hello Sunshine CEO Maureen Polo and the Brookings Institution’s Rebecca Winthrop, draws on a landmark report spanning 50 countries to explore what it means to protect children’s cognitive, social and emotional development in an AI-saturated world. Speakers will move beyond the question of whether AI should be used in schools to ask how it can be designed to strengthen young people’s capacity to think, relate and thrive.

]]>
Two New Reports Urge ‘Human-Centered’ School AI Adoption /article/two-new-reports-urge-human-centered-school-ai-adoption/ Tue, 03 Mar 2026 11:30:00 +0000 /?post_type=article&p=1029371 Two new reports caution that if schools make missteps implementing AI, the results could haunt them for years, locking them into a future largely written by big tech instead of those closest to kids.

The reports, both the results of small, intensive gatherings of educators, policymakers, researchers, tech officials and students last year, share a common warning: AI in schools must serve human-centered learning that doesn’t simply push for more efficiency. To do anything else risks creating a generation of young people ill-equipped for the future.


The findings come as young people say they’re turning to generative AI more than ever: A Pew Research Center survey released last week found that more than half of teens ages 13 to 17 use chatbots to search for information or get help with schoolwork. About four in ten report using AI to summarize articles, books or videos, or to create or edit images or videos. And about one in five say they use chatbots to get news.

For the first report, a group of 18 people met in July in Phoenix. The group, brought together by AI for Education, a training and policy organization, and a digital curriculum company, produced a report that treats the question of how schools should view AI as a literal “Choose-Your-Own-Adventure” story: The authors lay out three possible scenarios in which educators in an imaginary school district make radically different decisions about the technology.

In the first scenario, the district retreats from AI altogether after a data breach, abandoning a previously created “Innovation Lab,” while teachers return to traditional instruction and testing.

The restrictions soon backfire. Students continue using AI at home, but without guidance, take shortcuts on homework, developing a kind of survival mechanism they privately call “school brain.” Seeing how irrelevant most lessons are, they do just enough to get by, offloading thinking to AI tools. When tested, they show shallow understanding and poor foundational skills.

Test scores plummet, college acceptances drop and 40% of graduates land on academic probation. Employers report that graduates can neither work independently nor collaborate effectively with AI. Teachers begin departing in waves.

Retreating from AI, the authors find, creates “the worst of both worlds” — students who can neither think independently nor use AI effectively.

In the second scenario, the district, facing competition from AI-driven private schools, goes all-in, adopting a comprehensive, district-wide AI platform for automated instruction. The platform promises greater efficiency via AI tutors, automated grading and behavioral monitoring. And while it initially lowers costs and produces higher test scores, teachers find that students are soon gaming the algorithms rather than learning. The auto-grader penalizes valid but unconventional answers, and multilingual learners are unfairly marked down for non-standard responses on tests.

Teachers find themselves defending grades they didn’t assign and can’t fully explain, while families that challenge grades are stopped by “proprietary algorithms” that even administrators can’t review. The system delivers “a black box” that removes human judgment: “Students could feel the difference between being evaluated by an algorithm and being understood by a teacher.”

Before long, graduates struggle with collaboration, creativity and adaptability — skills employers and colleges increasingly value.

In the report’s third choice, the district, via its Innovation Lab, redesigns its offerings to prepare students for an AI-driven future while keeping a focus on “human-centered” education. Rather than focusing solely on technology, it develops a “graduate profile” that emphasizes critical thinking, ethical reasoning and human-AI collaboration, among other indicators.

The lab shifts to flexible, project-based learning, and students soon learn to use AI as a tool that supports but doesn’t replace their thinking. While the district continues to satisfy state accountability through testing, it also pursues federal innovation grants to fund portfolio-based assessment systems based on the graduate profile.

All is not rosy, though. The redesign is expensive and hard on teachers. Enrollment suffers as political resistance gains steam. But graduates soon demonstrate an ability to critically evaluate AI tools, adapt quickly to workplace changes and develop a “learn how to learn” mindset that serves them in the long term.

Alumni soon report that their “robust” portfolios of work are a huge advantage in competitive job markets, and employers say they are the only new hires who critically evaluate AI’s recommendations, spotting hallucinations and biases.

Amanda Bickerstaff, AI for Education’s co-founder and CEO, said the first two scenarios are what educators at the July convening said they were seeing most often in schools.

“There was a strong recognition from everyone, including the students, the two high schoolers, that the traditional methods have not worked … for decades,” she said. “But it feels safer.”

As for going “all in” on AI, she said, that point of view is inevitable in many places, given the aggressive efforts of tech giants like Google that are “pushing into schools,” going direct to students.

“There’s this real pressure from both ed tech and AI itself, because it’s such a big market that’s never really been figured out,” she said.


What makes it worse is that few tech firms employ enough teachers to ensure that their products work well for students. “They don’t have hundreds of education people,” Bickerstaff said. Their education teams are “fractions of their headcount, working on tools that are instantly in students’ hands.”

The third path, in which the district redesigns its offerings, is “the most human” of the three, she said, and the most intentional. “The third path is the one that trusts humans and educators and students and families,” Bickerstaff said.

‘Explicitly ambidextrous’ schooling

The second report, by the Center on Reinventing Public Education (CRPE), a think tank at Arizona State University, also calls for a new approach to schools’ decisions about AI, saying the technology “should be a catalyst for human-centered learning, not a replacement.”

The CRPE report, the result of another gathering in November, asserts that schools are at a pivotal moment. Their AI policies could go one of two ways: They can either entrench outdated educational models or help bring about a fundamental transformation of schooling.

“One of the big things that came out of those discussions was a strong feeling among the group that AI is currently being thought of as a productivity tool for the education system that we have, rather than a tool to radically improve teaching and learning and outcomes for kids,” said Robin Lake, CRPE’s executive director.

During its meeting, the group repeatedly discussed an “efficiency paradox” that could make schools faster and cheaper without addressing students’ actual needs. To protect against it, they call for a more coherent, human-centered approach that is “explicitly ambidextrous,” improving current practices while intentionally building toward new learning models.

The problem with AI, the report argues, is that it could simply improve the efficiency of outdated educational models. It notes that the Scantron, a time-saving testing technology, for decades reinforced low-level standardized assessments, often at the expense of improved learning.

Instead of using AI as a new kind of Scantron, it says, AI could make way for several innovations, including new assessments that capture real-time performance as students work. It could even measure key non-academic indicators such as belonging, confidence, curiosity and relationship quality.


Lake said the report’s idea of an “ambidextrous” approach to AI came from an acknowledgement by the group that “we have to attend to the kids who are in our schools right now — and the teachers,” she said. “We have to use whatever technologies are available to make things better, but we also have to make investments in big, really different whole-school designs.”

Those could include not just better assessments but ways to help teachers provide “rigorous personalization grounded in the science of learning.”

Districts could create classrooms with multiple adults working in teams based on their expertise. And AI could enable schools to match students to internships and other experiences, handling administrative tasks so humans can focus on relationships.

Lake said the group that met in November kept coming back to one idea: Keeping an eye on both the future of school and the reality of the schools we already have.

“A lot of times when we have these conversations about AI and the future of schooling, it feels very floaty and abstract,” she said. “So I really appreciated that the fellows had a vision to connect the here-and-now to what kids need to know and [should] be able to do in the future. That feels really important for us all right now.”

]]>
Exclusive: New Google Partnership a ‘Sizable Investment’ in AI for Teachers /article/exclusive-new-google-partnership-a-sizable-investment-in-ai-for-teachers/ Mon, 23 Feb 2026 12:01:00 +0000 /?post_type=article&p=1028964 A top professional organization for teachers has inked a three-year deal with Google to offer AI training to “all six million K-12 teachers and higher education faculty” in the U.S., an audacious undertaking by the tech giant that could reach millions of students and dwarf previous tech forays into education.

“While Google’s been offering educational products for 20 years, this is a different moment for us,” said Chris Phillips, Google’s vice president and general manager of education.


He called the effort the largest for Google in two decades of working with teachers and students. Phillips didn’t immediately offer a price tag, but said it’s “a sizable investment.”


The training, offered through the ed tech-focused group ISTE+ASCD, will include hands-on experience with Google’s Gemini and NotebookLM tools, offering certificates and digital badges.

“We have just heard so much feedback from teachers that are just saying, ‘We are not prepared,’” said Richard Culatta, ISTE+ASCD’s CEO. “‘We don’t have the training, we don’t have the background that we need for the realities of teaching in an AI world, both teaching in the classroom and also, secondarily, but equally as important, preparing students for the world that they’re going to be in.’”

It’s the latest in a series of large-scale teacher training initiatives over the past few months. In July, the American Federation of Teachers, the nation’s second-largest teachers union, announced its own $23 million AI training academy, partnering with Microsoft, OpenAI, and Anthropic to train up to 400,000 educators.

At the time, AFT President Randi Weingarten said the academy was a way to ensure that teachers, not technology, remain in control of the classroom.

But AFT’s partnership with OpenAI and Anthropic drew sharp criticism from educators and researchers, who questioned whether tech companies with products to sell and market share to protect are the right architects for teacher training. Education technology critic Audrey Watters called AFT’s academy “a gigantic public experiment that no one has asked for,” while ed tech analyst Alex Sarlin said tech companies were in a “land-grab moment.” 

Microsoft has also launched its own community-based platform, Microsoft Elevate for Educators, offering free courses, live training sessions and credentials. 

Google itself in 2024 committed $25 million through its philanthropic arm to several nonprofits, including ISTE+ASCD, 4-H, and aiEDU, with particular attention to reaching underserved communities. Its goal at the time was to reach more than half a million K-12 and college students, as well as educators.

ISTE+ASCD — the group is a combination of two that merged in 2023 — was the beneficiary of $10 million of the $25 million, saying it would collaborate with several other groups, including the National Education Association and the Computer Science Teachers Association.

Though Google has its own AI platform, Culatta insisted that the work won’t be about pushing specific tools, saying that kids need enduring AI skills as the tools change. 


In 2023 ISTE+ASCD introduced its own AI chatbot built on educator-focused content and trained solely on materials developed or approved by the organization. The chatbot tapped into curated databases in a bid to give teachers routine access to high-quality research.

In some ways, efforts like those of AFT and others reflect a lack of leadership at the federal level. The Trump administration has backed efforts to expand AI in schools, but last spring it also eliminated the Office of Educational Technology, which had long focused on expanding access to technology.

Culatta, who ran the office under President Obama, said it’s important that organizations like ISTE+ASCD “step up when there are key needs that may not be filled at the federal level. And we just want to make sure that, regardless of where we would like some things to happen, at this point we just have to do all-hands-on-deck and make sure we’re supporting kids and teachers.”

‘Massive undertaking’ or waste of time?

The sheer scale of Monday’s announcement underscores how urgently educators see the need to learn about AI: RAND Corp. last spring found that the share of school districts training teachers on AI more than doubled from 2023 to 2024, from 23% to 48%. Researchers predicted that as many as three-fourths of districts would be in the AI training business by the end of 2025.

Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, said the new partnership is “a massive undertaking that is urgently needed right now. I hope it includes a research component so we can learn from it because much more is needed.”

Google’s Phillips said the company has “multiple arms of research happening all around the world” and “will start to produce some of those and share them publicly where we’re doing studies” in classrooms.

“We’ll see how the results land, but ultimately we want to improve learning outcomes,” he said. “We want to help change. We want to bend the curves on proficiency.”


Lake, who has long urged schools to take AI readiness seriously, said school principals, district leaders and teachers-in-training “also need to be AI literate, as do students and families. We can’t rely only on private companies with an interest in AI products to fund and lead AI readiness.”

Others were more sharply critical of the new partnership.

Justin Reich, an associate professor of digital media at MIT and a podcast host, said industry-sponsored professional development is, at its core, a “customer acquisition” campaign. Since ISTE+ASCD is historically both a membership-driven teacher organization and an industry trade association, he asked, “How can it be an honest broker to those two constituencies, while also launching an enormous initiative that privileges the products of one particular vendor?”

Google’s past educator certification programs, he said, “focused more on tool use and adoption than on learning,” with no substantive evidence that improved student outcomes followed.

Phillips said Google’s research is ongoing, but noted that one of its apps allows students to self-pace lessons. “Where they struggle, they can dive deeper and learn more and get more up-to-date,” he said. Among several unpublished findings, Phillips said, is that students spend more time on topics they’re struggling with and end up learning those topics more deeply.

Culatta admitted that Google would of course like to see its products in the hands of teachers. But he said he and his colleagues “want to make sure that if there are products going to schools — and they already are — that they’re being used in ways that are really impactful.”

He added, “If it was going to just be, ‘Here’s how to use Gemini,’ Google actually doesn’t need us. We are coming in because Google is looking for somebody who can say, ‘What are really the best practices for learning with AI, not necessarily learning about AI?’”

Google’s Phillips said teachers and students “can choose other products in the market and so forth, but this program does come with using our products so that we can help teachers really get started, get going.” 

He noted a “super-generous free tier” to make the tools widely accessible, along with training to use them. “But schools, districts, teachers themselves have choice, and I think that’s perfectly fine, but we want to play a role with not just providing tools, giving people access, but actually helping them apply it and use it” to jumpstart “safe, appropriate use of AI.”


MIT’s Reich said his deeper concern is what he said is the near-total absence of evidence underlying AI professional development, either to teach educators how to use AI in their classrooms or simply to teach them how AI and large language models work.

“Literally no one on the planet understands how [AI] works,” he said. “The best computer scientists in the world cannot explain why LLMs generate plausible sounding text in a convincing theoretical framework.”

Reich recounted asking engineers at a Google DeepMind event in November whether they knew how to train junior engineers to use AI tools effectively in their work. “Every single person I talked to said, ‘No,’” he said. “If Google doesn’t know how to effectively use AI to write code, what is this business about teaching people AI literacy? We just don’t know.”

Benjamin Riley, a well-known AI skeptic who founded an education think tank, was more blunt, casting the Google partnership as part of an ongoing process of making ISTE+ASCD a “shill” for Big Tech.

“I admit I’m fascinated to see the major Big Tech companies competing so vigorously to control ‘the education market,’” Riley said. “OpenAI is giving away their premium model to teachers (until they won’t), and now Google is doing whatever this is.”


In the past, Riley has questioned whether teaching students and teachers skills such as “AI literacy” and “AI readiness” is effective, even as many others warn that such skills will be essential.

“I guess I’d credit their clairvoyance a tad more if ISTE+ASCD had not claimed, as recently as just a few years ago, that ‘the future’ would also demand that everyone . Oops!”

Riley, who also founded a cognitive science advocacy and research group, predicted that much of the training will end up wasting teachers’ time, Google’s money and ISTE+ASCD’s relevance.

“Human beings have evolved to learn from each other in the context of our relationships. This is the superpower of our species, and the kids who’ve grown up in the past 20 years are increasingly disgusted by what tech has done to them personally, and society more broadly. They are not happy about the world we’ve given them, and their voices are growing ever louder.”

Culatta, for his part, said AI “is not going away. Does learning happen with people connected with each other? Sure. It’s not the only way learning happens, but it’s a very important way. And we actually think AI can help make those human-to-human learning experiences much better.”

Opinion: America Is About to Be Graded on AI Literacy. We Are Not Prepared. /article/america-is-about-to-be-graded-on-ai-literacy-we-are-not-prepared/ Sat, 21 Feb 2026 11:30:00 +0000 /?post_type=article&p=1028727 In 2029, a global spotlight will turn to how well U.S. students are prepared to understand and use artificial intelligence. For the first time, the Programme for International Student Assessment, or PISA, will treat AI literacy as a core competency, assessing it alongside reading, math and science.

That is not an abstract milestone for researchers or policy circles. PISA is a premier scoreboard used globally to compare how well countries are preparing young people for the future. When AI literacy becomes part of that scoreboard, it will send a clear message about who’s ready and who’s not.

The warning signs are already there. The latest PISA results place U.S. students at roughly 28th in mathematics, 6th in reading, and 10th in science among peer nations. Taken together, those rankings paint an uncomfortable picture. By international standards, the United States is already falling behind in areas that will define economic competitiveness in the years ahead.




Based on my experience as a former state commissioner of K-12 education, America is not anywhere near ready to top this list when it comes to AI literacy. If we stay on this trajectory, we may not even make the top 30. Are we ready for this level of embarrassment on the global stage for a technology we largely created?

The problem is not that we lack innovation. Innovation is part of our national identity. The creation of transformational tools is woven into our nation’s history, and AI may prove to be the most revolutionary technology yet. The real problem is that we are not urgently preparing ourselves for the changes AI will bring. At this time, America has no real plan to prepare all our students and educators with anything close to the consistency and urgency this moment requires.

Our country’s patchwork system of state-led educational approaches and requirements is a big reason why. A student’s experience with advanced technology like AI depends largely on their ZIP code, their school district and whether educators have been given the training and support to teach this material well. In some schools, teachers are moving forward with thoughtfulness and energy. In others, staff are frozen by uncertainty, lack of training, or fear about what could go wrong. Many districts still have no clear guidance at all.

Local control has long been one of America’s strengths. But in this case, local control may be becoming a liability. When it comes to AI literacy, our system is both inefficient and inequitable. It means some students will graduate fluent in the most consequential technology of their generation, while others will be left to their own devices. In the future of work, that gap will matter.

I do not believe AI will replace teachers. Teaching is built on human relationships, trust and the ability to motivate young people. But I do believe people with AI skills will replace those without AI skills. Industries will shift. Some jobs will disappear, others will emerge, but one thing is clear: The students who can use AI responsibly and effectively will have a distinct advantage in the future economy.

That is why AI literacy is not a luxury. It is both an economic issue and an equity one.

So what should we do, and why now?

Let’s use the 2029 PISA timeline as a collective spark to give our kids the best opportunity anywhere in the world. Three years is not a lot of time in education. Curriculum adoption takes time. Teacher professional development takes time. Building sensible policies takes time. Let’s embrace this moment in time to instill urgency in everything we do. 

It’s time to get off the path we too often follow in education: scramble, improvise and widen the very gaps we claim to care about closing. Instead, let’s work together to develop a true national AI literacy framework, paired with a basic shared approach to assessing progress.

That does not mean federalizing classrooms or punishing schools. A national framework is about consistency and responsibility. It ensures every student learns the fundamentals, regardless of where they live, and it helps educators know what good looks like across grade levels.

AI literacy also needs to be defined clearly. Young people must understand what AI is and what it is not. It is not a human. It is a prediction machine. That distinction matters, especially now that many students are interacting with AI companions. Some of those tools have already been linked to serious harm. Kids deserve straightforward education that helps them navigate this technology safely.

If that sounds like a lot to teach, it is. But we’ve done something similar before with other powerful tools, like computers in classrooms and use of the internet. Those things helped us be more efficient, and more importantly, they helped educators focus on the critical job of teaching.

This is critical, because we must also provide support for our educators if we expect students to be ready for the 2029 PISA test. AI has real potential to improve teaching and learning, but only if educators are trained and given clear guidance on how to use it responsibly and effectively. Without that preparation, we cannot expect consistent outcomes for students.

The same is true for families. Students’ use of AI does not stop at the schoolhouse door, and parents need the tools and understanding to support responsible use at home. Schools and families must be aligned if students are going to develop the skills and judgment this technology demands.

The encouraging news is that this should be common ground. Regardless of politics or geography, we share a responsibility to prepare young people for the world they are entering. What’s needed now is a shared national commitment to AI literacy that creates urgency around implementation and ensures that by 2029, students and educators alike are prepared, confident, and competitive on a global stage.

America invented this moment. Now we need to teach our children how to lead in it.

AI Optimization’s Impact on Use of Time, Space and Resources in Schools /article/ai-optimizations-impact-on-use-of-time-space-and-resources-in-schools/ Wed, 18 Feb 2026 17:30:00 +0000 /?post_type=article&p=1028633 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing wherever you listen to podcasts.

Imagine being able to build a master school schedule in 30 minutes.




On this episode of Class Disrupted, Paymon Rouhanifard, CEO of Timely, joins Diane Tavenner and Michael Horn to explore how AI-powered optimization is transforming a complex challenge in K–12 education: the master schedule. The conversation touches on the critical role that master schedules play in shaping student experiences, resource allocation and district priorities. Rouhanifard explains how Timely identified a pain point schools face with traditional scheduling methods and applied an AI-driven approach that saves hundreds of hours while enabling systemic change and better use of resources. 

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, this is Diane, and you’re about to listen to an interview that Michael and I had with my friend Paymon Rouhanifard, who is the CEO of Timely, which is a company that’s helping schools figure out how to do their master schedules in a way that’s aligned with their values and what they’re trying to do to support their young people. And I love this interview. I think it’s so fun for us to really talk with someone who deeply understands schools and how they work and the operations of them and what’s going on and who is really trying to add value using AI in a way that feels very concrete and specific. And I just think you’re really going to enjoy Paymon’s thoughtfulness and his deep understanding of education and this really specific application of how AI is being used in education. 

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you. And I’m truly excited for today’s guest, someone we both know pretty well, who has been doing some very interesting work, some of the early innings of which I got to see up close because his company was incubated as part of Workshop Venture Partners, where I’m an advisor. And like Laurence Holt, he’s been on my Substack before. So I’m excited for this conversation to dive a little bit deeper into what he’s doing and how it interfaces with AI.

Diane Tavenner: I agree, Michael. I’m excited to have Paymon on our podcast. We met when Paymon was leading Camden and I was leading Summit. And it’s interesting because, fortunately for me, it was in a learning space where we met, and I did a lot of learning from Paymon and with Paymon. Well, I’ll speak for myself: I did a lot of learning, and I feel really grateful that he’s here with us today. So let me tell those who don’t know a little bit about who Paymon is. He’s the co-founder and CEO of Timely, an education technology company that helps schools build better master schedules through AI optimization. Prior to that, he was the co-founder of Propel, which offers tuition-free health care job training, and he is currently the chair of its board of directors. And as I said, he was the superintendent of Camden City Schools in New Jersey, among other roles in public education, from teacher to administrator.

And so Paymon, welcome. We’re so happy to have you here.

Paymon Rouhanifard: It’s really great to be here. Thanks for having me.

Michael Horn: Well, so I’m excited. But let’s level-set with our audience and start at a high level: just help us understand exactly what Timely does and what problem it’s solving for school districts and school systems.

AI-Powered School Scheduling Support

Paymon Rouhanifard: Well, as Diane just mentioned, we help middle and high schools build their master schedule using AI optimization, with dedicated support from a team of former educators who have built schedules before, and also support with data integration. And I think to really understand our work, you have to understand the importance of the master schedule. There are sort of two parts to it. Part one: every school in the country, including elementaries, although they have slightly less complicated schedules, has to build its master schedule every year, typically in the spring for the following fall semester. And it is an incredibly painful exercise at the school level, where folks have historically been using really clunky tools. The second part is the opportunity for systemic change and the connection to the central office: to think about resource allocation more strategically, to think about priorities more strategically. So there are those two components to it. But tactically, what we do is help middle and high schools build their master schedule.

That is a painkiller at the school level. And again, can kind of enable key priorities at the central office.

Diane Tavenner: That’s awesome. Paymon, I’m going to disclose something weird here: I am a fanatic about the master schedule. When I used to build the master schedule, I was like a lunatic around it. So I’m actually very nerdy and excited about what you do. And one of my concerns is that most people, when they talk about AI in education, the only image they have in their mind is literally a chatbot, you know, mostly focused on the students, or used by the teachers in the classroom. And as Michael and I are shifting our conversation from big-picture AI to actual practitioners and the usage of AI in education, I really wanted to talk with you, and I’m glad we’re doing it first, because you’re working on the system of school, if you will, and your instance of AI is not a student directly interfacing with it, but it has a massive impact on the student’s experience.

Because literally the master schedule is everything. I don’t think people realize that. It is the infrastructure that controls almost everything. And when you’re in a district and you realize that, you realize all the power is in the master schedule. Right? So, you said it’s a pain point for schools, but paint that pain point a little bit more for us. What problem were you setting out to solve for them? Yes, it’s laborious and kind of hard, but how does solving this lead us in a direction that you believe in for schools?

Master Scheduling: The Complex Puzzle

Paymon Rouhanifard: You know, we often say that those who know about master scheduling really know, and Diane, I really appreciate that you’ve been in the guts of it in your prior lives. If you were to ever talk to an assistant principal at a middle or high school and ask them about the master schedule, their eyes will widen and then they’ll have a lot to say. Typically mostly horror stories about how hard it is and how they lock themselves in a room every spring and don’t leave that room for weeks, and they’re bruised and battered and they have a final master schedule. And the reason for that is the schedule is just a really complicated puzzle to put together: what courses you’re going to offer; what courses students need to graduate, depending upon the graduation trajectory they’re on; what credentials teachers have and what courses they’ll be teaching; what rooms are available; what other constraints apply in terms of collective bargaining and consecutive periods taught. And then there are course requests from students, which might be the most fundamental input to solving this equation. It is a lot of different variables. And folks are using tools such as Google Sheets, whiteboards, sticky notes.

We’ve seen giant Magna Tile boards with a lot of our district and charter partners, and we take pictures of them and save them for posterity. So that’s, again, speaking at the school level, what a painful exercise it is to put together a really complicated puzzle that is fundamentally a math problem to solve: a mixed-integer linear programming problem.

Diane Tavenner: Yeah.

Paymon Rouhanifard: And to your point for the systems level, that’s, I think, where it gets really interesting and I suspect a thread you may want to pull on.

Diane Tavenner: Definitely. Let’s start a little bit with just understanding more. You just said it’s a math problem, which means now we’re getting into AI. I don’t know that everyone realizes that AI is really mathematical in many ways, but help us understand where the AI is in Timely. And, you know, do people in the schools even realize that you’re using AI?

AI Optimization Over Generative AI

Paymon Rouhanifard: Yeah. I would say, because of the AI boom that we’re in, a lot of folks understandably believe that we use generative AI, but we don’t. We use AI optimization technology, which falls under the broader banner of AI and machine learning. The reason for that, and implicit in the question you just asked, is that large language models, being predicated on words, are not really good at math. I think we’ve all seen stories of large language models struggling with basic math and hallucinating. And we’re solving a really, really complicated math problem. So we are training off of a local set of data, using AI optimization to do that heavy lifting, to ultimately solve that math problem school by school. We don’t use a chatbot; instead, we think about it as a series of inputs in terms of course data, student course requests, and staff and room information, and then you layer on a number of constraints. We’re telling the AI optimization engine, “here are the things that we know have to be fixed. This teacher needs a prep in 8th period. This common planning period needs to be at the start of the day,” whatever it may be. There’s a million different examples of that. And once you enter those constraints, you push a button and it solves that math problem for you.
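For readers curious what "inputs plus constraints, then push a button" looks like in miniature, here is a toy sketch in Python. This is not Timely's engine, which, as described above, uses AI optimization on real school data at far larger scale; it is just a brute-force search over hypothetical courses, teachers and periods that illustrates the same framing: encode the hard constraints, then search for an assignment that satisfies them all.

```python
from itertools import product

# Toy constraint-satisfaction sketch of master scheduling.
# All course names, teacher names, and constraints are hypothetical.
courses = {            # course -> teacher who teaches it
    "Algebra": "Lee",
    "Biology": "Kim",
    "History": "Lee",
    "English": "Kim",
}
periods = [1, 2, 3]

def satisfies_constraints(assignment):
    """assignment maps course -> period; return True if all hard constraints hold."""
    # Constraint 1: a teacher cannot teach two courses in the same period.
    seen = set()
    for course, period in assignment.items():
        key = (courses[course], period)
        if key in seen:
            return False
        seen.add(key)
    # Constraint 2: teacher Lee needs period 3 free as a prep period.
    for course, period in assignment.items():
        if courses[course] == "Lee" and period == 3:
            return False
    return True

def solve():
    # Enumerate every way to place each course into a period
    # (real systems use mixed-integer solvers instead of brute force).
    for choice in product(periods, repeat=len(courses)):
        assignment = dict(zip(courses, choice))
        if satisfies_constraints(assignment):
            return assignment
    return None

schedule = solve()
print(schedule)  # → {'Algebra': 1, 'Biology': 1, 'History': 2, 'English': 2}
```

With four courses and three periods the search space is tiny; a real school has hundreds of sections, rooms, certifications and student course requests, which is why the problem calls for dedicated optimization rather than spreadsheets.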

Diane Tavenner: And I’m assuming it’s also pretty quick, Paymon, because I remember, oh my gosh, back in the day, this would have been in the early, early 2000s, I had a computer program that did this, but literally I could only run it a few times because it would sometimes take 24 hours to run. I would have to put my stuff in there, hit a button, then go away for 24 hours and cross my fingers that something would come back. I’ve also done the Post-it Notes on the board too. If you haven’t done this, you don’t appreciate how insane this process really is. And oh, by the way, everyone’s mad at you when you’re done, because you never did it right.

Paymon Rouhanifard: Yeah, you can’t please everyone with the master schedule. And we obviously like to track a lot of data; we’re an outcomes-oriented organization. What we see is that, on average, folks are spending hundreds of hours building that master schedule. There’s a process in terms of onboarding, ingesting data and setting those constraints, and that takes a little bit of time. And then you push the button, and sometimes the schedule is built in 30 minutes, sometimes it takes a couple of hours.

Diane Tavenner: Yeah.

Paymon Rouhanifard: And in the grand scheme of things, you’re saving hundreds of hours. And so at the school level, it really does create that, that sort of time efficiency.

Michael Horn: So I want you to double-click a little bit more on why this wasn’t possible previously, because you mentioned Google Sheets, and Diane mentioned the software program that would take 24 hours to run. We know that there have been a few startups in the master schedule space; maybe a decade ago, I think there were a couple that got funded. What is different about this moment, where you could use this AI optimization that wasn’t possible, say, five or 10 years earlier, that you’re able to take a process that took hundreds of hours down to 30 minutes of output and then, I imagine, some iteration?

Improving Clunky School Scheduling Tools

Paymon Rouhanifard: Well, I would say there are two things happening here. Certainly the technology, as we all know, has gotten better over time, and even in the last two to three years, significantly so. But I would also add that we are a focused solution. When you think about the status quo and what we’re disrupting, for the most part people are using these clunky tools because the solution they purchased to solve their master schedule is the student information system. For the student information system, the scheduling module is one of many, many different things it does. If you talk to any superintendent, assistant superintendent or head of a charter school about their student information system, usually they tell you it’s clunky, it’s hard to use, it’s a necessary evil, it’s the repository of data, it’s the source of truth. And then it has an attendance tracker and a grade book tool and a master scheduler. And we’ve learned this through lived experience: I’m a former superintendent, and my co-founder, who’s our chief technology officer, was a teacher in Boston Public Schools.

Pretty much everyone on our team has had school-based experience. We know that the status quo has not allowed folks to build schedules that, one, are easy to build and, two, are strategic and connect at the systems level. So it’s about creating a dedicated systemic solution that frankly could have been built sooner. But now, with better technology and a more dedicated approach to solving the problem, I think it’s allowed us to gain some traction.

Michael Horn: It’s super interesting. I’d love to hear some stories about districts and charters: how they’re taking advantage of this, how they’re allocating resources differently, perhaps, to better optimize the use of time and space, and the impact you’re seeing. And numbers: how many schools are you serving, and what are the sorts of stories that show how they can now rethink the use of time, space and resources across the school when they get to play with the master schedule in a way they hadn’t before?

Paymon Rouhanifard: I’ll start by just saying that when I think about the moment we’re in with AI, I connect it to prior moments of innovation and mass adoption of technology. So I’m thinking certainly about post-Covid and the adoption of technology across schools in a significant way, and the personalized learning movement before that. What you see is a lot of different solutions entering the marketplace. And I would argue that most of those solutions, and this is not a critique, are at the individual level. They’re used by classroom teachers, used by students. Rarely do they connect across all schools in a systemic way. Rarely do they connect to the central office in a systemic way. And sometimes, oftentimes I should say, that is the nature of innovation.

You need to have a very dedicated point solution and really figure that out, in the same way that we started. I think what makes scheduling unique is that it’s not just about the painkiller at the school level, helping your AP and your counselor get their summer back and not have to be banging their heads against the wall. The schedule should reflect your fundamental priorities as a school district. When you zoom all the way out, 80 to 85% of your budget is your personnel, and the schedule governs how your personnel are interacting with students. That fundamentally reflects the student and teacher experience, your academics, your budget and your staffing priorities. Before Timely, the schedule was always this black box that was created on a Magna Tile board in one school, in a Google Sheet in another school, in an Excel spreadsheet in your third school, and so on and so forth. And then they’d use the student information system to do the last mile, put it in and call it a day. But never did the central office get the opportunity to connect those dots and to think about our districtwide priorities academically.

What are our staffing and budget priorities, and how can we reflect that in the schedule that, again, governs 80 to 85% of your budget? That’s, I think, what makes Timely really unique in this moment where we have a lot of point solutions serving individuals. In terms of where we are as a company, Michael, we started with a really small pilot serving a handful of schools three school years ago. The following year we served about 80. Last year we closed around 300. Right now we’re up to a little over 400 schools across 17 states. So we’re still a young organization, but we’ve seen a lot of momentum, and we’re really grateful for that.

Michael Horn: But I know you got a couple of great case studies. Maybe just give us a couple examples of how schools have used that to allocate resources very differently or things they were surprised by before they looked into it through your tool and then all of a sudden said, holy cow, how can we change this?

Paymon Rouhanifard: I’ll give you two examples, one district and one charter. We worked with a district in West Texas, Lubbock Independent School District, which has about 25,000 students. Like many other urban and rural school districts, it has seen declining enrollment as its special education population and emerging bilingual population have increased as a percentage of total enrollment. One way to think about that is that the overall budget declined, but the needs of students have increased. Doing more with less is a very common refrain in districts across the country. So what Lubbock did, across 14 middle and high schools, through implementing Timely and building a scheduling process alongside us, is identify 37 vacant teaching positions that they were planning to hire for but realized they didn’t need to. And the reason is that they identified staffing inefficiencies through the master schedule. By the way, I felt this acutely when I was a superintendent, where I’d walk into one of our high schools and walk by a class with six students and another one with 33 students.

A lot of variance, a lot of inefficiencies, because that schedule is so hard to build, and you skip a lot of those steps because they are just so hard and complicated. So what Lubbock did was eliminate those 37 vacant positions, and three things are really important to call out. One, the average class-size target was the same as the year before. Two, they didn’t eliminate any course offerings, so student choice was not impacted. And three, no teachers were impacted, because these were vacancies. So strict inefficiencies led to bottom-line savings, and they took those savings and reinvested them into new academic priorities.

37 positions in West Texas dollars is about $2.2 million. On the East Coast and West Coast, it’d probably be closer to $4 million. So really meaningful savings. The second example is a charter management organization, Noble Schools in Chicago: 17 campuses, the largest charter management organization in the city. They’re solving a different problem. They felt their staffing model was tight enough, and resource allocation was less of a priority for them, but they needed to solve that pain point at the school level.

In particular, they had a big challenge with directors of operations being trained and supported, because there was a lot of burnout. It’s a really hard job. Directors of operations for charters tend to be the equivalent of an assistant principal without academic responsibilities, so they’re in charge of master scheduling and a whole array of other operational tasks. They had a lot of new schedulers, new directors of operations, and this allowed them to mitigate that attrition risk and create a more sustainable role. And I think what was really cool: 11 of the 17 schools had a new director of operations, and those 11 gave us a perfect 10 out of 10 NPS. So making a job easier, creating greater productivity, and certainly still giving Noble the opportunity to think about resource allocation more strategically, although that just wasn’t as much of a priority for them.

Master Schedule as Innovation

Diane Tavenner: I love those examples because they feel very, very familiar to me, and I think anyone who’s been in that role has had these experiences and would recognize what a big deal it is. What you just said: what a gift you’re giving. And I think in this moment in time, when everyone’s kind of enamored with the tech, they forget how hard it is to literally just run schools every day. This massive, complicated operational challenge. And like you said, the master schedule is an expression of your values and what you care about, in so many ways. So I think what you’re describing, and correct me if I’m wrong, Michael, because this is your area, is that you really built a sustaining innovation. I mean, this is an innovation for how we do the most important thing that controls what all these people are going to do for a whole year, all day, every day. And so that’s one framework we talk about a lot.

Another thing, a newer one Michael and I are kind of playing with, is this idea that most of our, well, I would say all of our schools in some way, shape or form fit this original kind of industrial model of schools. And we’ve talked for a long time about how to break out of that industrial model. I think some of us are hopeful that with the advent of AGI, we will be able to invent that post-industrial model, but I don’t think we’ve seen it yet. I’m wondering, how do you, or do you, think about that kind of post-industrial model, Paymon? In that new model, we probably don’t conflate time with credit, and so we’re much more likely in a competency-based progression. Does Timely move in that direction, take us there at all? Like, how do you think about the product and its evolution and where it might take us?

Michael Horn: And Paymon, while you gear up for that, I’ll just geek out for one second, because I think it’s interesting. It’s a sustaining innovation for a school, but you’re clearly disrupting the landscape of how we schedule today. So it’s one of those things, right, where you’re doing both, depending on the paradigm or framework you’re looking at it through, which is fascinating.

Paymon Rouhanifard: Diane, I love the question, and coming from you I’m always a little circumspect, because you study this, and obviously so do you, Michael, so I’m not sure I’m going to have anything new to offer that you haven’t already thought through. But I will say, what gets me really excited about the work that we do is that ultimately we are a tool that can operationalize the hopes and dreams of a district, of a charter management organization, of an independent school. We don’t have a view as to what their delivery model should look like. We don’t have a view into what their strategic plan should be. If they ask us for advice, we’ll certainly give it to them. But we want to operationalize those hopes and dreams, to the extent that they’re innovating, and certainly we have a lot of partners that are pushing the envelope. I will say, and we can come back to this or we can leave it alone, in the moment we’re in, and not just with AI, but just where districts are, with declining enrollment and a lot of fiscal pressure.

I can't say I'm seeing as much innovation as we did pre-COVID.

Michael Horn: That’s interesting.

Paymon Rouhanifard: You know, having said that, we have partners that are trying to rethink the teaching profession and to give teachers a full day of professional development, which is not an easy thing to do in the construct of a traditional school district. And we're a tool that helps operationalize that. We have partners thinking about, oh gosh, first-year teachers: we see so much attrition, and it's really expensive and really disruptive. How can we, in the master schedule, build in a set of professional development supports, a mentor teacher whose prep coincides with the first-year teacher's so they can observe each other, and then common planning time that is very intentional for first-year teachers? These things are really hard to do using sticky notes and Google Sheets. And so we're helping operationalize where that innovation is happening.

And maybe those are more modest examples of innovation compared to, say, competency-based progression and eliminating seat time. But ultimately, Timely is vision-agnostic and strategy-agnostic. And that gets us really excited.

Diane Tavenner: Me too. Because I think that when people build something with a complete point of view, you actually close down innovation. Right? You don't address the problems that exist. You don't let people really imagine what's possible and support them in that.

I can't resist; I've got to go back. Why do you think that is? Why are you not seeing as much innovation? What's happening on the ground? And do you feel like it's shifting at all?

Paymon Rouhanifard: I'm gonna come back to whether I think it's shifting. Across all states, we all know that overall enrollment across all school types has been declining over the last five to seven years. That's a combination of a lot of factors, the declining birth rate being a big one, of course. And so that leads to smaller budgets. And in urban and rural quarters in particular, you see a commensurate increase in the percentage of students with an IEP and the percentage of students who require multilingual support. And so that fundamentally shifts the mindset of district leaders.

Diane Tavenner: Yeah.

Navigating Fiscal Pressures in Education

Paymon Rouhanifard: And it makes it hard to innovate when you're trying to do more with less, when you're at the base of Maslow's hierarchy and just trying to make ends meet in a lot of ways. And so what we see across the country is: how can we address this fiscal pressure while doing the least harm possible? That certainly opens the door for Timely to be of real support, and we're incredibly proud of that. At the same time, when priority number one is avoiding teacher layoffs and making sure we deliver resources to the students who need them most, it's kind of hard to get to the next series of priorities. And I think that's just the moment we're in, until things start to level out.

What is exacerbating this in a lot of states, and you all, I'm sure, know this, frankly probably better than I do, is the expansion of vouchers and ESAs: additional fiscal pressures on top of the macro shifts that are happening. Whether you're in Texas or Louisiana or Florida or Arizona, there are a lot of states passing these. They're innovations in their own right at the state level, but they create some fiscal pressure on districts, and I think that, again, makes innovation hard.

Diane Tavenner: I agree with you, certainly in the existing system, which, yeah, makes me sad. Well…

Paymon Rouhanifard: I’m sorry, I’m sorry I took it there.

Michael Horn: No, let’s switch, let’s switch gears because … 

Diane Tavenner: I don't know about you, but I just spent last week in several schools, actually on the East Coast. We've often talked about this East Coast–West Coast sort of difference, and it's always fun to be on the East Coast and notice the similarities and differences. And I'm feeling a little bit more optimistic than I have for the last five years. It has been rough, rough, rough times, as you know, and it does feel like there's a little bit more energy back in things. But that's totally anecdotal. So what are you optimistic about? What do you see as possible? Where is the hope going forward?

Paymon Rouhanifard: Well, look, in spite of those macro conditions, we are really fortunate to partner with some incredible organizations who are figuring out how to navigate them. And I think both things can be true: it's a tougher environment to innovate, and, what's that old saying, necessity is the mother of innovation? I think we're seeing a lot of interesting work happening across different parts of the country, and we're serving schools coast to coast. And in the moment we're in with AI, we've seen super interesting solutions inside of districts that we partner with. So whether it's folks pushing foundational skills and literacy and building that into the master schedule through block instruction, or organizations like Amira and Ello better serving students whether in school or at home, we're seeing a lot on those fronts. And we're seeing, I would say, districts that are thinking much more long term, which frankly is refreshing. I don't have the data to back this up, but superintendents tend to churn pretty quickly, and I've seen a bit more longevity in those roles. Perhaps that's because the traditional education reform playbook isn't being implemented as frequently.

But I think what that means is that folks are playing the long game and thinking much more intentionally about resource allocation, strategy and academic priorities. So there's a lot to be hopeful for, and we're delighted to be working with a lot of different district and charter partners in spite of these tough conditions.

Mitigating AI Risks

Michael Horn: Continuity and longevity definitely allow you to do things that you wouldn't otherwise do if you're thinking, oh gee, two years and a pile of dust. But let me ask this question. You mentioned a couple of AI tools in there that give you reasons for optimism. I'm curious, sort of the same premise, but around what you're seeing: the conversation is very concerned about AI and how it will have negative impacts. Where do you think that conversation is misplaced, and where do you think it's spot on, where we ought to be treating AI as a danger, if you will, to education?

Paymon Rouhanifard: Well, look, in terms of teacher anxiety: the teachers I've spoken to worry that AI is going to take their jobs and fundamentally change the profession in ways that may not be comfortable. To me, that's misplaced. I see solutions like Course Mojo, which is a dramatic boon to classroom facilitation and can really empower the teacher to better deliver instruction and to better support students' holistic needs. So that's where my head naturally goes: teachers using AI as a copilot, able to deliver instruction more effectively, to differentiate it and to let content delivery happen in a much more seamless way that puts less pressure on the teacher. The flip side, the other part of your question, Michael, is that we need to ensure there's coherence inside of classrooms, across classrooms and across systems. That's always the challenge with education technology, going back to earlier waves of tool adoption: a lot of different point solutions. Point solutions are necessary.

Timely is an example of a point solution that has that systemic connection. But when you're using a lot of disparate point solutions, you have to ensure there's an integration and an intentionality in bringing those solutions together. So I think a lot about core curriculum: do these supplemental tools actually, holistically and intentionally, integrate with core curriculum, for example? I think that's still a real risk we're facing.

Diane Tavenner: Well, I have to ask this, because I really worry about the technical capability of schools and school districts to do the integration of all of these point systems. You pointed out, rightly, Paymon, that the big, giant enterprise system that supposedly does everything does most things terribly for us and doesn't meet our needs. And these thoughtful point solutions are, more and more, developed by educators who really understand the problems much better. But does a school or a district have the skill set and the people to integrate all of those things? How are you finding the folks you're working with and their ability to do that?

Paymon Rouhanifard: I think they're struggling with this. It's rare to find a district that has intentionally and thoughtfully integrated its ERP with its SIS with its HR data and so on. Frankly, what you see is districts constantly switching out those systems and bringing in new providers that might be marginally better but, I would argue, do roughly the same thing as before. So I think it's a real issue. Now, with AI agents, could data integration be much more productive and efficient in the future? I'm hopeful. It's still a little early to say, but the guts of the system, where those data sets come together to inform decision makers and to allow for these systems-level changes, that's still an ongoing challenge. I think it starts with the mindset of really optimizing and solving for coherence, thinking about core curriculum and supplemental solutions in a very intentional manner and, on a parallel track, trying to bring those actual data systems together. I've seen districts do this. It takes playing the long game. And going back to Michael's point, maybe we're not rocking the boat as much as we were with standards-based reform, which is its own thing and comes with trade-offs. But if there's greater longevity for district leaders, this is an example of something they can actually take on, to really bring those systems together and do the work of building them.

Diane Tavenner: Awesome. You should interrupt me, Michael, because I could talk to Paymon all day.

Michael Horn: I was gonna say. Well, no, I feel like we’re just starting to have a bunch of revelations here, but this has been great. Should we switch to our final segment, Diane?

Diane Tavenner: Yeah, we’ll have to talk.

Michael Horn: Have you back on. That’s the answer.

Paymon Rouhanifard: All right. That’d be fun.

Diane Tavenner: Well, as you know, basically every episode, Michael and I try to turn away from work a little bit and share what we're reading, watching and listening to. I'm going to fail miserably at that today, but we'd love to invite you to do the same.

Paymon Rouhanifard: So I'm reading two things. I just started both, and I have to admit, early-stage, kind of founder mode, I'm not making as much time for leisurely reading as I'd like. One book is work-related and probably doesn't even fit the question: "Predictable Revenue," which is about startup mode. I'm at the foundation of the business hierarchy there too. The other book I'm reading is "The Lion Women of Tehran," which is about a friendship between two women, set in Iran, where I was born. It runs from the 1950s into the '80s, when there was a lot of political change happening in Iran, and our family lived through a lot of that. In the '50s, there was a big political tug of war when Iran took control of its oil away from Great Britain, led by a really charismatic prime minister, which led to even greater U.S. involvement, and then the Islamic revolution in '79. So you come to understand people's lives, through this story about a friendship, as a lot of dramatic changes are happening in the country.

Michael Horn: Fun fact, Diane, before you go, the author of that book lives in Lexington, Massachusetts, is that right?

Paymon Rouhanifard: Yeah. Yeah. Wow.

Diane Tavenner: Wow. Amazing. Wow. Incredible.

Michael Horn: Over to you, Diane.

Diane Tavenner: Well, thanks for sharing those, Paymon. I wrote them down; all of your recs are always good. So here's an interesting one. I'll admit I'm not technically reading this book, but it's being read in my house and it's constantly being discussed at family dinner night. It's called "The Scaling Era: An Oral History of AI, 2019–2025," by Dwarkesh Patel with Gavin Leech. For the insiders in the AI world, Dwarkesh has a podcast that they all sort of listen to. And this is a fascinating book; it's beautiful and weird and funky.

It's drawn from the podcast recordings, but reorganized: part AI encyclopedia and notes guide, part story and oral history. It's really interesting. You know me, I don't really read nonfiction cover to cover, so it's spots and conversations. I'm pairing that with the "Last Invention" podcast; I just finished the last episode. I've already promoted it here, but I'll say it again because I was only two episodes in when I first mentioned it. Totally worth it for those who haven't gone in yet, to understand the moment in time we're living in and what's going on. I think it's really well done and valuable, great journalism, and I highly recommend it.

Michael Horn: And Diane, when you're not, you know, working on , we'll have you take our podcast of seven seasons or whatever and create a book out of it as well, with all sorts of crazy excerpts. I also failed on the not-related-to-work front; I guess I alluded to this on an earlier show, so I'm sort of exactly where you are, Diane. I finished the draft manuscript from Julie Young, founder of the Florida Virtual School. It's part memoir and part startup story: the creation of Florida Virtual School, and then her work at ASU Prep as well. It was quite an energizing read. I know she's going to have more edits before the book is actually out, but I'm excited for it to be out, because it'll be a bit of a breath of fresh air, and it'll cause some grappling with some of the central messages and conclusions she draws. I think it'll be really good for the field to go back to the past a little bit and think about a thoughtful use of technology in education, and how it looks a little different from some of our assumptions around that today.

So that's been on my mind. And I will just say, Paymon, this has been a hugely stimulating conversation. I have a couple of pages of notes of things I want to follow up on. So huge thanks for joining us, and huge thanks for the work you're doing at Timely. And for all of you joining us and listening: as always, keep the questions coming, keep the comments coming. Diane and I have been energized by them; they've led us to choose guests directly from your questions and to think a lot about the comments you've made. So huge thanks, as always.

And we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

]]>
At These Universities, Using AI Isn’t Shunned — It’s a Graduation Requirement /article/at-these-universities-using-ai-isnt-shunned-its-a-graduation-requirement/ Tue, 17 Feb 2026 11:30:00 +0000 /?post_type=article&p=1028557 While most colleges and universities are reluctantly grappling with artificial intelligence, a few are not only tolerating it but making it part of their core curricula. In the process, they’re signaling to new students that using and critically evaluating AI will be a large part of their post-college lives.

Indiana’s Purdue University in December approved an AI “working competency,” saying that by the time they earn a diploma, undergraduates must be able to use the latest AI tools effectively in their chosen field while understanding both the technology’s strengths and limitations.

Graduates must also be able to defend decisions informed by AI while sussing out its “presence, influence and consequences” in their work.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


“The root of all of this is really making sure that our students are ready for the workforce and are not left behind by AI,” said Haley Oliver-Jischke, Purdue’s senior vice provost for academic and student success. While acknowledging that college students likely rely on AI for class assignments, she said what’s missing is the ability to go deeper.

“Yes, they know how to use it, but are we instilling a framework and a practice where we’re emphasizing critical thinking?” she said. 

The long-term goal of the effort is to ensure that graduates are “wildly successful in an AI-enabled workplace,” while being able to evaluate AI-generated work and criticize it. 

A microbiologist by training, Oliver-Jischke said AI has already “revolutionized” her field. Recent research suggests that AI-enabled analysis of large genomic data sets, for instance, is allowing scientists to look at DNA directly from environmental samples, revealing previously unknown microbes.

“The technology is here,” said Oliver-Jischke. “You will lose out on opportunities if you don’t understand it or know how to utilize it and apply it effectively.”

Purdue’s faculty and curriculum committees began discussing the new requirement last summer, she said. The university has already identified 35 courses that will lead the way toward fulfilling the requirement. It goes into effect fully for the graduating class of 2030, who are due to arrive on campus in the fall. It won’t require a separate exam or course, but rather it will be embedded into students’ required coursework, she said.

Haley Oliver-Jischke

While it’s unusual, Purdue’s move isn’t unprecedented. 

In January 2025, the State University of New York system updated its information literacy curriculum to include requirements that SUNY students effectively recognize and ethically use AI. While it integrates AI into an existing requirement, it doesn’t create a standalone competency like Purdue’s.

In June, The Ohio State University unveiled its initiative, which will embed AI education “into the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them — no matter their major.”

Both Purdue and Ohio State are public land-grant universities, founded within months of each other in 1869 and 1870, respectively, to meet what was at the time a booming demand for agricultural and technical expertise.

Ohio State’s AI effort will require all graduates, beginning with the class of 2029, to be “fluent” in the technology and how it can be responsibly applied to advance their field. “In the not-so-distant future, every job, in every industry, is going to be impacted in some way by AI,” Walter “Ted” Carter Jr., the university president, said at the time.

Ohio State’s executive vice president and provost told The 74 that as AI continues to influence how we work, teach and learn, “we will remain at the forefront of this technology.”

Is ‘vibe coding’ the future?

The moves come as recent surveys suggest that college students are already making AI a large part of their education, even if they’re mostly outsourcing hard work: The AI and plagiarism detection platform Copyleaks in September found that most college students have used AI for academic purposes, with 53% using it either daily or several times a week.

While most students say they use it for brainstorming, half use AI to draft outlines and 44% to generate actual drafts of work. About one in three students uses AI to summarize readings.

In light of statistics like these, requiring a deeper competence around AI is “a good step in the right direction,” said Alex Kotran, CEO of the AI Education Project. “Closing out 2025, I was feeling like post-secondary is sort of deer-in-the-headlights” when it comes to AI. “This is promising, but the proof will be in the pudding: Are they building the systems for professional development and learning, because that’s going to be critical. The policy is just step one.”

Kotran noted that the vast majority of job postings now specifically name AI skills as a requirement. Colleges that are seen as more effective at helping students get those skills are likely producing “more employable” graduates.

Purdue’s Oliver-Jischke said the focus at the university is on “working competencies” and how they can fit into instruction across departments. “This can be a large boat to turn, but because we have a commitment to AI and this is obviously a massive STEM school, everybody is curious, interested and willing to explore how this should be implemented into the core curricula.”

At the same time, she said, AI is evolving quickly and the landscape could soon be very different. “We recognize that, and we want to remain nimble,” she said. “And we will keep our curricula nimble to do that.”

Alex Kotran

The two schools’ focus on differentiated, workplace-specific use of AI is a smart one, Kotran said. But to be effective, universities should go beyond simply relying on off-the-shelf commercial products. “The future of work is not a bunch of employees using ChatGPT or Gemini day-to-day and being more productive because of that,” he said.

Instead, the real value of AI, at least for now, is in the custom software it enables users to build via what’s known informally as “vibe coding,” or using AI prompts to do the actual behind-the-scenes coding that once took advanced knowledge. “The real unlock comes when you’re building custom software to do stuff more efficiently,” he said.

Since generative AI came to market in 2022, the cost of building apps, websites, games and other software has dropped precipitously, while the task has gotten easier for non-technical users. 

“That’s going to change the way we work,” Kotran said. The more users can develop and control their own software, the more productive they’ll be. “But it’s very hard to get that insight if you haven’t seen vibe coding for yourself.” 

Done right, the efforts at Purdue and Ohio State could be significant, Kotran said. “It just increases the exposure that students are going to get to having the opportunity to build that intuition and to experiment,” he said. “And it will force professors to start building their assessments around it.”

]]>
When It Comes to Developing Policies on AI in K-12, Schools Are Largely On Their Own /article/when-it-comes-to-developing-policies-on-ai-in-k-12-schools-are-largely-on-their-own/ Sat, 14 Feb 2026 17:30:00 +0000 /?post_type=article&p=1028520 This article was originally published in The Conversation.

Generative artificial intelligence technology is rapidly reshaping education in unprecedented ways. Given its potential benefits and risks, K-12 schools are actively trying to adapt teaching and learning.

But as schools seek to navigate the age of generative AI, there’s a challenge: Schools are operating in a policy vacuum. While a number of states have issued guidance on AI in schools, only a couple have adopted formal policies, even as teachers, students and school leaders continue to use generative AI in countless new ways. As a policymaker noted in a survey, “You have policy and what’s actually happening in the classrooms – those are two very different things.”


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


As part of my research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the association’s members reflects how education policy is typically formed through a distributed process, rather than being dictated by a single source.

But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technology’s spread, including student safety, data privacy and negative impacts on student learning.

They also expressed concerns over industry influence, and worries that schools will later be charged by technology providers for large language model-based tools that are currently free. Others report that administrators in their state are very concerned about deepfakes: “What happens when a student deepfakes my voice and sends it out to cancel school or bomb threat?”

At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority.

Local actions dominate

Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are issuing guidance, or are starting to write state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans.

When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. “We are a ‘local control’ state, so some school districts have banned (generative AI),” wrote one respondent. “Our (state) department of education has an AI tool kit, but policies are all local,” wrote another. One shared that their state has a “basic requirement that districts adopt a local policy about AI.”

Like other education policies, generative AI adoption occurs within a system of shared governance, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role.

Yet there is generally a lack of evidence related to how AI will affect learners and teachers, and such evidence takes time to build. That lag adds to the challenges in formulating policies.

States as a lighthouse

However, state policy can provide vital guidance by prioritizing ethics, equity and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and construct guardrails.

As it stands, AI usage and policy development are uneven, depending on how well resourced a school is. Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools are less likely to receive guidance on using AI. The poorest schools are also less likely to use AI tools.

When asked about foundational generative AI policies in education, policymakers focused on privacy, safety and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators.

And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, “What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids’ first teachers.”

Introducing new technology

According to a Feb. 24, 2025, Gallup Poll, teachers are already using AI in a range of ways. Our survey also found there is “shadow use of AI,” as one policymaker put it, where employees implement generative AI without explicit school or district IT or security approval.

Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority.

In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students’ writing. “Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids. This is why I became a teacher: to spend time with the kids.” This teacher also noted the tools showed bias when analyzing the work of her students learning English, which gave her the opportunity to discuss algorithmic bias in these tools.

Another model offers a different approach than finding ways to implement products developed by technology companies: schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.

Core principles

One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI technology in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI: when and how to leverage these tools effectively, how to critically evaluate their output and how to ethically disclose their use.

Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context, an approach my colleagues and I have described, is one that schools, districts and states can take to navigate the myriad ethical and societal impacts of generative AI.

Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district and state to engage their communities and families to co-create a path forward.

As one policymaker put it: “Knowing the horse has already left the barn (and that AI use) is already prevalent among students and faculty … (on) AI-human collaboration vs. outright ban, where on the spectrum do you want to be?”

This article is republished from The Conversation under a Creative Commons license. Read the original article.

]]>
Opinion: Schools Need to Adopt Clear Rules for AI Use. Parents Can Help Make That Happen /article/schools-need-to-adopt-clear-rules-for-ai-use-parents-can-help-make-that-happen/ Tue, 10 Feb 2026 17:30:00 +0000 /?post_type=article&p=1028367 It has been over three years since ChatGPT launched, bringing artificial intelligence to the masses for the first time. Today, AI is reshaping schools, workplaces and entire industries. Yet only a fraction of school districts have district-level AI guidance.

The communication gap is stark. Pew Research Center found that 26% of teenagers ages 13 to 17 used ChatGPT for their schoolwork in 2024, up from 13% in 2023, yet most lacked formal instruction on responsible use. According to another survey, nearly three-quarters of parents report that their children’s schools haven’t shared their AI policies.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


This lack of guidance creates two dangerous extremes: students who fear AI because it’s been branded as cheating, and those who misuse it as a shortcut because they’ve never been taught otherwise. In both cases, young people miss the opportunity to practice the critical thinking, problem-solving and ethical judgment skills regarding AI that education is meant to foster — in other words, to develop AI literacy. 

As a researcher, educator and parent, I have worked to advance AI literacy in colleges and medical schools. But I do not see the same efforts in most K-12 schools. Advocacy is key, and parents can help make this happen.

My son discovered ChatGPT in seventh grade. Three years later, his South Carolina school district still offered no clear guidelines for AI use, so I began a methodical advocacy campaign. I attended a superintendent’s coffee chat, shared AI education books with district leaders and followed up with emails and a virtual meeting. For months, it seemed as if my efforts had fallen on deaf ears. Then, I was invited to join the district’s AI planning team, a diverse group including students, teachers, parents, administrators, and AI education consultants. Our daylong session covered generative AI applications, ethics in education and guideline development. 

Following the meeting, we participated in a survey and observed a school board presentation on AI policy development. And in January, the district Board of Trustees adopted guidelines governing the use of artificial intelligence in classrooms.

This experience taught me that parent voices matter. But effective advocacy requires patience, persistence and a constructive approach. Fortunately, families wanting to get involved have proven models to follow.

In , the state’s official AI Framework for Education emphasizes ethical use, transparency and family engagement, with guidance for schools to communicate clearly with parents about AI tools. In , the school board voted in 2025 to begin developing districtwide guidelines for classroom AI use, including the creation of family-facing resources to promote responsible use at home. 

Resources like offer a strong foundation for AI literacy advocacy. The handbook encourages parents to stay informed about new technologies, ask questions when schools lack clear guidelines, build relationships with staff and participate in school meetings to influence policy. These efforts can open doors to influencing policy and curriculum decisions.

Parents also can advocate for their school district to join initiatives like the which aims to train 400,000 teachers nationwide in AI fluency by 2030. They can push for partnerships with nonprofits like and , which provide free, grade-appropriate AI curricula, teacher training and ethical use frameworks. If the school district is open to collaboration, they can also request a pilot or demo for tools like , a platform that provides access to multiple AI models in one place with a focus on education. Boodlebox offers to help cover the cost of subscription. 

Local AI councils — groups of experts from fields such as law, technology and education who advise local governments on using AI responsibly — provide another avenue for parent involvement. In Montgomery County, Pennsylvania, the brings together experts from the private sector, academia, public service and beyond. In Montgomery County, Maryland, officials formed an to “ensure the successful evaluation, coordination, implementation and adoption of AI solutions” in the county. Parents can encourage their districts to establish similar advisory committees or collaborate with such county-level groups if they already exist in their area.

Through this process, I’ve compiled a comprehensive list of resources that parents can use as conversation starters with their districts — from state frameworks to nonprofit curricula — categorized by audience: administration, teachers and students. I also keep an eye out for grant opportunities for my district. For example, the recently opened applications for the 2026 program, which helps high school educators gain AI knowledge and skills that they can take back to their computer science, science, mathematics and health classrooms.

The stakes couldn’t be higher. Without AI literacy, students will struggle to navigate a world increasingly shaped by artificial intelligence. They’ll lack the ethical framework to use these tools responsibly and will enter college and the workforce at a significant disadvantage compared with peers who received proper guidance. Momentum is building, but districts won’t act without parent demand and involvement. If parents don’t push for AI literacy now, they risk raising a generation fluent in fear or shortcuts rather than the skills that matter and the resilience needed to thrive.

]]>
Opinion: It’s Time to Embrace AI Literacy for Kids /article/its-time-to-embrace-ai-literacy-for-kids/ Sun, 08 Feb 2026 11:30:00 +0000 /?post_type=article&p=1028182 Artificial intelligence has become an incredibly polarizing topic, with one side eager to integrate it into every aspect of life and the other running from it as fast as it can. Is this new technology an existential threat or a transformational opportunity? According to Pew Research Center data from September, “Americans are more concerned than excited” about the proliferation of AI and want to exert more control over its use.

About 62% of U.S. adults report interacting with AI several times a week, and adults and children alike engage on a regular basis with AI without even realizing it. Children are growing up in a world where this technology is unquestionably a part of daily life, shaping their lives in ways no one can yet fully understand. Giving them a clearer understanding of how AI works has never been more important.

This fall, the three of us met at an event at the National Children’s Museum that brought together technology leaders, museum educators, policymakers, teachers and academic researchers focused on guiding our kids safely and productively into our technology-driven world.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


Our key takeaway? Regardless of where you stand on this issue, a common ground must be forged now. Constructive dialogue must happen, and it needs voices from both sides to produce a healthy outcome for our children. Helping kids understand AI means being both optimistic and cautious, recognizing its promise while acknowledging its shortcomings and risks.

What if, alongside helping our youngest learn to use AI, we placed greater emphasis on teaching them how it works? By nurturing children’s critical thinking skills, we give them the power to understand it as a tool—where it can augment human effort, and where it fails miserably.

AI is ushering in a new wave of innovation, but it is also enabling new forms of deception and manipulation. It provides access to a wealth of knowledge and opportunities, but the resulting information overload can undermine learning, cognition, creativity and human connection.

Society as a whole, from educational institutions to policymakers to parents at the dinner table, needs to invest in children’s AI literacy now. In doing so, we can instill some of the most important lessons: how to be creative and discerning in the world in which we live, preparing children for a future full of new opportunities.

According to the World Economic Forum’s Future of Jobs Report, employers expect that 39% of workers’ core skills will change by 2030, with technological skills gaining importance most rapidly. AI will open up new fields of biomedical research. It will help us feed our growing global population. But it will also force many of us to rethink our jobs and educational pathways.

So, on a global scale, an investment in our children’s AI literacy not only ensures a competitive workforce but also safeguards national prosperity, security and the responsible use of powerful technologies. Whether you think AI is exciting or threatening, children must be introduced to age-appropriate concepts about it so that they can build fluency and prepare for the future.

Another takeaway from our conversation? Adults must learn alongside — and sometimes from — our kids. As adults, we have the responsibility of fostering children’s safe use of this powerful tool. But let’s give ourselves the grace to acknowledge that we don’t understand AI either.  We didn’t grow up with it, and experts and technology leaders believe that generative AI has surpassed the understanding of its creators.

There is a window of opportunity to bring everyone to the table. As parents, educators and lifelong learners, we need to have deeper conversations about AI — especially how it shapes children’s learning, development and daily lives. We don’t have to fully comprehend it or agree with all its intended uses; we just have to be open to talking about it and taking action. By approaching this with curiosity, we can thoughtfully consider appropriate uses and guardrails for kids—something we didn’t do early enough when America’s children first began using online tools like social media.

There are organizations starting to address AI literacy and technology education for families. Sesame Street and Google collaborated to release a on the healthy use of digital technology. Common Sense Media, with support from the National Parents Union and EDSAFE AI, has a series of about digital citizenship and AI arranged by grade level and a for parents as well. The website provides research-based articles, podcasts and other resources to help parents navigate age-appropriate technology use. Children’s museums are developing hands-on, screen-free experiences to help demystify the processes underlying AI. There needs to be more of this, supporting children’s understanding of the fundamentals, not just how to use its applications.

AI’s purpose is not to replace human life, but to enhance it. Yet, the current conversation — especially around children’s use of AI — is too passive, treating these complex systems as inevitable rather than intentional creations. Educators, industry leaders and policymakers need to insist on a richer, more engaging dialogue about how it shapes kids’ learning, choices and experiences. 

Whether it’s the weather report from a smart device or personalized help from a chatbot, AI literacy is now essential for young people to navigate civic life. No matter your viewpoint, it is time to embrace AI literacy. The stakes are too high for anything less than universal, active participation in preparing children for the world they’re inheriting and will soon lead.

]]>
The Dangers of AI Toys: Why This Teddy Bear Was Canceled /article/the-dangers-of-ai-toys-why-this-teddy-bear-was-canceled/ Fri, 06 Feb 2026 21:54:45 +0000 /?post_type=article&p=1028333
]]>
Reflections on Whether AI is Actually Changing Schools — and Where /article/reflections-on-whether-ai-is-actually-changing-schools-and-where/ Thu, 05 Feb 2026 17:30:00 +0000 /?post_type=article&p=1028147 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this episode, Michael Horn and Diane Tavenner step away from their interviews to reflect one-on-one at the midpoint of their season on artificial intelligence in education. Diving into its evolving role in the classroom, they ask whether AI is truly transforming the system or simply being layered onto outdated structures. They explore a framework of three school models and discuss the challenges of meaningful innovation amid existing accountability systems and education policies. From these models, Horn and Tavenner analyze how one might expect transformational change to occur in K–12 schooling — through traditional schools incrementally changing and evolving over time or, as they argue, through fundamental migration away from the existing system.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good that you came to Boston, in the freezing cold weather no less, to hang out a little bit with me here and have a conversation.

Diane Tavenner: It’s really fun to be in person. We haven’t done this for a long time and the timing worked out perfectly because we are in the midst of this super interesting season where we’re exploring AI and education. And we’ve had several touch points where I’m like, oh, my gosh, there’s so many things that are coming up for me that I want to talk with you about. And so we get to have a conversation, the two of us, this morning.

Michael Horn: I am looking forward to it. And I’m sure you’re going to say things. I’m going to say, wait a minute, I think I know what you mean, but double click on that. Tell us more. And so I’m excited to go deep on wherever you want to go because the conversations, they’ve both been illuminating, but they brought up more questions for me, as seems to be constantly the case with this topic.

AI Disrupting Education Processes

Diane Tavenner: Indeed. Indeed. Okay, well, let’s dive in. And I had the great pleasure of spending time with you in your class yesterday. Thank you again, so much fun. And one of the topics that came up was this idea that, I think, turned out to be more provocative than I anticipated it to be. It started with something I said: you know, a phrase I read almost constantly right now and hear everywhere is “AI is changing education.”

And I don’t believe that phrase is true or accurate. In fact, I believe AI is not changing education. And so I want to dig into that idea a little bit. You know, I would argue that it’s creating a lot of problems for folks in education who are sort of in the traditional model of schools. But I don’t think it’s changing education yet. What do you think about that?

Michael Horn: I largely agree. So I’ve been thinking about this, but on a different wavelength, because of what I’ve been seeing over X and from the various pundits. There’s a lot of conversation right now about banning cell phones in schools, as you know, and not just cell phones but screens, period, you know, Google Classroom, all the rest, because it creates access to all these other things. Ban-it-all sort of things. And then you see the occasional commentator saying, did anyone ever believe otherwise at this point?

Diane Tavenner: Right.

Michael Horn: And I had this moment because I think I’m seen often as the tech guy in education. But if you read Disrupting Class, what we actually say is that just layering tech over the existing system is not going to do anything.

Diane Tavenner: Right. I think we’re going to get to that idea in a moment.

Michael Horn: So I guess my instinct is, I agree with you. I think we’re layering a lot of AI over existing processes. It’s breaking, frankly, a lot of education. So the one push I might have on you is that it may be creating the impetus to ask some bigger questions. And I’m not going down the road of “just because the world is AI, therefore this should be AI.” But legitimately, you know, we have current assignments that you can now hack through AI. That’s called cheating. And all of a sudden everyone goes into a tailspin.

Well, let’s ask some questions about the assignments and the work itself, is sort of my take from that. So I think it might be an interesting push. But I agree, most of what AI is doing right now is layering over existing processes. Some of them, I suspect, it’s making more efficient. Great, maybe. Some of them, I think, it’s exacerbating problems that already existed. Is that what you have in mind, or…

Diane Tavenner: That is what I have in mind. And you brought up, you know, one of the biggest conversations, which is about cheating. Right now we’re seeing all these distortions and strange behaviors, and blue books returning. And I’m sure the company that makes those is happy about that. But, you know, they might be, they’re…

Michael Horn: Still around, or they had to resuscitate. We should look that up.

Diane Tavenner: Yeah. When I think about what’s happening with this idea, everyone knows that they’re supposed to have an AI policy and strategy now, but most people don’t. And so this is confusing. And I think AI in education right now is very kind of one-offy, like individual people pulling it in, and so it’s not coherent, it’s not a strategy. We see it in, sort of, you know, lesson planning and assignment making, which is related to why are we even teaching what we’re teaching, to your point? And if you can cheat on it, then what are we trying to do? And then it goes down the line to a lot of fear that I think it’s injecting into everything, from these very high-profile cases we’re seeing of suicide that, you know, is potentially induced by the AI, to big, widespread data privacy concerns. All of that to say, I’m hopeful. I believe the technology itself, if deployed, can actually change education. But I think humans are going to have to do that redesign and that deployment in a really strategic, thoughtful way for it to change.

Otherwise, I just think it’s plaguing us with problems.

Michael Horn: Yeah, I think that’s right. And systems, structures, models and processes matter, and, you know, they’re sort of automating or playing off the existing ones. We may have a small disagreement on one thing; I’m curious about this. Like, we don’t have many disagreements, so I’m gonna lean in if we do. So, the blue book comment aside, I can imagine that there are things we want to do in the classroom that have no AI at all involved with them, because some foundational knowledge or skill that a student can hack using AI outside the classroom is something that they actually should still work on in an analog way, to create automaticity on that.

Diane Tavenner: OK.

Michael Horn: I don’t know if that’s Blue Books or what form factor. I’ll take the point there, but I guess that’s. I suspect if we break things down, there are still some foundational things we would want students to have to wrestle with that might not involve AI and be offline, if that makes sense. And then my take would be, okay, but don’t stop there. Now what are we going to use AI to create as opposed to consume with AI?

Diane Tavenner: I think that’s right. I really loved the conversation we just had with Laurence, where he brought up some really interesting examples, to your point, of, you know, young people literally working together and in dialogue, and then he talked about how AI could be supportive and enhance that. But to your point, the actual skill of having that conversation with another human, and what you’re talking about, is not about AI. So completely agree with that. My concern is when people are taking, you know, very old assignments and…

Michael Horn: And just dusting them off without any thought. Yeah. And I also think this changes the older you go. I could be wrong about this, and this is, I’m sure, overly simplistic, but I think for a younger student (and, you know, I’ve got kiddos still in elementary school, so I’m still thinking a lot about that) that part of the landscape looks different from the older student in high school and college. It’s more problematic when you’re just dusting off that assignment, perhaps, for that older student.

Diane Tavenner: Right.

Michael Horn: But I do think, you know, developing number sense and automaticity with those things offline, before you introduce the calculator and AI and so forth, makes a heck of a lot of sense for a younger student. And so, as always with these conversations in education, I think we sort of make a statement and think it applies everywhere, and there is nuance there.

Clarifying AI’s Role in Education

Diane Tavenner: That’s exactly where I’d like to go next, because I think the dialogue around AI and education is complicated right now. And I hear a lot of people talking past each other and over each other, because I think we’re using these very broad, sweeping, general terms. So, for example, “AI and education.” I was with a really great group of people a couple weeks ago, and fortunately some really, you know, smart people noticed this talking past and talking over and called it out. And literally we went around the room and asked, what do you mean by AI in education? And just within seconds we surfaced: oh, well, you know, using LLMs like GPT and Claude and Gemini for instructional or operational support; using AI-powered education apps like Khanmigo, Class Mojo, Magic School; AI policy development; you know, AI literacy lessons for students. And people are literally using the phrases “AI strategy,” “AI in education” and “AI” to mean all those things and more. And I’m finding that it’s very complicated to try to have meaningful dialogue when there isn’t a definition right now, when we don’t have specificity yet.

I mean, I think some people don’t even know what AI is.

Michael Horn: Yeah, you’re probably right.

Diane Tavenner: Yeah, yeah.

Michael Horn: And it’s probably extremely fearful in those quarters. And the social media analogy is rampant right now as a result, probably because we’re not defining or breaking things down. I mean, do you really not want AI to help an administrator better communicate or schedule? Like, really, that seems crazy, for example, on that end of it.

Diane Tavenner: And my sense is that what jumps to most people’s mind when they think about AI in education (we’ve sort of railed against this from the beginning) is literally how a student is engaging with it, either in the classroom or at home. And most people have in their mind some version of some chatbot, generally speaking, which is incredibly narrow and limited, I think. And you just gave a good example: we could literally never bring it directly into the classroom with students, and there would still be a million different uses for it in just running something as complicated as a school and a school system. And so, yeah, I guess this is just my plea for us collectively to start developing a more specific vocabulary, more intentionality about what we mean. Let’s stop saying we’re doing AI.

Oh my gosh, everyone’s doing AI. What does that mean? And being really specific about it. And I think for me, I just want to flag as we go through the rest of this season because we’re going to have some really interesting conversations next. I’m going to push us to be really specific about what people are literally doing with AI. What does that mean?

Michael Horn: Yeah, and the conversation with Laurence, I think, opened us up to that, because it started to talk about very specific use cases. It occurs to me this problem has always existed in education since I’ve been in the field. Right? We talk past each other. I remember, you know, there are project-based learning adherents to, like, an extreme degree. And they’ll say everything ought to be learned through projects. And then you say, well, okay, the kid learning to read, though, in first grade? They’re like, oh no, no, no, that kid should get phonics and direct instruction and blah, blah, blah. And you’re like, okay, so there’s nuance, but we have to break apart novice versus expert.

What’s the topic? What’s the goal? Right. And so skill versus knowledge, as you know, that gets conflated all the time. And we don’t have precision. So I think it’s a good plea you’re making, which is just: let’s be more specific. What’s the objective? What’s the learner coming in with? If that’s the level at which we’re talking.

Diane Tavenner: OK, all right.

Michael Horn: Where are we going next?

Diane Tavenner: To one of my favorite topics, which is school models.

Michael Horn: Okay. Yep.

Diane Tavenner: So I’ve been reflecting on a number of conversations. I’ve been having a bunch of stuff. I’ve been reading dialogue that I know that’s happening. There’s a variety of people trying to think about the future and what it looks like with AI. And there’s. I think none of these are set yet. They’re all kind of rough, but they’re starting to fall into this pattern of where people are talking about three different models, if you will, of schools. And I want to come back to what is a model in a moment.

But this idea that there’s… I’m going to call it… I think generally people agree that we have an industrial model school at this point, and we have had for quite a long time. We’ve talked about this ad nauseam. So let’s call model one sort of the current industrial model. And with the emergence of AI, model one sort of stays the industrial model, but, you know, AI gets used in some of the ways we just talked about. Like, you keep all your existing structures of grade levels and schedules and teaching roles, but you have AI-enabled tools where you’re using them to grade student work, or you’re using them to lesson plan and, you know, instructionally plan. You’re doing some adaptive practice and feedback.

You know, I think the stuff that people probably are more familiar with because they see it. So, that’s kind of Model 1 still in the industrial world. I’m going to jump to model three before I talk about two, because two confuses me a little bit. So model three, let’s call that native AI education. I think most people I know would argue that this has not been invented yet. It doesn’t exist yet as a model.

Michael Horn: Do we know what it means?

Diane Tavenner: I think that the way people have started to describe it I’m not sure that I agree with. And so here’s where I am on this one, which is I don’t think we know what it looks like yet. I think we’re failing in our imagination right now of what’s possible. I think it’s a moment to go into the proverbial garage and do some real designing. Yeah, but let’s call that the post industrial model. I don’t like to call it the AI model because of the definitional problems we just said, but let’s just call it whatever the next school model, like the full model would be.

Michael Horn: OK.

Diane Tavenner: So then there’s model two, and this one gets kind of squeezed in the middle. I think some people are calling it AI-integrated education. Okay. And basically the emerging definition I’ve heard is that it’s where you sort of modify selected structures where the benefits justify the disruption. So, for example, you know, you have much more interdisciplinary curriculum, you have competency-based progression in certain places, you have flexibility in existing schedules, in blocks, or things like that. You might start seeing some of the time out of the building. But you’re still sort of, I would argue, existing in the industrial model kind of box, if you will. But you’re using an integrated AI approach to kind of hack some of those things.

OK, yeah, so let me pause there before I start asking my questions, and see if, like, those resonate, if you’ve heard about them, you know.

Michael Horn: Yeah, no, I haven’t thought about it this way, so I’m noodling as you’re saying it; this is real time. I guess I’m curious about models like a Montessori, like a classical education or the new versions of classical education we’re seeing in microschools. Or, you know, I don’t think Waldorf fits into your typology, but, like, where would you slot those? Those are models too.

Diane Tavenner: They are.

Michael Horn: How do they slot into the schematic?

Diane Tavenner: Yeah. Well, let’s just take Montessori as an example. Right. So in some ways it’s still industrial. Most Montessori schools still exist Monday through Friday, kind of between 8 and 3 ish. They still have a teacher, you know, one to a kind-of-many class. They’ve sort of released or relaxed age grade bands, although I think society kind of imposes them on them. So, you know, there’s some sort of gravitational…

Michael Horn: I mean, you know my frustrations.

Diane Tavenner: I do know your frustrations. So I still think Montessori, maybe Montessori would be kind of a two.

Competency-Based Learning

Michael Horn: That’s what I was wondering. It’s not AI enabled, but it uses the technology of the 1910s, or whatever it was, to have broken out of certain structures. And so it’s a very competency-based math sequence, very competency-based on the learning-to-read part of it, and probably less so on everything else, is your point. And there’s still some sort of, you were born in the year of the Scorpion, and therefore you’re going to learn this on this date with everyone else, element to it, I think is what you’re saying.

Diane Tavenner: I think that’s right. And one of the reasons I wanted to talk to you about this kind of framing is I’ve been trying to think about what sits in the model two category. Okay. I mean, it feels very easy for me to identify, you know, almost every school as a model one, and many of them are starting to bring in these AI tools, if you will.

Michael Horn: Yeah.

Diane Tavenner: But they’re still clearly industrial models. It’s pretty easy for me to say I don’t think we’ve seen a model 3 yet with the infusion of AI. And then I think about like for example, what we did at Summit and Summit learning.

Michael Horn: Yeah.

Diane Tavenner: I think at the high school level that might be a model 2 without AI yet.

Michael Horn: Right.

Diane Tavenner: Where again we were sort of pushing the boundaries of that industrial framework of a model to try to, you know, reimagine or re-engineer portions or parts of what was happening with expeditions, for example, what kind of breaks the traditional five period, six period day, but all but doesn’t really break the calendar, if you will, or the, you know, eight to three kind of situation. So what do you think about that?

Michael Horn: That’s interesting. So I know we could probably geek out all day and create a taxonomy, so I won’t do that to our listeners. But I am thinking you’ve seen almost different shots on goal. So I think of Florida Virtual School as an example. And I’m reading Julie Young’s draft memoir right now (I’m not sure I’m supposed to say this), and it breaks certain elements of that, but it’s still course based.

Diane Tavenner: Right, right. There you go.

Michael Horn: So those two things are interesting. And then I start to wonder. Everyone’s talking about Alpha Schools. We’re gonna have an episode on it, so stay tuned; maybe we don’t get into it here. But things like that, where do they slot into your framework? Or I think about Acton Academy, which probably falls into two is my guess. And so this is, I guess, what I’m trying to start to sort through as you frame this.

Diane Tavenner: It’s why I wanted to bring it up today because we are about to shift to start talking with people who are either trying to redesign whole models or portions of it. And I think it will be helpful for us, for me for sure, to have this kind of framing in my mind.

Michael Horn: So you can say, to pull it back, we’re talking with an entrepreneur: Okay, you’re working in a number one context. You’re working in two, or three, maybe the frontier there.

Diane Tavenner: Exactly.

Michael Horn: OK.

AI Tools

Diane Tavenner: And I think there’s a couple of reasons why this is important. The first is back to that talking past and over each other. One of the things I noticed is there are a lot of people who are gravitating to sort of the AI-enabled tools that will definitely improve, you know, model one, the industrial model, if you will. And they’re very passionate about that. They have really strong arguments, like, there are kids in schools today who need things to be better, and so we should be, you know, deploying these tools as best we can to do that. Then there’s a whole other group of people, smaller, who are, like, obsessing about designing model three, a post-industrial model. I don’t think anyone who’s been listening will be confused about where my kind of passions and interests lie.

So I’m definitely, you know, my attention goes to this question, and my energy is in that direction. And I really caught myself, because I can be dismissive of that first group, and I think that is really problematic for me to do. Well, here’s my question.

Michael Horn: Yeah.

Diane Tavenner: Do you think if those models are true in the way we’ve sort of laid them out, is the theory of action or change that you progress from 1 to 2 to 3? Because some people believe that.

Michael Horn: I strongly don’t think so.

Diane Tavenner: I don’t either. Okay, good. Say more because you’re the expert.

Michael Horn: Yeah, no, well, so. So my energy is also in three, as you know. And no one listening will be confused about that. But I think it is prudent from a systems perspective, like thinking about the country, that 80% of the dollars and energy are going into number one. I think that, from a sound strategy perspective, makes a ton of sense. Right. It’s where most of the students are.

It’s like classic sustaining innovation. If I’m running a company and I see the new thing coming that I think is going to upset the apple cart, I don’t push stop on what we’re doing today.

Diane Tavenner: Right.

Michael Horn: I start to test and learn what we talked about on the fringes. And then like, I start to move things out there. Okay. So that’s where I go to the statement that I don’t see any cases where number one morphs into number three or we learn stuff from number three. And I had a guest in the class say, how do we pull it back into number one? I’ve never seen that work. You’ve never seen that number three replaces number one

Diane Tavenner: So then it has to be effectively designed from scratch, grown from scratch. It’s not, you know, evolving. No. Okay. Well, some people think it’s gonna.

Michael Horn: No, I know. And I just, I. And I think it’s totally rational to be putting bets and have a portfolio strategy that are in all three buckets. And I think you can learn lessons between them. Absolutely right. I mean, we know a lot about cognitive science from number one. We also don’t know a lot, I think, because. Take growth mindset, for example.

Right. My read of the literature is incredibly powerful. And if anything in the environment undermines the message of growth mindset, it pulls the kid back into the fixed mindset view and undermines all of that intervention. And basically every structure in number one does that.

Diane Tavenner: Right.

Michael Horn: So we can have our lesson on growth mindset. I don’t think that’s the best way to do it. But like we can have our lesson on growth mindset. We might see a temporary bump on some sort of assessment and then like immediately you get the C grade in the class and you’ve been labeled because you can’t take the feedback and do anything with it. You’re not even reading the feedback and you no longer think that.

Diane Tavenner: Yeah, well, and this is the point of growth mindset not being permanent. It’s not. You don’t either have one or you don’t.

Michael Horn: Right.

Diane Tavenner: It’s a continuous state that you’re in and you can fluctuate from in and out of that state regularly. Okay, so. Well, that’s an interesting conversation to have with folks who believe that the theory of change is that progression versus what we just.

Michael Horn: And I guess stay with it one more second because I remember when we came out with Disrupting Class, a lot of people would push us and say, well, we’re talking about systems change. What are you talking about? And I think we were talking about systems change too. But my theory of system change is system replacement.

Diane Tavenner: Well, there you go.

Michael Horn: And I think it’s really hard in the US for all the reasons we know. And one of the reasons I’m in some ways more optimistic than I have been is I actually see a path for that change, that replace or disruption of systems that I haven’t seen because.

Diane Tavenner: The technology is so.

Michael Horn: Well, and the ESA policies.

Diane Tavenner: Oh, and ESAs.

Customized Education Choices Rising

Michael Horn: Right. And so we see a level of entrepreneurship and choice. And I would argue now a family increasingly, if you’re in Arizona, Florida, Arkansas, wherever, it’s not just the free public school or I pay money. It’s like, oh, if I just default to the free public school, I’m actually forgoing $8,000 to $13,000 that I could be spending on my kid’s education in a way that’s customized for what they need and what they have shown interest in, et cetera, et cetera. That’s a very different decision set now, where all of a sudden it’s actually expensive to default to the free option.

Diane Tavenner: Well, and to your point, it might take a little bit of time, but it really changes people’s, you know, mindsets around everything.

Michael Horn: And I was shocked. I have to look deeper into this. But Ron Matus at Step Up For Students in Florida sent me this report they did. He said the number of learners in Florida who are now doing a la carte learning, so they don’t have a primary school five days a week, is a billion dollar market. And I was like, I have to sit with that.

Right. Still, I haven’t fully digested it, because that seems like a lot. But basically, if that’s true, over the course of a decade or so, whatever the choice landscape in Florida has been, people went from, okay, I have education savings accounts, I choose a school.

Diane Tavenner: Right.

Michael Horn: To your point, with technology and a lot of entrepreneurship and a change in the landscape, to all of a sudden saying I can unbundle and do a whole set of things with this, that’s a, that’s faster than I would have expected.

Diane Tavenner: That is faster. Oh, I’d be so curious.

Michael Horn: I want to dig in all sorts of things now.

Diane Tavenner: Let’s do that at some point. Well, and what it suggests is that individual families are essentially crafting their own personal model. Now is it AI native?

Michael Horn: Probably not.

Diane Tavenner: Probably not yet. But I bet they’re starting to use some of, you know, the AI enabled tools as part of that. Yeah.

Michael Horn: And they’re probably making also some of these trade offs in terms of like when is it analog because they control the home environment. When is AI a tool to create something? They’re probably making a bunch of these nuanced choices on the ground that like you couldn’t dictate from a central planning curriculum standards perspective.

Diane Tavenner: Right. Although that might be a feature of whatever the new Model 3 is. I mean, my hope is that it is that it is personalized to that degree within the context.

Michael Horn: Yeah, great point.

Diane Tavenner: Yeah.

Michael Horn: And so now we’ve just blown both of our minds.

Diane Tavenner: I want to go back to Model 2 for a minute because I had this really fascinating conversation with your, you know, former colleague and collaborator Julia Freeland Fisher. And she said, huh, I wonder if this model two is akin to what happened when the steam powered ship was sort of invented and there was this period of time where the new steam powered ships had to be outfitted with sails because the new technology was so unreliable. And she suggested that maybe model two was that. And what the interesting point she made is she said those were the most expensive models because you had to have both technologies on them. And this hybrid version is really expensive. So I, what do you think of that?

Michael Horn: 100%. I agree. I, I hadn’t framed it immediately into that typology, but that’s almost every industry, when you see disruption, you see the old players take the new technology, right. Like there’s sort of a line, oh, they ignore the new technology. Not true. They layer it on the existing structure. Right. And the sailing ships are the perfect example.

I think the first steamships to navigate in the US were like 1803, and then 1819 was the first transatlantic ship, the SS Savannah. And they had sails and they had steam bolted on. And, I’m going to get the numbers wrong, but I think only like 80 hours out of the 600 or whatever it took to cross were powered by steam. Basically every time the wind went the wrong way, they fired it up and kept going. Right. And so it’s a classic sustaining innovation on the old paradigm.

Diane Tavenner: OK. But it’s still. Those models do not get us to model 3.

Michael Horn: They don’t. Yeah. It’s, you know, the story is that it was a 100 year disruption.

Diane Tavenner: Yeah.

Michael Horn: Where still ultimately the steamship native companies, shipbuilders ultimately upended the sail ship. And it was around 1900 I think.

Diane Tavenner: And it’s a different model ship.

Michael Horn: It’s a completely different model. Right. You don’t have the same components. You can do things differently in terms of construction because you’re not outfitting around an aerodynamic sail. Right. Like a totally different set of things you can do. So.

Diane Tavenner: OK, I have a question. Now, you said you felt comfortable with the field sort of spending 80% of its resources on Model 1 improvements, leveraging AI. Is there a risk that we over invest in Model 1 and undermine the emergence of Model 3, because we kind of keep this old industrial model going, breathe new life into it, and there isn’t a sense of urgency around creating Model 3?

Michael Horn: Two thoughts. Clay used to always say this. The best experts in a field, like you’re a very strange anomaly. The best, deepest experts in a field are almost always consumed with the toughest problems in, we’re going to call it Model 1 at the edge of the existing paradigm.

Diane Tavenner: Interesting.

Innovation Beyond Traditional Expertise

Michael Horn: And it’s these people who are almost less expert in some way or for some reason have taken their expertise and brought it out that invent the future. But like it’s very hard to persuade the people who are dealing with the hardest, most intractable problems in the first paradigm to be persuaded to design out there. It’s why I think like, you know, when you and I met for the first time and you actually liked Disrupting Class, that was like a bit of a revelation because like we couldn’t get all these people to sort of like actually engage with it. Right. And so. Or, or they thought they were engaging with it but missing the point. Right. And so I don’t know where that goes.

Except, like, in some ways, I’m not surprised that that’s the current moment we’re in. I think the danger is if those individuals then block off our avenues to pursue three, I’m okay with them being consumed with one. I think it’s great. There are a lot of underserved kids there that need better education. And I think if they use that as a justification to block off three, through policy change, through blocking entrepreneurship, through blocking families making these choices, that would be deeply concerning.

Diane Tavenner: So glad we’re having this conversation. There’s two places where I have fear about that and.

Michael Horn: Well, you’ve lived it.

Diane Tavenner: I did, yes. Continue to, it’s my life. And there’s two places that I just want to raise here. And at the risk of how, you know, these are sort of controversial and they’re very nuanced. I often am misunderstood, so I don’t talk about them out loud very often.

Michael Horn: But thanks for doing it here.

Diane Tavenner: Here we go. So the first is the big assessment and accountability system. And you know that my belief is that that structure, which is well intended and people are deeply passionate and invested in making sure that we have real data and know what’s going on. I just spent time with a parent advocate who’s like, those tests are the only receipts we have of what’s happening with our kids. Right.

Michael Horn: There’s a great article recently around how people are just shocked because the tests have gone away and they’ve been relying on grades, which are even more worthless measures. Yeah.

Diane Tavenner: Right. And so there’s a lot of energy going to: How do we bring those back? How do we reestablish them? And my belief, and my lived experience — and most people who believe in them don’t like hearing this — is that the existence of that accountability structure truly, deeply dampened innovation and the move toward what would now be Model 3. And I’m super disinterested in hearing about waivers and all these things. No, it really has an impact.

Michael Horn: Let’s get into how, because I’ve moved toward you a lot on this one. But in one standpoint, it’s like, well, it’s just focused on outcomes, frees up the inputs. You get there however you want. Like, how does it actually restrict the innovation? And is that a. And why is that a bad thing?

Diane Tavenner: Yeah, I think that it’s. Well, let me share a quote that I hear very often.

Michael Horn: OK.

Diane Tavenner: Which is, look, I’m not opposed to measuring different things, but we don’t have those measurements yet. And so until we do, give me reading and math. And, you know, I’m going to judge schools on reading and math, which is effectively what we test in this country. And first of all, I think the problem is we actually do have those other assessments, and they are crowded out. They aren’t accepted as, you know, mainstream, valid, reliable. No one is moving towards adopting them because it’s all about reading and math. And so I think it is really, you know: you measure what you value, and you value what you measure. And there isn’t.

The system is not saying, no, it’s completely unacceptable that we’re literally measuring our entire system on these two. Are they important? Yeah, very important. Please do not misinterpret me. People always accuse you: you don’t want kids to read.

Michael Horn: Well, by the way. But I’m curious what you think of this. This is a classic case where I think defining the age span is important, because I am strongly in favor of not losing the measures to families. Note how I said it, by the way: measures to families on, can your kid learn how to read, get those skills through, hopefully, third grade. But, you know, I’m actually willing to live with some variance in the age.

Michael Horn: All the reading tests after that are really knowledge tests.

Diane Tavenner: Correct.

Michael Horn: And so I would be much more comfortable, frankly, with every school, or every student, picking: hey, you just did a deep dive on X. Go show your competency in X. It’d be super jagged, students showing all sorts of deep dives on a variety of things and so forth. I think that’d be way more interesting. Math, I think, is a little different.

Diane Tavenner: Yes.

Michael Horn: And I don’t know where it stops. Probably around algebra, but. Yeah.

Diane Tavenner: Well, you just said a key point that really bothers me the most, which is that the accountability and testing framework we’ve had in this country is not about informing parents. And it’s not actionable data. It’s not timely data. It’s not what we would call real feedback: honest, actionable, timely data.

Michael Horn: No. And in fact, it’s negative reinforcement cycles.

Diane Tavenner: Exactly. And so let’s just take reading as an example. The oldest assessment technology is a running record. I mean, schools could literally choose to assess every single kid that way and put resources towards that. It might not even be that many more minutes than they already spend on state tests.

Michael Horn: By the way, AI can really do that now.

Diane Tavenner: Well, and I’m not even getting into…

Michael Horn: What technology can do.

Diane Tavenner: So why, why these old assessments. Right. And so anyway, I’m deeply concerned that there’s so much good intent there and so much potential.

Michael Horn: But you’re arguing that it’s crowding out a ton of these other measures that either are there or could be developed more robustly.

Diane Tavenner: Right. And in the same way that I can be sort of dismissive of efforts around Model 1, I think a lot of folks focused on the kids in school today and now are very hand wavy and very dismissive of the impact this has on the potential for innovation. So I’m, you know,

Michael Horn: Super interesting. Yeah. Okay.

Diane Tavenner: The second one is

Michael Horn: You’re taking a breath, you’re giving me a look, for those that can’t see. We’re not on video this time.

Diane Tavenner: No, we’re not.

Michael Horn: Yeah, go ahead. Where are you going?

Diane Tavenner: Special education.

Michael Horn: Oh, okay.

Diane Tavenner: And I want to say up front, my belief is, are we, by the.

Michael Horn: Are we at the 50th anniversary of special ed, of IDEA, at the federal level?

Diane Tavenner: We might be.

Michael Horn: I think we are, yeah.

Reimagining Education for Every Child

Diane Tavenner: Okay. Yeah. The intention is right. So many amazing people working on behalf of kids here. And most people who’ve spent as much time in schools with families as I have know it’s a system that is about compliance more than it is about children. I don’t believe it gets young people what they need. And I think that has a really challenging impact on our ability to educate all of our children. And this is, in my view, one of the biggest promises of a post industrial model: that truly every child gets a personalized education.

Michael Horn: Because everyone’s now getting an ILP, in a sense. Exactly right.

Diane Tavenner: Exactly, exactly. And my worry is that in both the assessment case and special education, that new models, model threes, will be judged and held accountable to the current accountability systems and the law, which completely compromises their ability to design completely new and better approaches.

Michael Horn: Yeah. And my colleague, or I guess former colleague at the Christensen Institute, Tom Arnett, has written a lot about this one: about how when you apply the standards that were for the old system to the new one, you hamstring and often stunt it completely. I think that’s very fair. My pushback historically has been: yeah, but the existing system is all input driven, and then it has outcomes layered over. If we strip out the inputs — which, by the way, people are trying to put back on for the attempts at Model 3 right now as well. Right. Like accreditation, really.

Michael Horn: I think you’re pointing out that even though these output measures — I don’t even think they’re outcome measures, but output measures — have been layered on, I do see where they could pull Model 3 back in some unfortunate ways for design. And I think that, to me, is where the fears really are. It’s less the effort question in dollars and more: are we hamstringing it to just look like a slightly modified version of the existing thing we already have?

Diane Tavenner: Right. I’ve certainly learned from you the most, you know, how disruption happens is that people take it outside of the existing system. They have different expectations. You know, they look at it fundamentally differently. And so maybe this is the importance of ESAs. And I mean, as a person deeply invested in public schools in America, I would be very sad if we’re going to push all the innovation out into the private sector because we can’t welcome it into the public sector.

Michael Horn: Yeah.

Diane Tavenner: And maybe that’s what we’re gonna see.

Michael Horn: Yeah. I’ve always felt like the public officials ought to be responsible not for the institutions, but for the constituents. Right. And so the models may change. And by the way, look, in Florida, you have districts now launching their own microschools and creating certain services a la carte. And like, like they’re spinning off autonomously. Let’s see where it goes.

Michael Horn: Right. I mean, I don’t think we know the final thing yet. And the conversation I was having with one of my students yesterday as well was, you know, no one’s cracked yet, I think, in these. So they’re not really model three attempts because they’re not AI native. But let’s just call like this sort of emerging ecosystem. We haven’t seen a lot of high school models.

Diane Tavenner: Nope.

Michael Horn: And I think part of it is because disruption starts as primitive, able to solve simple problems, not the most complex. Identity formation becomes much more important in high school. Right. And all these rituals that we may roll our eyes at, around Friday Night Lights or prom or whatever else, they’re part of this identity formation and asking, who am I in relation to others? And in these small models, you know, I think Tyler Thigpen at the Forest School, an Acton Academy, has done a good job of creating rituals, but most high school attempts have not yet built that. And so I kind of wonder, is the move upmarket, if you will, solving for all of those things with very different traditions that don’t look like Friday Night Lights, but are actually more meaningful for the current time around identity formation?

Diane Tavenner: Totally. Well, and now you’re getting at the heart of what I’m trying to contribute to with Futre, which is how do we support some of that positive identity formation and search for who I am and the life I want to lead, both in the digital world and then connect that to real world experience.

Michael Horn: Well, I think it’s interesting though, that your market is the traditional industrial Model one, largely. And so I’m, I mean, I’m curious how you think about that.

Diane Tavenner: I’m living in a bipolar world. Yeah,

Michael Horn: … yeah, yeah. Okay, okay, okay, okay. Well, I. You’ve built it with a modular interface, as I understand. Right. So it can exist in both, I think is part of your answer. And I, I imagine you’d say a native model 3 would actually answer a lot of the future questions as part of the design of the model itself.

Building Towards Model 3 Framework

Diane Tavenner: I think so. And I do think, you know, yes, I hope that what we’re building can live in both worlds and is one of, you know, the early ideas or components of what a Model 3 will look like. And I certainly will be engaging with folks on pushing that area, so hopefully we’ll talk more about that. I think where this is all leading for me is the next part of our season. So we’re gonna talk to a bunch of different people and I’m gonna be really. I’m gonna be in the back of my mind thinking, all right, well, where do you sit in this imperfect framework, this developing frame? But, but sort of, where is your effort sitting in that? Are you literally a whole school model? Are you an element to a model? Are you, you know, an AI enabled tool? Are you really trying to push the boundaries of designing for Model 3? Are you an interesting model two? And what do those look like? So.

Michael Horn: Yeah, well, and that’ll be interesting, because I think as I look at the guests ahead, we have a lot of folks in Model 1 who are working with that system. And I’ve been wondering, given the hypothesis about AI that we have fleshed out over the last couple of seasons, how that fits with the things that we’re interested in. And this is good. I think we’ve given a good framework on the importance, frankly, of all three of those elements and the work that they need to be doing, and the dangers of crossing assumptions over, perhaps, from the worlds of the different models.

Diane Tavenner: Perhaps. Awesome.

Michael Horn: This got interesting. A little spicy.

Diane Tavenner: A little bit spicy. Well, super useful for me and helpful for me to think about things. Any last things on your mind?

Michael Horn: I have one last thing. Hopefully we won’t get cut out of the studio, which is, I thought a lot about what is the world into which people are going and how does that map back to what is still core and what is not core and so forth. And I just want to float an idea by you and have you attack it.

Diane Tavenner: Great.

Michael Horn: The reflection I’ve had is that we know there’s a considerable amount of cognitive science that suggests we learn best through story, through narrative arcs, and we don’t actually deliver most learning or offer learning opportunities in that way. And so I guess I’ve been wondering as we think through, you know, we had the back and forth of, do they need to memorize state capitals? And we both said, probably not. But I do think they should know that there’s such a thing as a state capital. And so my thought is almost like how Montessori has, I’m gonna mess it up, the Great Lessons or something like that. Right. And it’s a narrative arc. But I can almost imagine narrative interactive arcs where you’re like, okay, how did the country’s governance evolve over time? And these thin layers would build a lot of common reservoir of knowledge. And I think I’m largely talking K-5, maybe K-8, that that could be a big part.

And like in, in the various disciplines, if you will. Right. Civics, a variety of deep dives in history, et cetera, et cetera, science. I think it should be active. I think it should be multimodal. It’s not clear to me. It’s the teacher delivering the story.

Diane Tavenner: Say what you mean by multimodal, because a lot of people are using that term and I don’t think many people know what it means.

Michael Horn: Yeah, yeah. So I guess I see it as being like, you can imagine some of these lessons being video based through an AI. You can imagine auditory sound. Right. You can imagine interactive, where you’re actually answering questions both verbally and in writing as you’re working through something. You can imagine, like, the state capitals one: so you have a lesson around how state capitals evolved in state government.

Diane Tavenner: I mean, it could be VR, like literally immersive.

Michael Horn: Right, exactly. And then you could almost imagine that you pop out and, like, my kids still draw maps. I actually think that’s really valuable. But I don’t think that they then have to drill memorizing every feature. But they don’t know what question to ask Gemini or ChatGPT without, like, sort of that thin knowledge base. Right. And that’s sort of where I’m wondering if we evolve to something like that, that recognizes the importance of some knowledge.

Diane Tavenner: Yes.

Michael Horn: We could have mastery assessments where we thought it was really important.

Diane Tavenner: Yes.

Michael Horn: We don’t have to have it for everything, frankly, it’s just exposure is probably good enough, especially if it’s interactive. I don’t know. What do you think of that idea? What are the flaws? And sorry. And then creating the space then for like, hey, you’re interested in this? Okay, here’s your project. Go deep, right? Like, and that’s where the deep explorations of learning how to learn and developing the skills would really be.

Diane Tavenner: This feels very fun to me to think about this. And these are the types of thoughts I’m constantly playing with and that I think should influence the design of Model 3. I love that you brought up this idea of memorizing the 50 state capitals because I think maybe we are misunderstood when we both say we. We don’t necessarily think kids should memorize the 50 capitals. That’s not because we don’t love America, believe in America, think that they shouldn’t. I think what we’re both more interested in is literally having them have like a deep story about each of the capitals and really internalizing. I mean, I will tell you, we get to travel a lot. Do you, do you like how I frame that? We get to travel a lot.

And when I travel, I love this country so much. It’s so fascinating. There’s so much.

Michael Horn: It’s so much fun to dive in, right? And take the, like you’re in, you’re in wherever and you go to the Alamo or whatever it is. And like, it’s so much fun.

Deep Learning Over Memorization

Diane Tavenner: It’s so curiosity driven. And so what if young kids didn’t memorize 50 capitals? But what if they went deep on a couple of them, like in a story based way, in an immersive way, and they got the idea of state capitals and what they mean and the importance. They got very cool stories about, you know, a few of them at that age. And then they got a lifetime of like, oh, I could, there’s so many more I can learn. And there’s so many interesting stories about them. And they’re not just a name on the page and, you know, on a flat map, but they’re real places that have real significance and they’re different from each other and because they have such access to knowledge now, if they really need to go look it up, they can go look it up..

Michael Horn: They can do the deep dive. Right? And on the knowledge conversation, I’m a big believer in the importance of a fundamental knowledge base and the depth at which those pieces occur. I think we don’t have a nuanced enough conversation around it.

Diane Tavenner: Right. And I also am okay with it, I’m gonna call it the Swiss cheese of knowledge.

Michael Horn: Yeah, so am I.

Diane Tavenner: That you don’t have. Every fourth grader in America does not need to know the same facts.

Michael Horn: Yeah.

Diane Tavenner: It’s okay if we learn them at different points and different times and that there’s, you know, sort of regional differences around that. I’m much more committed to everyone having a common set of really important skills, at least at a baseline level. And then ideally spike lots of people spiking in the different skills in different places because we need all those.

Michael Horn: But when you say the skills, you’re thinking that it’s been developed through them working in different domains and areas repeatedly in deep dives. Right. And so

Diane Tavenner: Because you need content to practice skills.

Michael Horn: Exactly right. And you create that integration. I think a lot of times in school it goes the other way where like, oh, we learn how to think critically about what.

Diane Tavenner: Exactly.

Michael Horn: And so again, these crosswalks between the extremes, I think, are right. Yeah. Anyway,

Diane Tavenner: Yeah. And so, you know, and this is why we both like a project based environment because it’s the integration of the two and there’s such power in what AI can do now where you can really do personalized learning on, in the content to bring to those, you know, engaging, collaborative, communal type, project based experiences. So I mean, I love what you’re saying in the direction you’re going. It’s very nuanced as you know, it’s.

Michael Horn: We should have some more fun later on and. But I just wanted to float the general idea because I had this moment in our conversation with Alex where I was like, at what level are we thinking about difference and what does stay the same? And I think part of my reflection has been there’s actually a fair amount that stays the same, but how we’ve done it probably changes pretty radically.

Diane Tavenner: Indeed.

We’ve been recording pretty frequently, and I know we’re both feeling a little stretched on thinking about new books and things we’re reading. We’ve maybe exhausted our list, so I thought maybe we’d take a break from that list just for today and replace it with something that will make this episode a little less evergreen. For those who are listening, we’re actually recording this right before the week of Thanksgiving, and I thought I would end with some gratitude.

Michael Horn: Oh, I like it.

Diane Tavenner: So one of the fun moments of yesterday’s engagement with your class, and then the office hours afterwards, was that there were so many young, amazing people, and so many of their questions were very personal: about, you know, how to be a mom and lead, about mentorship, and, you know, my relationship with my husband over the years. And I’m so appreciative that they were thinking about that. And one of the things that came up was just our friendship. And I think you know this, but I am so grateful for our friendship. For me, it is truly one of the big highlights coming out of COVID: the fact that we decided to do this. It gives us time together. It’s just so much fun, and I’m so grateful.

Michael Horn: You know, I’m a crier, so I’m trying not to right now. Thank you. I feel the same way. And it’s one of those things where I feel like, how lucky am I that we get to have this conversation? Even though I moved away from the Bay Area over a decade ago, which is wild, 12 years now. When this comes out, it’ll be after the new year, I think. But I always tell my students, because, as you saw, 55 or so percent are not from the U.S.

I say take the time, because how cool is it to have a day when you get to say thanks? So thank you as well. And thank you all for joining us through the sentimental moment here on Class Disrupted. Keep your questions and curiosity coming. We suspect there’ll be things you disagree with that we said here, and we can’t wait to learn from you. So thank you, as always, and we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

]]>
AI Trailblazer Google Doesn’t Want Schools to ‘Bypass the Human’ /article/ai-trailblazer-google-doesnt-want-schools-to-bypass-the-human/ Mon, 02 Feb 2026 11:30:00 +0000 /?post_type=article&p=1027968 In 1999, the Indian computer scientist and educational theorist Sugata Mitra created a small, if audacious, learning experiment: He and colleagues at the National Institute of Information Technology cut a hole in a street-level wall of their New Delhi office building and mounted an Internet-connected personal computer inside it, usable by anyone who passed by. No instructions, no suggestions, no lesson plans. Just access.

Within hours, Mitra would later write, children from a nearby slum appeared “and glued themselves to the computer.” They learned how to use the mouse, download games and music, play videos and surf the Web, all by teaching themselves.

The experiment in what Mitra called “minimally invasive education” became famous in the ed tech world as evidence that children simply need access to tools to be successful.

Dr. Sugata Mitra in front of his ‘hole in the wall’ experiment.

But don’t mention Mitra too enthusiastically to Ben Gomes, the computer scientist who co-leads Google’s education efforts. While the “hole in the wall” experiment is a hopeful, charming story, he’d say, it’s missing a key element: teachers.

People are fundamental in the learning process. People learn from other people, and people learn because of other people.

Ben Gomes, Google

“We are paying attention to pedagogy, and we’re working with the teachers,” he said. “We’re not saying we just want a thousand flowers to bloom randomly.”

As AI becomes more ubiquitous in schools, Gomes maintains that Google has a duty to train teachers not just how to use its products but also how to help them move students from taking shortcuts to using AI for deeper, often independent learning.

That strategy could dull longstanding complaints that ed tech more broadly is focused on replacing teachers with tools that don’t work.

“It’s a belief backed by science, to a large extent, that people are fundamental in the learning process,” Gomes said, “that people learn from other people, and people learn because of other people.”

Children certainly can and do learn independently, but deep conceptual understanding and literacy require guidance — especially now, nearly three decades after Mitra’s hole in the wall, with many developers looking for ways to replace teachers with AI.

“Teachers are critical in this process,” Gomes said. “We don’t want to bypass the human.”

AI as ‘thought partner’

Gomes and a handful of colleagues recently explored how AI could reverse declining global learning, largely through supporting teachers and turbocharging personalization. In mid-January, Google said it was expanding its work on AI in the classroom, offering its AI-driven Gemini app to more educators and students for free, making new tools available and partnering with Khan Academy to power a writing coach tool.

The search giant has put a former NASA trainer in charge of much of the effort. Julia Wilkowski, a neuroscientist, has also taught sixth-grade math and science. She began her career at an outdoor environmental school, where she recalled hiking trips in which she’d ask students to figure out the velocity of a stream using only an orange, a length of string and a stopwatch.

Wilkowski now spends “pretty much 100% of my time” focused on ensuring that Google’s AI for students rests on sound learning science.

In interviews over the past few weeks, Gomes and Wilkowski spoke openly about their work, in several instances admitting that much of it amounts to helping teachers find ways to get students to stop outsourcing their thinking.

“Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don’t bypass those critical thinking skills,” said Wilkowski.

As an example, she said, she has worked with English teachers to help them instruct students on how to use AI as “a thought partner” in essay writing, not as the writer itself.

These teachers, she said, have succeeded by breaking down essay writing into its component parts and openly discussing its goals. They use AI to help students brainstorm essay topics, refine thesis statements, help generate first drafts and offer feedback on them, giving students “guidance and guardrails” without allowing them to turn in AI-written essays.

The work, stretching back a year and a half, “has really informed my optimism about how AI can be used successfully,” she said.

Guided learning

Both Wilkowski and Gomes spoke often of “guided learning,” saying students learn best when they move beyond simple answers to develop their own ideas and think critically. To get them to do so, teachers must guide them with carefully designed questions.

There's no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors.

Amanda Bickerstaff, AI for Education

Perhaps unsurprisingly, Google has a feature for that: a section of Gemini that acts much like a private tutor or guide, offering students a taste of “productive struggle” that engages but also challenges them without offering answers (at least not immediately). Instead, it steers them toward the answer through a series of questions.

Gomes said the principle is working its way into most of Google’s AI products, including a newer one that uses the technology to help students learn topics in interactive, more appealing ways most textbooks can’t: as a text with quizzes, a narrated slideshow, an audio lesson and a “mind map” that lays out related ideas in connected graphics.

At its root, Gomes said, the dilemma over AI and cheating stems from motivation. “If I look back at my own childhood, there are certainly cases where I was just interested in getting something done for tomorrow,” he said. “And there are other cases where I was curious and I wanted to read more.”

The ratio between how much time students spend in one state vs. the other varies, he said, “but getting more people into the state where they are motivated, I think, is the goal.”

But Amanda Bickerstaff, co-founder and CEO of AI for Education, a training and policy organization, said the reasons students turn to AI are “far more complicated than lack of motivation.”

Students are dealing with “perfectionism, high-stakes assessments that prioritize grades, skill and language gaps,” among other dilemmas. “Framing this primarily as a motivation issue oversimplifies what’s actually happening in classrooms.”

She said Google’s shift toward Socratic reasoning “sounds promising, but there’s a fundamental problem: There’s no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors.”

The chatbots are “sycophantic by nature,” Bickerstaff said, offering answers and completing tasks even when not explicitly asked to. “That’s the opposite of productive struggle.”

And most young people, she said, don’t have sufficient AI literacy to use these tools strategically. “Without that foundation, chatbots become a shortcut for schoolwork rather than a learning tool. You can’t solve that problem through interface design alone.”

More, better feedback

For her part, Wilkowski said much of the struggle over AI comes down to feedback: How much should students get, how often, and what should it look like?

Wilkowski said her daughter is in high school and was required to write an essay for a final exam in December. When Wilkowski spoke to The 74 in early January, she said the essay still hadn’t been graded. 

“I would rather have AI-generated feedback,” she said. “Give the first draft, and then the teacher [can] review it, of course, before giving it to the students.”

Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don't bypass those critical thinking skills.

Julia Wilkowski, Google

More broadly, she said, AI could soon change how students are assessed altogether, helping teachers move away from tools such as multiple-choice tests, whose strengths and limitations are well-known in the testing world: They’re easy to create, administer and grade, and they’re reliable. But they also allow students to guess rather than show understanding, and they encourage students to learn by rote memorization rather than deeper engagement with material.

Multiple-choice tests also can’t evaluate higher-order thinking skills, creativity, student writing or the ability to construct arguments. If AI can make essays or long-form questions or even projects easier to grade, wouldn’t that put the multiple-choice test out of business?

“Let’s say you’re in physics class and you’re studying acceleration-versus-time graphs and you ride your bike home,” Wilkowski said. “An AI tool might pop up and say, ‘Hey, here’s your acceleration-versus-time graph of your bike ride home. What did you notice about your velocity? How did it change as you changed acceleration? Was there a hill that you had to overcome?’” 

More relevant assignments and assessments, she said, could get students to think more critically, incorporating school into their real lives in deeper ways. “It goes back to the heart of what excited me as a teacher: those exciting, hands-on lessons. I’m seeing a way that … AI can facilitate those in the future.”

AI for Education’s Bickerstaff said it’s encouraging to see Google working to create more “fit-for-purpose tools” for student use. 

“The education sector desperately needs companies to move beyond general-purpose chatbots and build tools that actually support cognitive work rather than replace it,” she said. “But there’s still a lot of work to do — and a lot of research that needs to happen — before we can know if these tools are effective learning guides.”

]]>