education technology – The 74, America's Education News Source | Thu, 05 Feb 2026

Reflections on Whether AI is Actually Changing Schools — and Where

Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing.

In this episode, Michael Horn and Diane Tavenner step away from their interviews to reflect one-on-one at the midpoint of their season on artificial intelligence in education. Diving into its evolving role in the classroom, they ask whether AI is truly transforming the system or simply being layered onto outdated structures. They explore a framework of three school models and discuss the challenges of meaningful innovation amid existing accountability systems and education policies. From these models, Horn and Tavenner analyze how one might expect transformational change to occur in K–12 schooling — through traditional schools incrementally changing and evolving over time or, as they argue, through fundamental migration away from the existing system.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good that you came to Boston, in the freezing cold weather no less, to hang out a little bit with me here and have a conversation.

Diane Tavenner: It’s really fun to be in person. We haven’t done this for a long time and the timing worked out perfectly because we are in the midst of this super interesting season where we’re exploring AI and education. And we’ve had several touch points where I’m like, oh, my gosh, there’s so many things that are coming up for me that I want to talk with you about. And so we get to have a conversation, the two of us, this morning.

Michael Horn: I am looking forward to it. And I’m sure you’re going to say things. I’m going to say, wait a minute, I think I know what you mean, but double click on that. Tell us more. And so I’m excited to go deep on wherever you want to go because the conversations, they’ve both been illuminating, but they brought up more questions for me, as seems to be constantly the case with this topic.

AI Disrupting Education Processes

Diane Tavenner: Indeed. Indeed. Okay, well, let’s dive in. I had the great pleasure of spending time with you in your class yesterday. Thank you again, so much fun. And one of the topics that came up, an idea that turned out to be more provocative than I anticipated, is this: a phrase I read almost constantly right now and hear everywhere is that AI is changing education.

And I don’t believe that phrase is true or accurate. In fact, I believe AI is not changing education. And so I want to dig into that idea a little bit. You know, I would argue that it’s creating a lot of problems for folks in education who are sort of in the traditional model of schools. But I don’t think it’s changing education yet. What do you think about that?

Michael Horn: I largely agree. So I’ve been thinking about this, but on a different wavelength, because of what I’ve been seeing over on X and from the various pundits. There’s a lot of conversation right now about banning cell phones in schools, as you know, and not just cell phones but screens, period: you know, Google Classroom, all the rest, because it creates access to all these other things. Ban it all, sort of thing. And then you see the occasional commentator saying, did anyone ever believe otherwise at this point?

Diane Tavenner: Right.

Michael Horn: And I had this moment because I think I’m seen often as the tech guy in education. But if you read Disrupting Class, what we actually say is that just layering tech over the existing system is not going to do anything.

Diane Tavenner: Right. I think we’re going to get to that idea in a moment.

Michael Horn: So I guess my instinct is, I agree with you. I think we’re layering a lot of AI over existing processes, and it’s breaking, frankly, a lot of education. So the one push I might have on you is that it may be creating the impetus to ask some bigger questions. And I’m not going down the road of, just because the world is AI, therefore this should be AI. But legitimately, you know, we have current assignments that you can now hack through AI. That’s called cheating. And all of a sudden everyone goes into a tailspin.

Well, let’s ask some questions about the assignments and the work itself, is sort of my take from that. So I think it might be an interesting push. But I agree: most of what AI is doing right now is layering over existing processes. Some of them, I suspect, it’s making more efficient. Great. And some of them, I think, it’s exacerbating problems that already existed. Is that what you have in mind?

Diane Tavenner: That is what I have in mind. And you brought up one of the biggest conversations, which is about cheating. Right now we’re seeing all these distortions and strange behaviors, and blue books returning. And I’m sure the company that makes those is happy about that. But, you know, they might be, they’re…

Michael Horn: Still around or they have to resuscitate. We should look that up.

Diane Tavenner: Yeah. When I think about what’s happening with this idea, everyone knows that they’re supposed to have an AI policy and strategy now, but most people don’t. And so this is confusing. AI in education right now is very kind of one-off: individual people pulling it in, and so it’s not coherent, it’s not a strategy. We see it in lesson planning and assignment making, which is related, to your point, to why are we even teaching what we’re teaching? And if you can cheat on it, then what are we trying to do? And then it goes down the line to a lot of fear that I think it’s injecting into everything, from these very high-profile cases we’re seeing of suicide potentially induced by the AI, to big, widespread data privacy concerns. All of that to say, I’m hopeful. I believe the technology itself, if deployed, can actually change education. But I think humans are going to have to do that redesign and that deployment in a really strategic, thoughtful way for it to change.

Otherwise, I just think it’s plaguing us with problems.

Michael Horn: Yeah, I think that’s right. And systems, structures, models and processes matter, and, you know, AI is sort of automating or playing off the existing ones. We may have a small disagreement on one thing, and I’m curious about this. We don’t have many disagreements, so I’m gonna lean in if we do. So, the blue book comment aside, I can imagine that there are things we want to do in the classroom that have no AI at all involved with them, because some foundational knowledge or skill that a student can hack using AI outside the classroom is something they actually should still work on in an analog way to create automaticity.

Diane Tavenner: OK.

Michael Horn: I don’t know if that’s blue books or what form factor; I’ll take the point there. But I suspect if we break things down, there are still some foundational things we would want students to have to wrestle with that might not involve AI and might be offline, if that makes sense. And then my take would be: okay, but don’t stop there. Now what are we going to create with AI, as opposed to consume with AI?

Diane Tavenner: I think that’s right. I really loved the conversation we just had with Laurence, where he brought up some really interesting examples, to your point, of young people literally working together and in dialogue, and then he talked about how AI could be supportive and enhance that. But to your point, the actual skill of having that conversation with another human is not about AI, so I completely agree with that. My concern is when people are taking, you know, very old assignments and…

Michael Horn: And just dusting them off without any thought. Yeah. And I also think this changes the older you go. I could be wrong about this, and this is, I’m sure, overly simplistic, but I’ve got kiddos still in elementary school, so I’m still thinking a lot about that. I do think that part of the landscape looks different from the older student in high school and college, where it’s more problematic, perhaps, when you’re just dusting off that assignment for that student.

Diane Tavenner: Right.

Michael Horn: But I do think, you know, developing number sense and automaticity with those things offline before you introduce the calculator and AI and so forth makes a heck of a lot of sense for a younger student. And so, as always with these conversations in education, I think we sort of make a statement and think it applies everywhere, and there is nuance there.

Clarifying AI’s Role in Education

Diane Tavenner: That’s exactly where I’d like to go next, because I think the dialogue around AI and education is complicated right now. I hear a lot of people talking past each other and over each other because we’re using these very broad, sweeping general terms. So, for example, “AI in education.” I was with a really great group of people a couple weeks ago, and fortunately some really smart people noticed this talking past and talking over and called it out. And literally we went around the room asking, what do you mean by AI in education? And just within seconds we surfaced: using LLMs like GPT and Claude and Gemini for instructional or operational support; using AI-powered education apps like Khanmigo, ClassDojo, Magic School; AI policy development; AI literacy lessons for students. People are literally using the phrases “AI strategy” and “AI in education” to mean all those things and more. And I’m finding that it’s very complicated to try to have meaningful dialogue when there isn’t a definition, when we don’t have specificity yet.

I mean, I think some people don’t even know what AI is.

Michael Horn: Yeah, you’re probably right.

Diane Tavenner: Yeah, yeah.

Michael Horn: And there’s probably extreme fear in those quarters. And the social media analogy is rampant right now as a result, probably because we’re not defining or breaking things down. I mean, do you really not want AI to help an administrator better communicate or schedule? Really? That seems crazy, for example, on that end of it.

Diane Tavenner: And my sense is that what jumps to most people’s minds when they think about AI in education, and we’ve sort of railed against this from the beginning, is literally how a student is engaging with it, either in the classroom or at home. And most people have in their mind some version of a chatbot, generally speaking, which is incredibly narrow and limited, I think. You just gave a good example: we could literally never bring it directly into the classroom with students, and there would still be a million different uses for it in just running something as complicated as a school and a school system. So, yeah, I guess this is just my plea for us collectively to start developing a more specific vocabulary, more intentionality about what we mean. Let’s stop saying we’re doing AI.

Oh my gosh, everyone’s doing AI. What does that mean? And being really specific about it. And I think for me, I just want to flag as we go through the rest of this season because we’re going to have some really interesting conversations next. I’m going to push us to be really specific about what people are literally doing with AI. What does that mean?

Michael Horn: Yeah, and the conversation with Laurence, I think, opened us up to that because it started to talk about very specific use cases. It occurs to me this problem has always existed in education since I’ve been in the field. Right? We talk past each other. I remember, you know, there are project-based learning adherents, to an extreme degree, and they’ll say everything ought to be learned through projects. And then you say, well, okay, what about the kid learning to read in first grade? And they’re like, oh no, no, no, that kid should get phonics and direct instruction and blah, blah, blah. And you’re like, okay, so there’s nuance, but we have to break apart novice versus expert.

What’s the topic? What’s the goal? Right. And skill versus knowledge, as you know, that gets conflated all the time. And we don’t have precision. So I think it’s a good plea you’re making: let’s be more specific. What’s the objective? What’s the learner coming in with, if that’s the level at which we’re talking?

Diane Tavenner: OK, all right.

Michael Horn: Where are we going next?

Diane Tavenner: To one of my favorite topics, which is school models.

Michael Horn: Okay. Yep.

Diane Tavenner: So I’ve been reflecting on a number of conversations I’ve been having, a bunch of stuff I’ve been reading, dialogue that I know is happening. There’s a variety of people trying to think about the future and what it looks like with AI. I think none of these are set yet; they’re all kind of rough, but they’re starting to fall into a pattern where people are talking about three different models, if you will, of schools. And I want to come back to what a model is in a moment.

But the idea is this. I think generally people agree that we have an industrial-model school at this point, and have had for quite a long time; we’ve talked about this ad nauseam. So let’s call Model 1 the current industrial model. With the emergence of AI, Model 1 sort of stays the industrial model, but AI gets used in some of the ways we just talked about. You know, you keep all your existing structures of grade levels and schedules and teaching roles, but you have AI-enabled tools helping you grade student work, or you’re using them to lesson plan and instructionally plan, or you’re doing some adaptive practice and feedback.

You know, I think that’s the stuff people are probably more familiar with because they see it. So that’s kind of Model 1, still in the industrial world. I’m going to jump to Model 3 before I talk about 2, because 2 confuses me a little bit. So Model 3, let’s call that native AI education. I think most people I know would argue that this has not been invented yet. It doesn’t exist yet as a model.

Michael Horn: Do we know what it means?

Diane Tavenner: I think the way people have started to describe it, I’m not sure that I agree with. And so here’s where I am on this one: I don’t think we know what it looks like yet. I think we’re failing in our imagination right now of what’s possible. I think it’s a moment to go into the proverbial garage and do some real designing. But let’s call that the post-industrial model. I don’t like to call it the AI model because of the definitional problems we just discussed, but let’s just call it whatever the next school model, the full model, would be.

Michael Horn: OK.

Diane Tavenner: So then there’s Model 2, and this one gets kind of squeezed in the middle. I think some people are calling it AI-integrated education. And basically, the emerging definition I’ve heard is that it’s where you modify selected structures where the benefits justify the disruption. So, for example, you have much more interdisciplinary curriculum, you have competency-based progression in certain places, you have flexibility in existing schedules, in blocks or things like that. You might start seeing some of the time out of the building. But you’re still, I would argue, existing in the industrial-model kind of box, if you will, while using an integrated AI approach to kind of hack some of those things.

OK, yeah, so let me pause there before I start asking my question. See if like those resonate if you’ve heard about them, you know.

Michael Horn: Yeah, no, I haven’t thought about it this way, so I’m noodling as you’re saying it; this is real time. I guess I’m curious about models like a Montessori, like a classical education or the new versions of classical education we’re seeing in microschools. Or, you know, I don’t think Waldorf fits into your typology, but where would you slot those? Those are models too.

Diane Tavenner: They are.

Michael Horn: How do they slot into the schematic?

Diane Tavenner: Yeah. Well, let’s just take Montessori as an example. Right. In some ways it’s still industrial. Most Montessori schools still exist Monday through Friday, kind of between 8 and 3-ish. They still have a teacher, you know, one-to-kind-of-many classes. They’ve sort of released or relaxed age grade bands, although I think society kind of imposes those on them. So, you know, there’s some sort of gravitational…

Michael Horn: I mean, you know my frustrations.

Diane Tavenner: I do know your frustrations. So I still think Montessori, maybe Montessori would be kind of a two.

Competency-Based Learning

Michael Horn: That’s what I was wondering: it’s not AI-enabled, but it uses the technology of the 1910s, or whenever it was, to have broken out of certain structures. And so it’s a very competency-based math sequence, very competency-based on the learning-to-read part of it, and probably less so on everything else, is your point. And there’s still some sort of, you were born in the year of the Scorpion, and therefore you’re going to learn this on this date with everyone else sort of element to it, I think is what you’re saying.

Diane Tavenner: I think that’s right. And one of the reasons I wanted to talk to you about this kind of framing is I’ve been trying to think about what sits in the Model 2 category. Okay. I mean, it feels very easy for me to identify almost every school as a Model 1, and many of them are starting to bring in these AI tools, if you will.

Michael Horn: Yeah.

Diane Tavenner: But they’re still clearly industrial models. It’s pretty easy for me to say I don’t think we’ve seen a model 3 yet with the infusion of AI. And then I think about like for example, what we did at Summit and Summit learning.

Michael Horn: Yeah.

Diane Tavenner: I think at the high school level that might be a model 2 without AI yet.

Michael Horn: Right.

Diane Tavenner: Where, again, we were sort of pushing the boundaries of that industrial framework of a model to try to reimagine or re-engineer portions of what was happening. With expeditions, for example, which kind of breaks the traditional five- or six-period day but doesn’t really break the calendar, if you will, or the eight-to-three kind of situation. So what do you think about that?

Michael Horn: That’s interesting. So I know we could probably geek out all day and create a taxonomy, so I won’t do that to our listeners. But I am thinking you’ve seen almost different shots on goal. I think of Florida Virtual School as an example. And I’m reading Julie Young’s draft memoir right now (I’m not sure I’m supposed to say this), and it breaks certain elements of that, but it’s still course-based.

Diane Tavenner: Right, right. There you go.

Michael Horn: So the two things are interesting. And then I start to wonder: everyone’s talking about Alpha Schools. We’re gonna have an episode on it, so stay tuned; maybe we don’t get into it here. But things like that, where do they slot into your framework? Or I think about Acton Academy, which probably falls into 2, is my guess. And so this is, I guess, what I’m trying to start to sort through as you frame this.

Diane Tavenner: It’s why I wanted to bring it up today because we are about to shift to start talking with people who are either trying to redesign whole models or portions of it. And I think it will be helpful for us, for me for sure, to have this kind of framing in my mind.

Michael Horn: So you can say, pull it back: we’re talking with an entrepreneur. Okay. You’re working in the Model 1 context. You’re working in 2, or 3, maybe the frontier there.

Diane Tavenner: Exactly.

Michael Horn: OK.

AI Tools

Diane Tavenner: And I think there are a couple of reasons why this is important. The first is back to that talking past and over each other. One of the things I noticed is there are a lot of people who are gravitating to the AI-enabled tools that will definitely improve Model 1, the industrial model, if you will. And they’re very passionate about that. They have really strong arguments, like: there are kids in schools today who need things to be better, and so we should be deploying these tools as best we can to do that. Then there’s a whole other group of people, smaller, who are obsessing about designing Model 3, a post-industrial model. I don’t think anyone who’s been listening will be confused about where my passions and interests lie.

So I’m definitely, you know, my attention goes to this question, and my energy is in that direction. And I really caught myself, because I can be dismissive of that first group, and I think that is really problematic for me to do. Well, here’s my question.

Michael Horn: Yeah.

Diane Tavenner: Do you think, if those models are true in the way we’ve sort of laid them out, that the theory of action or change is that you progress from 1 to 2 to 3? Because some people believe that.

Michael Horn: I strongly don’t think so.

Diane Tavenner: I don’t either. Okay, good. Say more because you’re the expert.

Michael Horn: Yeah, no, well, so my energy is also in three, as you know, and no one listening will be confused about that. But I think it is prudent from a systems perspective, thinking about the country, that 80% of the dollars and energy are going into number one. That, from a sound strategy perspective, makes a ton of sense. Right? It’s where most of the students are.

It’s like classic sustaining innovation. If I’m running a company and I see the new thing coming that I think is going to upset the apple cart, I don’t push stop on what we’re doing today.

Diane Tavenner: Right.

Michael Horn: I start to test and learn, what we talked about, on the fringes. And then I start to move things out there. Okay. So that’s where I go to the statement that I don’t see any cases where number one morphs into number three, or where we learn stuff from number three and, as a guest in the class asked, pull it back into number one. I’ve never seen that work. What you see is that number three replaces number one.

Diane Tavenner: So then it has to be effectively designed from scratch, grown from scratch. It’s not, you know, evolving. No. Okay. Well, some people think it’s gonna.

Michael Horn: No, I know. And I think it’s totally rational to be placing bets and have a portfolio strategy across all three buckets. And I think you can learn lessons between them, absolutely. I mean, we know a lot about cognitive science from number one. We also don’t know a lot, I think, because, take growth mindset, for example.

Right. My read of the literature is that it’s incredibly powerful, and if anything in the environment undermines the message of growth mindset, it pulls the kid back into the fixed-mindset view and undermines all of that intervention. And basically every structure in number one does that.

Diane Tavenner: Right.

Michael Horn: So we can have our lesson on growth mindset. I don’t think that’s the best way to do it, but we can have our lesson on growth mindset, and we might see a temporary bump on some sort of assessment. And then, immediately, you get the C grade in the class, and you’ve been labeled because you can’t take the feedback and do anything with it. You’re not even reading the feedback, and you no longer think that way.

Diane Tavenner: Yeah, well, and this is the point of growth mindset not being permanent. You don’t either have one or not.

Michael Horn: Right.

Diane Tavenner: It’s a continuous state that you’re in, and you can fluctuate in and out of that state regularly. Okay. Well, that’s an interesting conversation to have with folks who believe that the theory of change is that progression, versus what we just said.

Michael Horn: And I guess stay with it one more second, because I remember when we came out with Disrupting Class, a lot of people would push us and say, well, we’re talking about systems change; what are you talking about? And I think we were talking about systems change too. But my theory of systems change is system replacement.

Diane Tavenner: Well, there you go.

Michael Horn: And I think it’s really hard in the US for all the reasons we know. And one of the reasons I’m in some ways more optimistic than I have been is I actually see a path for that change, that replace or disruption of systems that I haven’t seen because.

Diane Tavenner: The technology is so.

Michael Horn: Well, and the ESA policies.

Diane Tavenner: Oh, and ESAs.

Customized Education Choices Rising

Michael Horn: Right. And so we see a level of entrepreneurship, of choice, and I would argue now a family, increasingly, if you’re in Arizona, Florida, Arkansas, wherever, faces a decision that is not just the free public school or I pay money. It’s: oh, if I just default to the free public school, I’m actually foregoing $8,000 to $13,000 that I could be spending on my kid’s education in a way that’s customized for what they need and what they have shown interest in, et cetera. That’s a very different decision set now, where all of a sudden it’s actually expensive to default to the free option.

Diane Tavenner: Well, and to your point, it might take a little bit of time, but it really changes people’s, you know, mindsets around everything.

Michael Horn: And I was shocked. I have to look deeper into this, but Ron Matus at Step Up For Students in Florida sent me this report they did. He said the number of learners in Florida who are now doing a la carte learning, so they don’t have a primary school five days a week, amounts to a billion-dollar market. And I was like, I have to sit with that.

Right. Still, I haven’t fully digested it, because that seems like a lot. But basically, if that’s true, over the course of a decade or so, whatever the choice landscape in Florida has been, people went from, okay, I have education savings accounts, I choose a school.

Diane Tavenner: Right.

Michael Horn: To your point, with technology and a lot of entrepreneurship and a change in the landscape, to all of a sudden saying, I can unbundle and do a whole set of things with this. That’s faster than I would have expected.

Diane Tavenner: That is faster. Oh, I’d be so curious.

Michael Horn: I want to dig in all sorts of things now.

Diane Tavenner: Let’s do that at some point. Well, and what it suggests is that individual families are essentially crafting their own personal model. Now is it AI native?

Michael Horn: Probably not.

Diane Tavenner: Probably not yet. But I bet they’re starting to use some of, you know, the AI enabled tools as part of that. Yeah.

Michael Horn: And they’re probably also making some of these trade-offs in terms of when it’s analog, because they control the home environment, and when AI is a tool to create something. They’re probably making a bunch of these nuanced choices on the ground that you couldn’t dictate from a central-planning, curriculum-standards perspective.

Diane Tavenner: Right. Although that might be a feature of whatever the new Model 3 is. I mean, my hope is that it is personalized to that degree within the context.

Michael Horn: Yeah, great point.

Diane Tavenner: Yeah.

Michael Horn: And so now we’ve just blown both of our minds.

Diane Tavenner: I want to go back to Model 2 for a minute, because I had this really fascinating conversation with your former colleague and collaborator Julia Freeland Fisher. And she said, huh, I wonder if this Model 2 is akin to what happened when the steam-powered ship was invented, and there was this period of time when the new steam-powered ships had to be outfitted with sails because the new technology was so unreliable. She suggested that maybe Model 2 was that. And the interesting point she made is that those were the most expensive models, because you had to have both technologies on them. This hybrid version is really expensive. So, what do you think of that?

Michael Horn: 100%, I agree. I hadn’t immediately framed it into that typology, but that’s almost every industry: when you see disruption, you see the old players take on the new technology. There’s sort of a line that, oh, they ignore the new technology. Not true. They layer it on the existing structure. And the sailing ships are the perfect example.

I think the first steamships to navigate in the US were around 1803, and then in 1819 came the first transatlantic steamship, the SS Savannah. And it had sails, and it had steam bolted on. And, I’m going to get the numbers wrong, but I think only like 80 hours out of the 600 or whatever it took to cross were powered by steam. Basically, every time the wind went the wrong way, they fired it up and kept going. Right. And so it’s a classic sustaining innovation on the old paradigm.

Diane Tavenner: OK. But still, those models do not get us to Model 3.

Michael Horn: They don’t. Yeah. You know, the story is that it was a 100-year disruption.

Diane Tavenner: Yeah.

Michael Horn: Where, still, ultimately the steamship-native companies, the shipbuilders, upended the sailing ship. And it was around 1900, I think.

Diane Tavenner: And it’s a different model ship.

Michael Horn: It’s a completely different model. Right. You don’t have the same components; you can do things differently in terms of construction because you’re not building around an aerodynamic sail. A totally different set of things you can do.

Diane Tavenner: OK, I have a question. Now, you said you felt comfortable with the field spending 80% of its resources on Model 1 improvements, leveraging AI. Is there a risk that we over-invest in Model 1 and undermine the emergence of Model 3, because we kind of keep this old industrial model going, breathe new life into it, and there isn’t a sense of urgency around creating Model 3?

Michael Horn: Two thoughts. Clay used to always say this: the best, deepest experts in a field (you’re a very strange anomaly) are almost always consumed with the toughest problems in, we’re going to call it, Model 1, at the edge of the existing paradigm.

Diane Tavenner: Interesting.

Innovation Beyond Traditional Expertise

Michael Horn: And it’s these people who are almost less expert in some way, or who for some reason have taken their expertise and brought it out there, that invent the future. But it’s very hard to persuade the people who are dealing with the hardest, most intractable problems in the first paradigm to design out there. It’s why I think, you know, when you and I met for the first time and you actually liked Disrupting Class, that was a bit of a revelation, because we couldn’t get all these people to actually engage with it. Right. Or they thought they were engaging with it but were missing the point. Right. And so I don’t know where that goes.

Except, like, in some ways, I’m not surprised that that’s the current moment we’re in. I think the danger is if those individuals then block off our avenues to pursue three, I’m okay with them being consumed with one. I think it’s great. There are a lot of underserved kids there that need better education. And I think if they use that as a justification to block off three, through policy change, through blocking entrepreneurship, through blocking families making these choices, that would be deeply concerning.

Diane Tavenner: So glad we’re having this conversation. There’s two places where I have fear about that and.

Michael Horn: Well, you’ve lived it.

Diane Tavenner: I did, yes. Continue to, it’s my life. And there’s two places that I just want to raise here. And at the risk of how, you know, these are sort of controversial and they’re very nuanced. I often am misunderstood, so I don’t talk about them out loud very often.

Michael Horn: But thanks for doing it here.

Diane Tavenner: Here we go. So the first is the big assessment and accountability system. And you know that my belief is that that structure, which is well intended and people are deeply passionate and invested in making sure that we have real data and know what’s going on. I just spent time with a parent advocate who’s like, those tests are the only receipts we have of what’s happening with our kids. Right.

Michael Horn: There’s a great article recently around how people are just shocked because the tests have gone away and they’ve been relying on grades, which are even more worthless measures. Yeah.

Diane Tavenner: Right. And so there’s a lot of energy going to: how do we bring those back? How do we reestablish them? And my belief, and my lived experience, which most people who believe in them don’t like hearing, is that the existence of that accountability structure truly, deeply dampened innovation and the move toward what would now be Model 3. And I’m super uninterested in hearing about waivers and all these things. No, it really has an impact.

Michael Horn: Let’s get into how, because I’ve moved toward you a lot on this one. But from one standpoint, it’s like, well, it’s just focused on outcomes, which frees up the inputs. You get there however you want. So how does it actually restrict the innovation? And why is that a bad thing?

Diane Tavenner: Yeah, I think that it’s. Well, let me share a quote that I hear very often.

Michael Horn: OK.

Diane Tavenner: Which is, look, I’m not opposed to measuring different things but we don’t have those measurements yet. And so until we do, give me reading and math. And you know, I’m going to judge schools on reading and math, basically, which is effectively what we test in this country. And first of all, I think the problem is we actually do have those other assessments and they are crowded out. They aren’t accepted as, you know, mainstream, valid, reliable. No one is moving towards adopting them because it’s all about reading and math. And so I think it is really, you know, you measure what you value, you value what you measure. And there isn’t.

The system is not saying: no, it’s completely unacceptable that we’re literally measuring our entire system on these two subjects. Which are important. Yeah, very important, please do not misinterpret me. People always accuse you: oh, you don’t want kids to read.

Michael Horn: Well, by the way. But I’m curious what you think of this. This is a classic case where I think defining the age span is important, because I am strongly in favor of not losing the measures to families. Note how I said it, by the way: measures to families on, can your kid learn how to read, get those skills through, hopefully, third grade. But I’m actually willing to live with some variance in the age.

Michael Horn: All the reading tests after that are really knowledge tests.

Diane Tavenner: Correct.

Michael Horn: And so I would be much more comfortable, frankly, with every school, or every student, picking: hey, you just did a deep dive on X. Go show your competency in X. I think that’d be much more interesting. It’d be super jagged, students showing all sorts of deep dives on a variety of things and so forth. Math, I think, is a little different.

Diane Tavenner: Yes.

Michael Horn: And I don’t know where it stops. Probably around algebra, but. Yeah.

Diane Tavenner: Well, you just said a key point that really bothers me the most, which is that the accountability and testing framework we’ve had in this country is not about informing parents. It’s not actionable data. It’s not timely data. It’s not what we would call real feedback: honest, actionable, timely data.

Michael Horn: No. And in fact, it’s negative reinforcement cycles.

Diane Tavenner: Exactly. And so let’s just take reading as an example. The oldest assessment technology is a running record. I mean, schools could literally choose to assess every single kid that way and put resources toward that. It might not even take that many more minutes than they already spend on state tests.

Michael Horn: By the way, AI can really do that now.

Diane Tavenner: Well, and I’m not even getting into…

Michael Horn: What technology can do.

Diane Tavenner: So why, why these old assessments. Right. And so anyway, I’m deeply concerned that there’s so much good intent there and so much potential.

Michael Horn: But you’re arguing that it’s crowding out a ton of these other measures that either are there or could be developed more robustly.

Diane Tavenner: Right. And in the same way that I can be sort of dismissive of efforts around Model 1, I think a lot of folks focused on today and now, on kids in school, are very hand-wavy and very dismissive of the impact this has on the potential for innovation. So I’m, you know,

Michael Horn: Super interesting. Yeah. Okay.

Diane Tavenner: The second one is

Michael Horn: You’re taking a breath, you’re giving me a look, for those who can’t see. We’re not on video this time.

Diane Tavenner: No, we’re not.

Michael Horn: Yeah, go ahead. Where are you going?

Diane Tavenner: Special education.

Michael Horn: Oh, okay.

Diane Tavenner: And I want to say up front, my belief is, are we, by the.

Michael Horn: Are we at the 50th anniversary of special ed, of IDEA, at the federal level?

Diane Tavenner: We might be.

Michael Horn: I think we are, yeah.

Reimagining Education for Every Child

Diane Tavenner: Okay. Yeah. The intention is right. There are so many amazing people working on behalf of kids here. But most people who’ve spent as much time in schools and with families as I have know it’s a system that is about compliance more than it is about children. I don’t believe it gets young people what they need. And I think that has a really challenging impact on our ability to educate all of our children. And this is, in my view, one of the biggest promises of a post-industrial model: that truly every child gets a personalized education.

Michael Horn: Because everyone’s now getting an ILP as a given. Exactly right.

Diane Tavenner: Exactly, exactly. And my worry is that in both the assessment case and special education, that new models, model threes, will be judged and held accountable to the current accountability systems and the law, which completely compromises their ability to design completely new and better approaches.

Michael Horn: Yeah. And my colleague, or I guess former colleague, at the Christensen Institute, Tom Arnett, has written a lot about this one: about how when you apply the standards that were built for the old system to the new one, you hamstring and often stunt it completely. I think that’s very fair. My pushback historically has been: yeah, but the existing system is all input-driven, and then it has outcomes layered over. If we strip out the inputs, which, by the way, people are trying to put back on for the attempts at Model 3 right now as well. Right. Like accreditation, really.

Michael Horn: I think you’re pointing out that even though these output measures (I don’t even think they’re outcome measures) have been layered on, I do see where they could pull Model 3 back in some unfortunate ways for design. And those, to me, are where the fears really are. It’s less the effort question in dollars and more: are we hamstringing it to just look like a slightly modified version of the thing we already have?

Diane Tavenner: Right. I’ve certainly learned from you how disruption happens: people take it outside of the existing system. They have different expectations. They look at it fundamentally differently. And so maybe this is the importance of ESAs, education savings accounts. And I mean, as a person deeply invested in public schools in America, I would be very sad if we push all the innovation out into the private sector because we can’t welcome it into the public sector.

Michael Horn: Yeah.

Diane Tavenner: And maybe that’s what we’re gonna see.

Michael Horn: Yeah. I’ve always felt like the public officials ought to be responsible not for the institutions, but for the constituents. Right. And so the models may change. And by the way, look, in Florida, you have districts now launching their own microschools and creating certain services a la carte. And like, like they’re spinning off autonomously. Let’s see where it goes.

Michael Horn: Right. I mean, I don’t think we know the final thing yet. And the conversation I was having with one of my students yesterday was, you know, no one’s cracked it yet, I think, in these. They’re not really Model 3 attempts because they’re not AI native, but let’s just call it this sort of emerging ecosystem. We haven’t seen a lot of high school models.

Diane Tavenner: Nope.

Michael Horn: And I think part of it is because disruption starts primitive, able to solve simple problems, not the most complex. Identity formation becomes much more important in high school. Right. And all these rituals that we may roll our eyes at, around Friday Night Lights or prom or whatever else, are part of this identity formation, of asking, who am I in relation to others? And I think, you know, Tyler Thigpen at the Forest School, an Acton Academy affiliate, has done a good job of creating rituals, but most high school attempts have not yet built that. And so I kind of wonder: is the move upmarket, if you will, about solving for all of those things with very different traditions that don’t look like Friday Night Lights but are actually more meaningful for the current moment around identity formation?

Diane Tavenner: Totally. Well, and now you’re getting at the heart of what I’m trying to contribute to with Futre, which is how do we support some of that positive identity formation and search for who I am and the life I want to lead, both in the digital world and then connect that to real world experience.

Michael Horn: Well, I think it’s interesting though, that your market is the traditional industrial Model one, largely. And so I’m, I mean, I’m curious how you think about that.

Diane Tavenner: I’m living in a bipolar world. Yeah,

Michael Horn: … yeah, yeah. Okay. Well, you’ve built it with a modular interface, as I understand it. Right. So it can exist in both worlds, I think, is part of your answer. And I imagine you’d say a native Model 3 would actually answer a lot of the future questions as part of the design of the model itself.

Building Towards Model 3 Framework

Diane Tavenner: I think so. And I do think, you know, yes, I hope that what we’re building can live in both worlds and is one of the early ideas or components of what a Model 3 will look like. And I certainly will be engaging with folks on pushing that area, so hopefully we’ll talk more about that. I think where this is all leading for me is the next part of our season. So we’re gonna talk to a bunch of different people, and in the back of my mind I’m gonna be thinking: all right, where do you sit in this imperfect framework, this developing frame? Where is your effort sitting in it? Are you literally a whole-school model? Are you an element of a model? Are you, you know, an AI-enabled tool? Are you really trying to push the boundaries of designing for Model 3? Are you an interesting Model 2? And what do those look like? So.

Michael Horn: Yeah, well, and that’ll be interesting because I think as I look at the guests ahead, we have a lot of folks in Model 1 who are working with that system. And I’ve been wondering, given the hypothesis that we have fleshed out over the last couple of seasons of AI, like how that fits with the things that we’re interested in. And this is good. I think we’ve given a good framework on the importance, frankly, of all three of those elements and the work that they need to be doing and the dangers of crossing over perhaps, assumptions from the worlds across the different models.

Diane Tavenner: Perhaps. Awesome.

Michael Horn: This got interesting. A little spicy.

Diane Tavenner: A little bit spicy. Well, super useful for me and helpful for me to think about things. Any last things on your mind?

Michael Horn: I have one last thing. Hopefully we won’t get cut out of the studio, which is, I thought a lot about what is the world into which people are going and how does that map back to what is still core and what is not core and so forth. And I just want to float an idea by you and have you attack it.

Diane Tavenner: Great.

Michael Horn: The reflection I’ve had is, we know there’s a considerable amount of cognitive science that suggests we learn best through stories, through narrative arc, and we don’t actually deliver most learning or offer learning opportunities that way. And so I guess I’ve been wondering, as we think this through. You know, we had the back and forth of, do they need to memorize state capitals? And we both said, probably not. But I do think they should know that there’s such a thing as a state capital. And so my thought is almost like Montessori has, I’m gonna mess it up, the Great Lessons or something like that. Right. And it’s a narrative arc. But I can almost imagine narrative, interactive arcs where you’re sort of, okay, how did the country’s governance evolve over time? And these thin layers that would build a lot of common reservoir of knowledge. And I think I’m largely talking K-5, maybe K-8, that that could be a big part.

And like in, in the various disciplines, if you will. Right. Civics, a variety of deep dives in history, et cetera, et cetera, science. I think it should be active. I think it should be multimodal. It’s not clear to me. It’s the teacher delivering the story.

Diane Tavenner: Say what you mean by multimodal, because a lot of people are using that term and I don’t think many people know what it means.

Michael Horn: Yeah, yeah. So I guess I see it as, you can imagine some of these lessons being video-based through an AI. You can imagine auditory sound. Right. You can imagine interactive, where you’re actually answering questions both verbally and in writing as you’re working through something. You can imagine, like the state capital one: you have a lesson around how state capitals evolved in state government.

Diane Tavenner: I mean, it could be VR, like literally immersive.

Michael Horn: Right, exactly. And then you could almost imagine that you pop out and, like, my kids still draw maps. I actually think that’s really valuable. But I don’t think they then have to drill memorizing every feature. Yet they don’t know what question to ask Gemini or ChatGPT without sort of that thin knowledge base. Right. And that’s sort of where I’m wondering whether we evolve to something like that, something that recognizes the importance of some knowledge.

Diane Tavenner: Yes.

Michael Horn: We could have mastery assessments where we thought it was really important.

Diane Tavenner: Yes.

Michael Horn: We don’t have to have it for everything, frankly, it’s just exposure is probably good enough, especially if it’s interactive. I don’t know. What do you think of that idea? What are the flaws? And sorry. And then creating the space then for like, hey, you’re interested in this? Okay, here’s your project. Go deep, right? Like, and that’s where the deep explorations of learning how to learn and developing the skills would really be.

Diane Tavenner: This feels very fun to me to think about this. And these are the types of thoughts I’m constantly playing with and that I think should influence the design of Model 3. I love that you brought up this idea of memorizing the 50 state capitals because I think maybe we are misunderstood when we both say we. We don’t necessarily think kids should memorize the 50 capitals. That’s not because we don’t love America, believe in America, think that they shouldn’t. I think what we’re both more interested in is literally having them have like a deep story about each of the capitals and really internalizing. I mean, I will tell you, we get to travel a lot. Do you, do you like how I frame that? We get to travel a lot.

And when I travel, I love this country so much. It’s so fascinating. There’s so much.

Michael Horn: It’s so much fun to dive in, right? And take the, like you’re in, you’re in wherever and you go to the Alamo or whatever it is. And like, it’s so much fun.

Deep Learning Over Memorization

Diane Tavenner: It’s so curiosity driven. And so what if young kids didn’t memorize 50 capitals, but went deep on a couple of them, in a story-based way, in an immersive way, and got the idea of state capitals and what they mean and why they matter? They got very cool stories about a few of them at that age. And then they got a lifetime of, oh, there are so many more I can learn, and so many interesting stories about them. And they’re not just a name on the page, on a flat map, but real places that have real significance and are different from each other. And because kids have such access to knowledge now, if they really need to look something up, they can go look it up.

Michael Horn: They can do the deep dive. Right? And on the knowledge conversation, I’m a big believer in the importance of a fundamental knowledge base and the depth at which it’s built. I just think we don’t have a nuanced conversation around it.

Diane Tavenner: Right. And I also am okay with it, I’m gonna call it the Swiss cheese of knowledge.

Michael Horn: Yeah, so am I.

Diane Tavenner: That you have holes. Every fourth grader in America does not need to know the same facts.

Michael Horn: Yeah.

Diane Tavenner: It’s okay if we learn them at different points and different times, and there are, you know, sort of regional differences around that. I’m much more committed to everyone having a common set of really important skills, at least at a baseline level, and then ideally lots of people spiking in different skills in different places, because we need all of those.

Michael Horn: But when you say the skills, you’re thinking that it’s been developed through them working in different domains and areas repeatedly in deep dives. Right. And so

Diane Tavenner: Because you need content to practice skills.

Michael Horn: Exactly right. And you create that integration. I think a lot of times in school it goes the other way, where it’s like, oh, we learn how to think critically. About what?

Diane Tavenner: Exactly.

Michael Horn: And so again, these crosswalks between the extremes, I think, are right. Yeah. Anyway,

Diane Tavenner: Yeah. And so, you know, and this is why we both like a project based environment because it’s the integration of the two and there’s such power in what AI can do now where you can really do personalized learning on, in the content to bring to those, you know, engaging, collaborative, communal type, project based experiences. So I mean, I love what you’re saying in the direction you’re going. It’s very nuanced as you know, it’s.

Michael Horn: We should have some more fun later on and. But I just wanted to float the general idea because I had this moment in our conversation with Alex where I was like, at what level are we thinking about difference and what does stay the same? And I think part of my reflection has been there’s actually a fair amount that stays the same, but how we’ve done it probably changes pretty radically.

Diane Tavenner: Indeed.

Diane Tavenner: We’ve been recording pretty frequently, and I know we’re both feeling a little stretched on thinking about new books and things we’re reading. We’ve maybe exhausted our list, so I thought maybe we’d take a break from that list, just for today. Thank you. And replace it with, and this will make this episode a little less evergreen, but for those who are listening, we’re actually recording this right before the week of Thanksgiving, and I thought I would end with some gratitude.

Michael Horn: Oh, I like it.

Diane Tavenner: So one of the fun moments of yesterday’s engagement with your class, and then the office hours afterwards, was that there were so many young, amazing people, and so many of their questions were very personal: about, you know, how to be a mom and lead, and mentorship, and my relationship with my husband over the years. And I’m so appreciative that they were thinking about that. And one of the things that came up was just our friendship. And I think you know this, but I am so grateful for our friendship. It is truly, for me, one of the big highlights coming out of COVID, the fact that we decided to do this. It gives us time together. It’s just so much fun, and I’m so grateful.

Michael Horn: You know, I’m a crier, so I’m trying not to right now. Thank you. I feel the same way. And it’s one of those things where I feel like, how lucky am I that we get to have this conversation? Even though I moved away from the Bay Area over a decade ago, which is wild, 12 years, but. Yeah. And I think it’s. So when this comes out, it’ll be after the new year, I think, and so forth. But I always tell my students, because, as you saw, like, 55 or so percent are not from the U.S.

I say, take the time, because how cool is it to have a day when you get to say thanks? So thank you as well. And thank you all for joining us through the sentimental moment here on Class Disrupted. Keep your questions and curiosity coming. We suspect there’ll be things we said here that you disagree with, and we can’t wait to learn from you. So thank you, as always, and we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

Opinion: To Make Ed Tech More Secure, Software Companies Need to Step Up /article/to-make-ed-tech-more-secure-software-companies-need-to-step-up/ Tue, 25 Feb 2025 19:30:00 +0000 /?post_type=article&p=740414 Last month it was revealed that student information system provider PowerSchool suffered the largest education data breach in history, as stolen credentials were used to expose and steal sensitive data belonging to over 60 million students and teachers. In 2024, K-12 schools became the top target for ransomware, with recovery costs averaging over this past year alone — more than . 

Education technology – or edtech – software is often the entry point for these cybercriminals, accounting for of K-12 school data breaches between 2016 and 2021. The COVID-19 pandemic forced school districts across the country to shift to remote learning; they received significant federal and state funding to support this transition, of which was spent on acquiring new software.




The average district now uses edtech products – nearly the number from 2018 – increasing the attack surface for cybersecurity threats at a time when only school districts employ a full-time IT staff member.

To further complicate the matter, K-12 school districts are chronically understaffed and underfunded on this front, with the average school spending less than of its IT budget on cybersecurity, and one in five schools dedicating less than . 

Schools are not equipped to secure all of the edtech products they depend on, but these software products are critical to running a modern school. Critical school functions – including attendance, bus routing, lunch information, learning and grading systems, staff management and finance – rely on edtech products to operate. 

That puts edtech software manufacturers in a unique position to improve cybersecurity outcomes for K-12 schools by integrating more security features into their products, shifting the burden from schools to industry.  

A forum held last October by UC Berkeley’s Center for Long-Term Cybersecurity, conducted in partnership with the U.S. Department of Education, convened representatives from 12 software manufacturers serving a large portion of U.S. school districts to discuss measures to help strengthen K-12 cybersecurity. Two key themes emerged again and again during the discussion, which are top of mind for industry as we go into 2025.

First, edtech software manufacturers need to take a greater responsibility for improving security outcomes for their K-12 customers.

The use of multi-factor authentication (MFA), an essential security feature in edtech products, is seldom enforced as a mandatory requirement, even for privileged users. However, some software manufacturer participants demonstrated industry leadership by requiring it for all administrative accounts. 

One provider, inspired by Microsoft’s forthcoming requirements in and , implemented mandatory MFA for administrative accounts and financial staff and adopted phishing-resistant authentication. But the rollout was difficult; despite many advance notifications of the change, the provider described the transition as disruptive for customers, even though the change ultimately provided better security. 

Software manufacturers who have implemented mandatory MFA recommended other providers try a phased approach, such as extending authentication prompt intervals to once every one to two weeks to allow school administrators, IT staff, and teachers adequate time to adapt to the new requirements. They also recommended deploying changes during the summertime when school districts’ IT demands are at their lowest. 

Some are experimenting with new MFA tactics and security features, like authentication based on suspicious account activity and tracking data changes in their systems. Other solutions discussed include monitoring the dark web to identify stolen passwords, systems that prompt users to choose stronger alternatives, and solutions tailored for schoolchildren and parents, such as printable QR code badges that students can scan to authenticate during login.

Second, vendors must overcome obstacles to integrating basic security controls into their products.

One of the biggest obstacles software manufacturers face in launching mandatory security features is balancing security with user convenience. They cite feeling pressured to prioritize ease of use, fearing that customers would switch to competitors with “simpler” but less secure solutions. 

Vendors shared case studies of schools that resisted platform changes that introduced friction into their operations or student learning, such as requiring an additional step to log on. For example, some providers observed that K-12 users prefer less secure authentication methods, such as email and text messaging services, over more phishing-resistant methods, such as app-based tokens or hardware keys.

Technical hurdles pose another barrier. Providers noted that some school districts rely on legacy software for HR, payroll and bus routing that may be incompatible with modern authentication protocols such as SAML or OAuth. Some systems lack support for these protocols altogether or only offer them as paid features, especially for mobile applications. This makes integration challenging and requires extensive testing to resolve compatibility issues, making the process resource- and time-intensive for software manufacturers. 

What’s Next

Incidents like the PowerSchool breach demonstrate the urgent need for edtech software vendors to do more to protect K-12 student and teacher data. Fortunately, the federal government has made headway on the issue in recent years.

For example, the Cybersecurity and Infrastructure Security Agency’s (CISA) initiative, launched in September 2023, expanded from a K-12-specific pledge with 12 signatories into an enterprise-wide pledge by May 2024, with over 260 industry signatories. CISA also recently released guidance for software manufacturers.

The growing industry interest in prioritizing cybersecurity is encouraging. Evidence from our roundtable shows that there’s an appetite from K-12 schools and companies to do more to relieve the burden on schools and secure edtech products. It is critical to continue this momentum; the edtech industry must pursue product changes that improve security, and federal agencies like CISA should continue building a coalition of companies that do so. 

Secure products benefit everyone: teachers, parents and schoolchildren. Let’s double down on our progress before the next breach happens.

Leanlab Founder Says Ed Tech Should Root Itself in Community Voice, Co-Design /article/leanlab-founder-says-ed-tech-should-root-itself-in-community-voice-co-design/ Mon, 06 Jan 2025 13:30:00 +0000 /?post_type=article&p=737706 Over the past decade, Leanlab Education has helped several tech startups gain a foothold in classrooms. They include a social-emotional learning tool, a gamified learning management system and a math tool, among others.

In the process, the Kansas City nonprofit has become synonymous with a research technique known as “co-design,” which says innovation should begin not with outsiders offering solutions, but with those trying to solve problems for themselves.

Leanlab took shape after its founder, a former teacher named Katie Boody Adorno, began studying how education systems work. The child of community organizers with roots in Puerto Rico, Boody Adorno had taught for five years and realized that education could learn from the way her parents’ efforts worked.




“It was surprising to me how much we excluded community voice in the education sector,” she said, “and how unimaginative we were in that sector. It was still very top-down bureaucratic, and it wasn’t particularly effective.”

Leanlab began life as a kind of tech incubator, evolving into a quietly influential organization that helps ed tech developers work with educators to evaluate their products in real classrooms. 

From there it has moved into several different aspects of research, partnering with University College London to co-found a hub for researchers, policymakers, philanthropists and tech investors to work together. One of its most notable studies found that, of 1,640 tech tools in schools, just 11% are evaluated externally.

Last summer, Leanlab also partnered with 20 school districts, charter schools, microschools and afterschool programs to create a new research network. It works with educators, helping them research their most pressing questions. And it pays teachers $50 per hour for their work implementing the research. The focus, Boody Adorno says, is on solving real-life problems in their classrooms. In the process, teachers are also trained in research techniques.

In an interview with The 74, Boody Adorno stressed the importance of co-design and her belief that technology is changing schools rapidly — so rapidly that large-scale, randomized control trials, which can take years to design and implement, are no longer a good fit for many research questions.

This interview has been edited for length and clarity.

The 74: You talk about being the child of community organizers. When you talk about “community voice,” what are you thinking? 

Katie Boody Adorno: I was a middle school teacher in both public schools and charter schools. This was during the No Child Left Behind era, 2008 to 2013. It was a very test-driven culture that was very bureaucratic. You had a lot of big administrators from central office or the state department of education telling you what you needed to do. You had scripted curriculum. You had pacing guides that were very assessment-driven. We also were dealing with massive school closures in the city.

All of this felt very “done to” communities. You were literally told what you were going to teach, how you were going to teach it, if your school was going to stay open or not. Teachers didn’t have a say in this. Building leaders didn’t have a say in this. Parents didn’t have a say. Certainly not our students! And then we were shocked when, year over year, we were seeing pretty dismal results — or when the tests came back and said, “Your schools are not performing on par with more affluent or white schools.”  

Let’s talk about how Leanlab came about. Were you still teaching at the time? Had you moved out of the classroom?

I taught for five years. I’d moved to an instructional coach role, where I still was teaching one class. I had gone back to graduate school to become an administrator and I was on that career trajectory. I was in a charter school system and I really thought I wanted to be a school leader. But the more I was exposed to the system, the more skeptical I was becoming. I now had the opportunity to slice the data from a higher-level viewpoint and I was just like, “Man, this isn’t working. We’re not seeing transformation for our kids. But it’s not because we don’t have teachers who are working really hard.”

We had a brilliant staff, but it was unsustainable. We were burning teachers out. Our kids were going to school, at that point, six days a week. Even when we were getting strong test results in the state exam, it wasn’t translating into life outcomes for our kids. I became pretty disillusioned, but really committed to the future of our kids. 

At the same time, you were getting to know the startup community in Kansas City? 

I got the chance to attend a couple of events and two things struck me: The way they talked about the innovation process was so much more human-centered and actually aligned to community organizing principles. It was really this notion of elevating end-user insights, designing with who you’re serving, which was something that we never really talked about in education.

The other piece was just the radical innovation. Now it’s kind of a trope and we laugh about it, this idea of disrupting systems. But this was a decade ago, when this language was just coming out. And it struck me as really, really interesting: There’s another way to think about education that we haven’t talked about. I started hosting pop-up events to teach educators how to go through these innovation processes. I was trying to organize at a really grassroots level, bringing together educators, researchers, community leaders to think about what it would mean to begin prototyping new things that could accelerate student outcomes. 

When you say “prototyping new things,” you mean products?

In those early days, it really was all over the place. Oftentimes it was technology products. Oftentimes it was new school models, new programs or curricula. But the idea would be: Start first with a problem that your community is identifying. It could be, “Hey, we’re really struggling on basic literacy. We’re really struggling with students feeling safe at school.” Whatever it is, start first by validating that from the community itself that’s experiencing that pain, and then begin prototyping a new solution. 

Fast-forward 10 years: When you look at what you’re doing now, what do you see?

We got pretty good at building commercially viable ventures — a lot of these companies were going on and scaling and making money. But we weren’t solving the problem of knowing if they were actually moving the needle for students across the board. And when we looked at the sector about five years ago, we realized very few tech solutions had any evidence that they were accelerating learning outcomes for kids or even demonstrating any evidence that they were beneficial to students.

So that’s when we underwent another pivot, just before the pandemic, where we said, “We have great relationships with these local schools. We have emerging respect from this entrepreneur community. What does the world look like where we actually bring in researchers more seriously, deepen our relationships with schools so that we can run more in-depth trials?”

The end goal here is that we actually want to see outsized impact. We want technology to actually live up to its promise of accelerating student outcomes beyond what we’ve seen. At that point, we brought in an internal research team. We flipped our business model. So we charge for-profit ed tech companies. We grant money — we believe in unrestricted grants — to schools, and offer market rate stipends. We typically pay teachers $50 an hour. 

Let’s talk about co-design. I’d like to hear a little bit about that process.

Co-design is really interesting because it’s become buzzy. People are throwing it around a lot and there’s not a lot of understanding of exactly what it means. When you design solutions or technologies in a silo with a technologist developer and a company that doesn’t have access to the realities of a school environment, it cannot be very beneficial. And unfortunately, this has happened a lot in the ed tech sector because typically these technologies are selling to administrators but delivering to students. So oftentimes they don’t think about the design for students or teachers because the customer is not the user. Co-design has come to be used as colloquial lingo around what it means to elevate the user’s perspective and give feedback to the product or solution when it’s being developed, so that it actually benefits them.

Let’s talk about these problems that actual educators identify. What have been some of the hits? 

There’s no clear winner, but I would say across the board, folks are still very concerned about learning loss, primarily post-pandemic, primarily in literacy and math. We’re thinking about solutions that target upper elementary literacy and math, particularly in literacy, where a lot of schools have now adopted the Science of Reading, but our older kids might have been left out. So those kids are still struggling with basic concepts.

Social-emotional learning or well-being — whatever geography allows you to say — is still a huge priority. That’s also now become educator well-being. Teaching with technology is a big one, particularly with AI, “What is safe? What is reasonable? How do we prepare the workforce to do that?” And then college and career readiness and preparation for an ever-changing workforce. People call that different things: real world learning and experiential learning, portrait of a graduate, college readiness. But it’s this idea of, “How are they getting prepared for the future?”

Are there products, for lack of a better term, that people would recognize that have come out of all this work? Or is it too early to talk about that? 

No, it’s not. We’ve done 80 studies. And prior to that, in our five years of doing research and development, we worked with small startups that are just getting off the ground. Folks that are federally funded. We’ve done a lot of research for a tool called Sown to Grow, based out of the Bay Area. They’ve gone on to do really great evidence-based work around cultivating a culture of increased belonging and well-being and social-emotional learning. And we work with big incumbents on new products, like Logitech, McGraw-Hill and others. 

So do you become their research arm? What can you do for McGraw-Hill that they don’t already have in-house? 

What we functionally do is match them with schools and we have a third party R&D team that will help them figure out if their product is working. And we align all of our evaluation to the federal criteria for evidence. So if they really want third-party evidence, we can be the one that does that. What’s unique about us is that we’re really nimble. We’re one of the only research firms that’s connected to a school network. So we’ll say, “Hey, let’s put this in front of a diverse student or school audience. Let’s get feedback.”

So you’ve kind of gone from being an incubator to being an R&D firm?

We’re an R&D firm that’s much more nimble and focused exclusively on co-design and technologies. We’re uniquely positioned where we’re different than a firm like AIR. We’re not really doing randomized controlled trials (RCTs). We’re not doing what we believe is maybe an outdated mode of research. We’re really trying to elevate school community insights and take an iterative approach.

I’ve been writing about education research for 25 years, and when people complain about it, most often the complaint is, “You’re not doing an RCT. That’s the gold standard.” To hear somebody like you say, “Hold on a second, that’s not the best format” is striking.

RCTs are a pain in the ass to do for actual educators. Any educator, ask them if they think it’s equitable to randomly implement a solution that you think is going to be really effective for their kids, but assign it randomly, withhold it randomly and try to implement a cohesive curricular framework around that. That’s just really hard to do and raises some interesting ethical questions. The other piece is that we’re just in an age of technology vastly outpacing our traditional R&D systems. If you do a traditional RCT, it might take you a year and a half to recruit the number of participants you need to engage in that study. The intervention has completely changed by that point — and it’s going to change again by the time you get the findings back. So how helpful is that for the field?

Disclosure: The Chan Zuckerberg Initiative and Walton Family Foundation provide financial support to Leanlab and The 74.

]]>
Feds Charge Once-Lauded AllHere AI Founder in $10M Scheme to Defraud Investors /article/feds-charge-once-lauded-allhere-ai-founder-in-10m-scheme-to-defraud-investors/ Wed, 20 Nov 2024 15:58:42 +0000 /?post_type=article&p=735634 Updated, Nov. 20

Federal prosecutors have charged the founder of the once-celebrated education technology company AllHere, accusing her of defrauding investors of nearly $10 million as the startup that made AI chatbots for schools fell into bankruptcy.

Joanna Smith-Griffin, a Forbes “30 Under 30” recipient and Harvard graduate, was arrested at her home in Raleigh, North Carolina, Tuesday on allegations of securities and wire fraud and aggravated identity theft. 

The 33-year-old former educator’s arrest is the latest chapter in the downfall of “Ed,” a buzzy, $6 million AI chatbot that Smith-Griffin’s company was tapped to build for the Los Angeles Unified School District before the project was halted and the company shuttered. L.A. schools Superintendent Alberto Carvalho and Smith-Griffin appeared together at several events earlier this year to promote the chatbot, an ed tech innovation Carvalho said was “unprecedented in American public education.”


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


The indictment by the U.S. Attorney’s Office for the Southern District of New York, unsealed in Manhattan federal court, accuses Smith-Griffin of defrauding investors and using company funds for a down payment on her North Carolina house, among other personal expenses.

Smith-Griffin “orchestrated a deliberate and calculated scheme to deceive investors” in the company she founded through a Harvard University startup incubator in 2016 to provide a tech-driven solution to student absences. She inflated “the company’s financials to secure millions of dollars under false pretenses,” U.S. Attorney Damian Williams said in a media release. “The law does not turn a blind eye to those who allegedly distort financial realities for personal gain.” 

Smith-Griffin is being represented by Eric Brignac, an assistant public defender with the Federal Public Defender’s Office. Brignac, who is based out of Raleigh, did not respond to a request for comment.

In a statement to The 74, an L.A. schools spokesperson portrayed the district, by far AllHere’s biggest customer, as one of many taken in by Smith-Griffin. Previously, the school district and its inspector general’s office opened separate inquiries into the school system’s work with AllHere.

“The indictment and the allegations represent, if true, a disturbing and disappointing house of cards that deceived and victimized many across the country,” the spokesperson wrote in an email. “We will continue to assert and protect our rights.”

Between 2017 and June 2024, prosecutors allege, Smith-Griffin used her control over AllHere’s bank accounts to transfer at least $600,000 in company funds to her personal account, generally using PayPal and Zelle to make repeat wire transfers under $10,000. 

Federal prosecutors said the fraud scheme began as early as November 2020, when Smith-Griffin allegedly began to misrepresent to her investors the company’s revenue, customer base and cash on hand. In the spring of 2021, she told investors AllHere had generated some $3.7 million in revenue in the previous year, including through contracts with the New York City and Atlanta school districts. In reality, federal prosecutors allege, the company had only generated $11,000 — and contracts with the two major urban school systems didn’t exist. 

Key AllHere funders include the venture firms Rethink Education, Spero Ventures and Potencia Ventures. Their representatives  didn’t respond to requests for comment. 

When investors and an outside accountant accidentally discovered the discrepancies between the company’s actual financials and its claim to backers, Smith-Griffin masqueraded as a financial consultant to perpetuate the scheme, prosecutors allege. She was accused of creating a fake email address for the phony outside consultant, which she used to send fraudulent documents to her largest investor. 

Though one of the firm’s biggest investors “recruited high profile” education leaders to the company’s board of directors, including former Chicago Public Schools CEO Janice Jackson, the indictment notes that Smith-Griffin “exercised exclusive control” over AllHere’s communications with investors, board members, customers and outside vendors.

The indictment adds further uncertainty around the AI chatbot the company created for and launched with such fanfare earlier this year with Los Angeles schools, the country’s second-largest district.

As K-12 school systems nationwide rush to inject artificial intelligence into their teaching practices, the L.A. chatbot has become a cautionary example of what could go wrong. On Tuesday, the U.S. Education Department released guidance on ways schools can harness AI while ensuring the tools don’t have a discriminatory impact on vulnerable and underserved students.

In April, Smith-Griffin and Carvalho unveiled the chatbot together at the influential ASU+GSV ed tech conference in San Diego. Carvalho said Ed was the nation’s first AI-enabled “personal assistant” and would drive academic improvement while providing Los Angeles’s roughly 540,000 students and their families with a trove of helpful information upon request.

Los Angeles Unified Supt. Alberto Carvalho, during the official launch of the AI-powered chatbot, “Ed.” (Getty Images)

Signs of turmoil emerged in June, when The 74 first reported that Smith-Griffin was out of a job as AllHere furloughed a majority of its staff due to its “current financial position.” A statement from the L.A. district said the company had been put up for sale. 

The company then filed for Chapter 7 bankruptcy in August. At a bankruptcy hearing in September, Toby Jackson, one of AllHere’s only remaining employees and its former chief technology officer, struggled to explain why the company had paid Smith-Griffin $243,000 in expenses in the past year alone. 

“That is one of the outstanding questions that we also have,” said Jackson, who said that Smith-Griffin “did do quite a bit of travel as the CEO of the company.”  

Jackson did not respond to a request for comment.

The 74 first reported the possible criminal charges in early October, when Delaware court documents related to AllHere’s bankruptcy case revealed a grand jury subpoena by federal prosecutors. Even before the company laid off employees and announced its financial woes, a former employee-turned-whistleblower told The 74 that AllHere had struggled to produce a “proper product” for the L.A. district and took shortcuts that ran afoul of school district policies and bedrock student data privacy principles. 

In total, AllHere never had more than 31 customers — less than a third the number Smith-Griffin told investors she had by early 2021. By the time the company collapsed this year, only three of AllHere’s customers generated more than $100,000 in revenue.

In total, the felony charges carry a maximum sentence of 42 years in prison for Smith-Griffin, who began her career working in a Boston charter school as a teacher and family engagement director.

“Her alleged actions impacted the potential for improved learning environments across major school districts by selfishly prioritizing personal expenses,” FBI Assistant Director in Charge James Dennehy said in the release. “The FBI will ensure that any individual exploiting the promise of education opportunities for our city’s children will be taught a lesson.” 

]]>
Ed Tech Startup Behind L.A. Schools’ Failed $6M AI Chatbot Files for Bankruptcy /article/allhere-ai-los-angeles-schools-tool-bankruptcy-filing/ Thu, 12 Sep 2024 10:30:00 +0000 /?post_type=article&p=732760 The education technology company behind Los Angeles schools’ failed $6 million foray into artificial intelligence was in a Delaware bankruptcy court Tuesday seeking relief from its creditors and to sell off its meager assets before shutting down entirely.

The latest chapter in AllHere’s dizzying collapse revealed more information about the once-lauded company’s finances and its relationship with the Los Angeles Unified School District. But the hearing failed to answer key questions about why AllHere went under after garnering $12 million in investor capital, a blizzard of positive press and a contract with the nation’s second-largest school district to create “Ed,” the buzzy, AI-powered chatbot.

During the hearing held over Zoom, one of AllHere’s only remaining executives, former chief technology officer Toby Jackson, struggled to explain why the company paid ousted CEO Joanna Smith-Griffin $243,000 in expenses from the past year and owed $630,000 to its largest creditor, education technology salesperson Debra Kerr. 


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


“I don’t know exactly the nature of all of [Smith-Griffin’s] expenses. She was the CEO and so that is one of the outstanding questions that we also have,” Jackson said when quizzed about the six-figure amount by the bankruptcy trustee. “She did do quite a bit of travel as the CEO of the company.” 

Similarly, Jackson said he had no invoices to substantiate the $630,000 debt to Kerr, a longtime associate of Los Angeles schools Superintendent Alberto Carvalho dating back to his days leading Miami-Dade schools. Kerr’s son, Richard, is a former AllHere account executive who told The 74 this week he pitched the AllHere deal to Los Angeles school leaders.

“I’m not really sure what exactly that entails,” Jackson said of Kerr’s claim.

Moments later, Kerr chimed into the Zoom hearing, arguing the company owed her the money after she helped AllHere close the lucrative deal in L.A. Kerr said she was never paid her commission from the first payments that LAUSD made to AllHere under the contract. 

The district has said it paid AllHere roughly $3 million of the $6 million for the chatbot, which was taken offline shortly after AllHere announced in June that it was in financial distress and had furloughed most of its employees. 

“I never did collect any commissions and it’s in the contract based on commission percentages that would have been made on any sales accrued,” Kerr told the trustee.

Smith-Griffin, who now lives in North Carolina, was not present for the Zoom hearing and could not be reached for comment. There were indications in the hearing that her separation from AllHere was not amicable, including that the former CEO has refused to disclose the password to her $500 company-owned laptop, one of its few remaining assets. 

Court records show that Jackson, now the head restructuring officer, earned $305,000 a year in his role with the company before it shuttered, nearly three times the $105,000 paid to Smith-Griffin, a Harvard University graduate who built AllHere in 2016 with financial backing from the prestigious institution. 

Filed in mid-August, AllHere’s Chapter 7 bankruptcy petition strengthens doubts that it could find a new owner to take over its mission as an AI pioneer in K-12 schools. That scenario was put forth by a Los Angeles school district spokesperson earlier this year with the assertion that “Ed” could still be successfully launched as a personalized, interactive learning acceleration tool for all of the district’s roughly 540,000 students and their families.

Instead, court records show AllHere’s few remaining employees are preparing for “the wind down of the company” and officials acknowledged during Tuesday’s proceeding that AllHere was unable to fulfill the terms of its contract with L.A. Unified. 

A lawyer representing the school district was present at the hearing. In a statement Tuesday evening, a district spokesperson said LAUSD is “evaluating its next steps to pursue and protect its rights in the bankruptcy proceedings.” 

Los Angeles schools Superintendent Alberto Carvalho appears in a photograph with Debra Kerr, which the education technology salesperson later posted on LinkedIn. (Screenshot)

Kerr and Carvalho 

Ties between Kerr and Carvalho go back to at least 2010, when she worked for a behemoth education company. Back then, she gave Carvalho and Miami students what she likened to an original print of the U.S. Declaration of Independence. Ever since, Carvalho, who took over leadership in Los Angeles in 2022, has been a regular presence on Kerr’s social media.

A LinkedIn post promoting L.A.’s chatbot noted that the tool worked in partnership with services from seven companies, including the creators of digital education program ABCmouse, where Kerr previously worked as head of sales.

Kerr didn’t respond to requests for comment but her son, Richard, who began working at AllHere in 2022, said among the school district deals he worked on for the company was the chatbot project in Los Angeles. 

“We had a big deal in L.A. and the investors, I guess, didn’t have patience to wait to get paid from it,” he said. 

Richard Kerr said he met with education officials in Los Angeles and “did a lot of work” helping the company secure the agreement. When asked about his mother’s role in closing AllHere’s contract in Los Angeles, he said “she had a lot to do with it,” but didn’t elaborate further.

A statement from the L.A. district spokesperson said that “Los Angeles Unified launched a competitive” request for proposals that received “multiple responses,” which eventually led to AllHere’s selection. This spring, Carvalho went on the road with Smith-Griffin to promote “Ed,” billing the chatbot personified by a yellow sun as being “unprecedented in American public education.”

Before he was furloughed, Richard Kerr said AllHere was a great place to work — in part because of Smith-Griffin’s leadership.

“It’s very unfortunate what happened to Joanna. I thought she was on a great path and she was doing an amazing thing,” he said, adding that she made a mistake when she “brought in the wrong investors that were pretty vindictive” and decided to cut short the company without giving it a proper chance. 

AllHere’s former senior director of software engineering, who became a company whistleblower, told The 74 earlier this year that AllHere struggled to meet the terms of its contract in Los Angeles and took shortcuts that violated bedrock student privacy principles and district rules. Both the district’s independent inspector general and top administrators have launched separate investigations into what went wrong with AllHere.

Even though his mother, Debra Kerr, was on the Delaware court’s Zoom call Tuesday, Richard Kerr said he was unaware his former employer had filed for bankruptcy.

What’s left

The company’s few remaining employees and board members, including former Chicago Public Schools Chief Executive Janice Jackson, have not made themselves available for comment. 

AllHere investor Andrew Parker, who was on vacation Tuesday and didn’t attend the court hearing, now serves as the company’s secretary. In addition to Janice Jackson, others who signed AllHere’s bankruptcy petition are Andre Bennin, a managing partner with an investment firm, and education consultant Jeff Livingston.

Even though Smith-Griffin is no longer with the company, court records show she still has a significant stake, holding 81% equity in its common stock. Rethink Education was by far the company’s biggest outside investor. 

Other top creditors, according to court records, include the law firm Gunderson Dettmer at nearly $275,000, an information technology company at $190,000 and the well-known education consulting firm Whiteboard Advisors at $123,000.

Earlier in the summer, The 74 spoke with Gunderson Dettmer partner Jay Hachigian, who said he had only worked with AllHere early in its formation. He didn’t respond to requests for comment this week about his firm’s large outstanding balance with the company. Whiteboard Advisors spokesperson Thomas Rodgers said in an email that his firm previously worked with AllHere but its role is covered by a nondisclosure agreement. 

Court records show the company earned $2.4 million in gross revenue last year but had generated much less since January, about $587,000.

At the time of bankruptcy, court records show the company had active contracts with just 10 school districts, including those in Cincinnati, Miami and Weehawken, New Jersey. Only Weehawken sought to use the chatbot platform created for LAUSD, while the rest relied on an earlier text messaging tool designed to combat chronic absenteeism. 

Despite landing millions of dollars in backing from a group of social impact investment firms, several of which cited their enthusiasm for investing in AllHere specifically because it was led by a Black woman, court records reveal the company’s coffers are nearly empty. AllHere claimed nearly $2.9 million in property and $1.75 million in liabilities. The company’s actual assets, Toby Jackson acknowledged in court, are much lower.

It claimed an “unknown” value on pending patents, which Jackson conceded Tuesday had been denied, and $2.88 million for licenses, franchises and royalties for its LAUSD contract. Other assets, including its website and chatbot source code, were also listed at a value of “unknown.”

Jackson said the Los Angeles contract was valued at $2.88 million for the remaining outstanding balance the district owes to fulfill the agreement — money he admitted AllHere would be unable to collect because it has not “held up our part of the bargain in the contract” and is closing shop.

Financial statements to the court show AllHere had $18,000 in savings and just $500 in physical assets: the value of Smith-Griffin’s work laptop, whose contents remain outside the tech company’s reach. 

“We have not been able to obtain the credentials for Mrs. Smith’s laptop. We did not receive any cooperation with that,” Jackson testified Tuesday. “She has been cooperative with some other matters, but not with this one.”

]]>
University of Nebraska-Google Career Certificates Partnership Opens This Week /article/university-of-nebraska-google-career-certificates-partnership-opens-this-week/ Tue, 18 Jun 2024 16:30:00 +0000 /?post_type=article&p=728646 This article was originally published in

Enrollment opens this week for the University of Nebraska’s new partnership to offer Google Career Certificates in a variety of fields.

Beginning Wednesday, June 19, NU students, alumni and Nebraskans at large can begin to register for a variety of self-paced, noncredit courses. Interim NU President Chris Kabourek said that since the partnership was announced in April, with little marketing, more than 1,000 people had already pre-registered.

Melissa Lee, NU’s chief communication officer, said 1,247 people had registered as of Friday. Of those registrants, 20% are current NU students and 40% are alumni, meaning hundreds of Nebraskans who might have no connections to NU are interested in more education.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


“I just think it solidifies what we thought, that Nebraskans are yearning for more skill sets and more education,” Kabourek told the Nebraska Examiner.

An email sent to pre-registrants the week of June 10 from Ana Lopez Shalla, lead for NU’s microcredentials, encouraged them to complete the minicourses. She wrote that registrants were “helping to drive impact not only in your own career, but in our regional workforce, too.”

The Google Career Certificates will be offered in three cycles in the next year, with 2,500 seats available for each session. They begin in August, December and April. Enrollment will be open through July 31; courses in the first session will begin the next day.

In April, NU announced a special first-year rate of $20 per enrollment.

Kabourek said at the time that the partnership is designed for opportunities, not revenue, and that funds would be used to cover costs and any associated technological needs.

The certificates will be offered in the following fields:

  • Cybersecurity
  • IT support
  • Data analytics
  • Digital marketing and e-commerce
  • Project management
  • User experience (UX) design
  • IT automation with Python
  • Advanced data analytics
  • Business intelligence

Kabourek, who will return to his sole role as NU’s chief financial officer come July 1, said one of his priorities as interim president has been to help the university reconnect with Nebraskans, which will include getting out to visit high schools in the fall.

As a rural Nebraskan from David City, Kabourek said, he knows every Nebraskan can find a place within NU.

“We never want your ability to go get your education or develop your skill sets or enhance your resume to be limited by your family situation or your location,” Kabourek said.

Nebraska Examiner is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Nebraska Examiner maintains editorial independence. Contact Editor Cate Folsom with questions: info@nebraskaexaminer.com.

]]>
One-Third of Teachers Have Already Tried AI, Survey Finds /article/one-third-of-teachers-have-already-tried-ai-survey-finds/ Thu, 30 May 2024 10:30:00 +0000 /?post_type=article&p=727770 One in three American teachers have used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.

In all, two-thirds of respondents said they hadn’t used AI in their work, including 9 percent who reported they’d never heard of tools and products like OpenAI’s ChatGPT or Google’s Gemini. By contrast, 18 percent of participants said they regularly relied on such offerings, and 15 percent said they had tried them before but didn’t intend to use them more regularly.

Melissa Kay Diliberti, a policy researcher at RAND and one of the report’s co-authors, said the current minority of users constitutes a “foothold” in schools that is poised to grow with time — and that has already expanded massively in the 17 months since ChatGPT was released to an unsuspecting public in November 2022.

“There seem to be a small number of people on the bandwagon, but the bandwagon is moving forward,” Diliberti said.

The poll, incorporating responses from a nationally representative sample of more than 1,000 teachers in 231 public school districts, offers the most recent data on a fast-moving technological shift. The potential of AI to maximize teacher efficiency, individualize instruction for every pupil, and offer support to kids struggling with mental health problems has stoked a growing demand for new products that is quickly being met by major tech players like Google and Khan Academy.

The gleanings of broader public opinion research are somewhat diffuse, but there is reason to think that the level of AI take-up by teachers is comparable to, or even further along than, that of other professionals. In previous polls, similar minorities of professionals in other fields (roughly 15 to 28 percent, in surveys that included human resources staff and doctors) have reported using AI in a variety of tasks.

OpenAI CEO Sam Altman, whose company developed ChatGPT. (Getty Images)

And teachers’ outlook on the future is suggestive: Nearly all respondents who already use AI tools believe they will use them more in the 2024–25 school year than they do now, while 28 percent of non-users predicted they would eventually try them out.

Use of artificial intelligence was roughly even across different kinds of schools, whether broken down by student demographics, poverty levels or rural/urban geography. By contrast, middle and high school teachers were almost twice as likely as their elementary school counterparts to say they used AI (23 percent vs. 12 percent), and English and social studies instructors reported higher use than those in STEM disciplines (27 percent versus 19 percent).

While cautioning against overinterpreting results in a relatively small sample, Diliberti reasoned that English and social studies teachers are also more likely to create or modify their own curricular materials, or source them from online marketplaces like Teachers Pay Teachers. Outsourcing some of those efforts — along with periodic non-instructional tasks, such as composing emails to parents or letters of recommendation to colleges — to AI could save hundreds of hours over the course of a school year.

“You could see where AI might be a way to ease the burden of a task they’re already doing,” she said. “That might be why these teachers appear to be more inclined to use AI than a math teacher, who could be more tightly focused on a given curriculum that’s used throughout the school.”

Among teachers regularly using AI, close to half said they did so to generate classroom assignments or worksheets (40 percent), lesson plans (41 percent), or assessments for students (49 percent). 

Establishing a ‘foothold’

Amanda Bickerstaff, CEO of a company that advises school districts on the use of artificial intelligence, said the RAND poll is notable for being “the first survey I’ve seen that seems representative of what is happening in schools.”

In training sessions she has conducted for tens of thousands of classroom teachers and administrators since last year, Bickerstaff said she and her colleagues have received a warm reception from audiences, but have also found uneven awareness of what AI can accomplish. Early adopters might simply be tech enthusiasts, or they could be special education teachers.

Curiosity about the new technology “is coming from the bottom-up as well as the top-down,” she observed. “One of the more interesting things is that we’re seeing more teachers using AI in schools than schools and districts teaching them to use it.”

Partly because guidance and professional development still trail teacher interest, a little under 10 percent of all survey respondents said they were seeking out AI tools on their own initiative. At present, the most commonly used products were popular platforms like Google Classroom, adaptive learning systems offered by Khan Academy and i-Ready, and the nearly ubiquitous chatbots.

Diliberti said she wasn’t surprised that incumbent players like Google and OpenAI, powered by billions of dollars in investment and promotion, have gained early primacy in the K–12 arena. But she added it was striking that lesser-known products specifically geared toward activities like lesson planning and assessment generation haven’t attracted the following that more multifunctional alternatives like ChatGPT have.

“It’s notable that teachers seem to be using more generic tools instead of dedicated tools that were developed for this purpose,” she said.

Bickerstaff argued that the survey results demonstrated that teachers, increasingly finding their own way to AI, should be provided more training on the use of existing tools. Beyond that, she said, public and private actors should broaden access to more advanced versions of those tools, at subscription costs averaging about $20 per month, to allow teachers to gain a better understanding of their applications. 

“These tools make mistakes, they’re biased, and they require significant training to be able to use them. You need support on how to use the tools before you can get the best out of them.”

]]>
Banning Smartphones at Schools: Research Shows Higher Test Scores, More Exercise /article/banning-smartphones-at-schools-research-points-to-higher-test-scores-less-anxiety-more-exercise/ Wed, 11 Oct 2023 11:15:00 +0000 /?post_type=article&p=716103 The international debate over technology and youth was jolted last week by a surprising announcement: Schools in the United Kingdom will be pushed to ban mobile phones throughout the school day.

According to the U.K.’s secretary of state for education, the new guidance builds on controls already in place in many schools across the country, most of which take explicit aim at both online bullying and student inattention during lessons. But it may have the further effect of encouraging advocates, both at home and abroad, to pursue further-reaching policies limiting children’s access to tech and social media.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


Parents, teachers, and education leaders across the United States have entertained similar proposals in recent years as devices have increasingly become a fixture in students’ daily lives. The near-ubiquity of electronics in American homes (a report from the nonprofit Common Sense Media showed that 43 percent of children aged 8–12 personally owned a smartphone), as well as their potential links to worsening mental health for young people, moved U.S. Surgeon General Dr. Vivek Murthy to issue an advisory warning against excessive social media use.

Still, it is doubtful whether similar prohibitions can be attempted in the U.S. Unlike in most other Western countries, K–12 education in America is administered at the state and local level, leaving decisions about school management and culture mostly up to district boards. In addition, fears of school shootings and other on-site emergencies mean that some parents want to remain in contact with their kids at all times — even as most research shows that the presence of phones in classrooms tends to harm academic achievement. Among older students, the removal of cell phones during courses is correlated with lower anxiety and higher levels of course understanding, while adolescents engage in more physical play when phones are barred from recess.

Doug Lemov is a well-known educator and expert on classroom practice whose book Teach Like a Champion has become an international bestseller and a highly influential text among both novice and veteran teachers. He has also argued against the use of phones in school, contending that they meaningfully hamper instruction and prevent children from forming real-world relationships.

Doug Lemov

Bans such as the one proposed in the United Kingdom might be difficult to enforce, Lemov acknowledged, given kids’ attachment to their devices. But clever methods of evasion are no reason not to seriously contemplate restrictions on phones in schools, he said. 

“If a kid feels like he has to sneak off to the bathroom and hide in the stall to use his cell phone, it’s still a win. Because it means that in 99 percent of the places in the building, people are walking around without their cell phones out, they are concentrating in class, and they’re having fully present relationships with one another.”

Effects on academics, exercise

The United Kingdom isn’t the first country to impose restrictions on phones in school. According to a report released this summer on education systems in roughly 200 countries, about one-quarter have enacted comparable rules. But some of the most compelling research on the effects of cell phone bans comes from England.

Many teachers already confiscate cell phones during classes. New guidance in the U.K. will push more schools to ban them throughout the school day. (Getty Images)

In one widely cited study, academics Louis-Philippe Beland and Richard Murphy found that across the large English cities of Birmingham, Leicester, London, and Manchester, dozens of high schools that instituted bans on mobile phones saw significant improvement in scores on high-stakes tests. The increase was especially large for the lowest-performing pupils, who saw a jump in scores more than twice as large as the average student’s.

Overall, the authors argued, the greater effects on these students of banning mobile phones — roughly equivalent to adding an hour to each school week — suggested that their higher-achieving classmates were better able to ignore distractions and focus on their work. The lure of texts and apps, therefore, might be expected to increase achievement gaps over time. 

Play and exercise are also linked to the use of electronics. A study published in 2021 showed that a four-week ban on phones during recess significantly increased both the frequency and intensity of physical activity among children aged 10–14. And the consequences of a lack of movement can be strongly negative: In a survey of nearly 25,000 U.S. teenagers, about 20 percent used screened devices (smartphones, tablets, or video games) more than five hours per day; that group was 43 percent more likely to be obese than participants who experienced less screen time.

While comparatively few studies have been conducted on the impact of information technology on K–12 learning, some have focused on its presence in university settings. One paper studied cell phone use and texting in a large sample of college students, ultimately finding that they were associated with relatively lower grades and higher levels of self-reported anxiety. Relatedly, subjects who texted and used their phones less reported higher “satisfaction with life.”

Jonathan Haidt

Far beyond its measured influence on grades or test scores, huge public concern has increasingly been directed at the effects of phone and internet use on adolescent mental health. Psychologists like Jonathan Haidt have pointed to the recent explosion of screen time (generally pegged to the widespread adoption of home internet access and the emergence of smartphones) as a key culprit in rising rates of depression and anxiety.

The chorus of critics gained a powerful new voice in May, when Murthy issued his cautionary guidance on the use of social media. While stopping far short of recommending a blanket ban on youth access to apps like Instagram and Snapchat, the document struck a distinctly foreboding note.

“The current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents,” the surgeon general wrote.

Whether the advisory will exert any influence on local authorities — and whether it is widely interpreted as a warning about phones as well as social media — is difficult to tell. Districts attempted to curb the use of phones in school throughout the 2000s through a variety of means, most unsuccessful: New York City implemented a full-on ban in 2005 under then-Mayor Michael Bloomberg, only for it to be lifted a decade later by his successor, Bill de Blasio. In Spokane, Washington, one high school took measures to keep students from texting during class (the experiment was quickly abandoned when its legality was called into question).

Some jurisdictions have made fresh attempts at restrictions over the past few years, however. This spring, Massachusetts’s state board of education began providing grants to districts that tightened their policies.

‘Bans do not stop bullying’

Good reason exists to doubt the efficacy of strict prohibitions. According to federal data, during the 2019–20 school year, 77 percent of public schools said they disallowed the non-academic use of phones during school hours. But research released earlier this year revealed that 97 percent of children aged 11–17 used their phones during the school day, suggesting that the restrictions were not widely observed.

Those figures were a stark reflection of the pre-COVID penetration of cell phones into school spaces. But students and families became even more accustomed to relying on technology during the pandemic, when instruction shifted online for months at a time. School districts loaned out thousands of devices and rushed to bring internet connectivity to students who lived in remote areas so that their learning would not be interrupted.

By most indicators, the migration online led to significant learning losses. But students also reported that during the worst stretches of isolation, social media helped them stay in touch with their friends and teachers — in cyberspace, if not real life. Many are reluctant to let go of their phones even with the return to in-person learning. 

American parents, too, have come to appreciate the convenience of having their children accessible during the school day. Many want to be able to stay connected in the event of extreme events, including mass shootings, that have seized national attention in recent years. (Notably, security experts are divided on the benefits of phones during emergencies, with some arguing that trapped students would be better off directing their attention solely at teachers and administrators.)

Liz Kolb

Liz Kolb, a clinical professor of education technologies at the University of Michigan, said that while cell phones represent an undeniable source of distraction in academic settings, barring them from schools could also curtail opportunities to model their constructive use.

“Bans do not stop bullying, harassment, [fear of missing out], feelings of depression or suicide, or accessing harmful content,” Kolb added in an email. “So schools that ban cell phones need to be explicit about still addressing these issues, even if they are not seeing phones every day.”

Lemov said that while some pushback from students was to be expected, most would likely change their minds in response to academic and social environments improved by the absence of phones. And while strict bans might be particularly challenging to implement, schools could also adopt tools like Yondr pouches, which allow them to collect and seal away phones during the day, but selectively offer students access if necessary.

Companies like Yondr market lockable pouches that schools can use to selectively restrict phone access. (Getty Images)

Lemov, whose own daughter’s school district used Yondr pouches, said they might help assuage parents’ worries about safety. Looking past methods of restriction, he encouraged schools to go further by proactively building a more engaging social and educational space; seductive objects should not only be removed, but replaced with opportunities for kids to learn, interact, and have fun, he argued.

“We have to eliminate an engine of distraction and disconnection, but we have to make sure we do it really well,” Lemov said. “It’s not just about banning cell phones, but also building vibrant student culture to make sure skeptics buy in.” 

]]>
How Ed Tech Tools Track Kids Online — and Why Parents Should Care /article/how-ed-tech-tools-track-kids-online-and-why-parents-should-care/ Fri, 22 Sep 2023 11:15:00 +0000 /?post_type=article&p=715160 As technology becomes more and more ingrained in education — and as students become increasingly concerned about how their personal information is being collected and used — startling new research shows how schools have given for-profit tech companies a massive data portal into young people’s everyday lives. 

A new study, led by researchers at the University of Chicago and New York University, highlights how the scramble to adopt new technologies in schools has helped create an $85 billion industry with significant data security risks for teachers, parents and students. The issue has become particularly pervasive since the pandemic forced students nationwide into remote, online learning.

Students’ sensitive information is increasingly leaked online following high-profile ransomware attacks, and user data monetization is a key business strategy for tech companies, including those that serve the education market, like Google. Yet student privacy is rarely a top consideration when teachers adopt new digital tools, researchers learned in interviews with district technology officials. In fact, schools routinely lack the resources and know-how to assess potential vulnerabilities.




Such a reality could spell trouble: In an analysis of education technologies widely used or endorsed by districts nationwide, researchers discovered privacy risks abound. The analysis relied on a privacy inspector tool created by the nonprofit news website The Markup, which scours websites to uncover data-sharing practices. Those include the use of cookies that track user behaviors to deliver personalized advertisements. The analyzed education tools, they found, make “extensive use of tracking technologies” with potential privacy implications.
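As a rough illustration of how such an inspector works (this is a toy sketch, not the actual tool’s implementation; the domain names and tracker list below are invented for the example), a scanner can compare the hosts a page contacts against a list of known tracking domains:

```javascript
// Toy sketch of third-party tracker detection (illustrative only).
// A real inspector loads the page in an instrumented browser and logs
// every outgoing request; here we simply classify a list of request hosts.
const KNOWN_TRACKERS = ['doubleclick.net', 'facebook.net', 'hotjar.com'];

function flagTrackers(firstPartyDomain, requestHosts) {
  return requestHosts.filter(host =>
    // anything not served from the site's own domain...
    !(host === firstPartyDomain || host.endsWith('.' + firstPartyDomain)) &&
    // ...that matches a known tracker domain (or a subdomain of one)
    KNOWN_TRACKERS.some(t => host === t || host.endsWith('.' + t))
  );
}

// Hypothetical ed-tech site that embeds an ad network and a session recorder
const flagged = flagTrackers('edtool.example', [
  'cdn.edtool.example',     // first party: not flagged
  'stats.doubleclick.net',  // ad network: flagged
  'script.hotjar.com',      // session-replay vendor: flagged
]);
console.log(flagged);
```

Real scanners are considerably more involved (they must render JavaScript, follow redirects and inspect cookies), but the core judgment is the same: traffic leaving the page for known advertising or analytics domains signals tracking.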

Most alarming to the researchers were the 7.4% that used “session recorders,” a type of tracker that documents a user’s every move. 

“Anyone visiting those sites would have their entire session captured, which includes information such as which links they clicked on, what images they hovered over and even data entered into fields but not submitted,” the report notes. “This could include data that users might otherwise consider private, such as the autofilling of saved user credentials or social network data.”

The 74 caught up with report co-author Jake Chanenson, a University of Chicago Ph.D. student, to gain insight into the report’s findings and to understand why he believes that parents and students should be concerned about how ed tech companies collect, store and use their personal data. 

The conversation has been edited for length and clarity. 

Why did remote learning pique your interest in digital privacy and what are the primary implications that worry you? 

Remote learning can be done well, but we all had to get to it very quickly, without a plan, because we all suddenly got sent home by the global pandemic. Suddenly schools had to scramble and find new solutions to reach their students, to educate their students, without being able to test the field, to think critically about it. They really were, with shoestring and gum, trying to keep their classes together.

Whether you were in school, whether you were at work, whether you were at neither and still just trying to keep in touch with your friends, you were using anything that came your way because that’s what you had to do. I found that really interesting — and a bit concerning. It’s no one’s fault because we don’t understand the ramifications of these technologies and now that we’ve used them a lot of them are here to stay. 

I don’t want to sound like some sort of demonizing figure saying that all tech is bad — that is certainly not the case. It’s merely the fact that sometimes these promises are oversold, and now we have this added element of data privacy. 

When you interact with any of these platforms, tons and tons of student data are generated — from how you interact with it, how well you do on the assignments, when you do it, whether you’re a chronic procrastinator or always getting your work done, whether you seem more interested in your art class than your math class. These are all data points collected by these companies, and I wanted to know: ‘What is it they’re collecting? What are they doing with it?’ And, specifically for this study, ‘What are schools thinking about in this space, if anything at all?’

This study took a two-pronged approach. You conducted surveys with experts in this space and then used technology to identify information that folks might not be aware of. Let’s discuss the surveys first. How did the school administrators and district technology officials you interviewed view privacy issues? 

Lots of them knew that something wasn’t quite up to snuff in their security and privacy practices. 

The best security and privacy practices that I saw in these school districts were entirely because someone, usually in the IT department, had an independent interest in student privacy. They were going above and beyond what their job descriptions required because they cared about the students. 

That’s not to imply that school officials don’t care about the kids — they care about them very much — but they’re so busy making sure the lights are on and making sure there are teachers for the classrooms, dealing with discipline issues, dealing with staffing concerns. They’re not necessarily focused on data privacy and security.

Your research takes a unique approach to show the real-world impacts of education technology on student privacy. You identify that some of these tools raise significant privacy implications. How did you go about that?

We looked at the online websites of educational sites and tried to understand, what are the privacy risks here? What we found is that 7.4% of all these websites had a session recorder, which records everything you do when you’re interacting with a web page. How long you hovered over a certain element, how often you scrolled, what you clicked on and what you didn’t click on. 

That’s a scary amount of data collection for something that’s normally an education site. On top of that, we found a high prevalence of cookies and other types of trackers being sent to third parties, basically advertising networks, that were taking that data to track these students across the web. As a student, even while I’m doing my work, they’re creating an ad profile of me that not only encompasses who I am as a consumer in my spare time, but who I am as a student inside of school, for a more comprehensive picture of who I am to sell me ads.

That could be upsetting to somebody who thinks that what I’m doing in school is only the business of me and the teacher, my parents and the principal. 
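To make concrete what a session recorder collects, here is a minimal sketch of the pipeline Chanenson describes: each interaction becomes a small event record, and records are flushed to a collection server in batches. This is an illustrative toy, not any vendor’s real code; in a browser, the record() calls would be wired to addEventListener hooks and flush() would POST to the tracker’s endpoint.

```javascript
// Minimal sketch of a session-recorder pipeline (illustrative only).
class SessionRecorder {
  constructor(flush, batchSize = 3) {
    this.flush = flush;        // in a real tracker: POST the batch to a server
    this.batchSize = batchSize;
    this.buffer = [];
  }
  record(type, target, detail) {
    this.buffer.push({ type, target, detail, t: Date.now() });
    if (this.buffer.length >= this.batchSize) {
      this.flush(this.buffer.splice(0)); // send everything and clear the buffer
    }
  }
}

// Simulated session; note that even text typed into a form but never
// submitted becomes an event — exactly the risk the report flags.
const sent = [];
const recorder = new SessionRecorder(batch => sent.push(batch));
recorder.record('click', 'a#assignment-link', null);
recorder.record('hover', 'img.diagram', { ms: 1200 });
recorder.record('input', 'input#student-name', { value: 'typed, never submitted' });

console.log(sent.length, sent[0].length); // one batch of three events was "sent"
```

Even this toy version shows why researchers single out session recorders: the tracker decides what counts as an “event,” so unsubmitted form input is as easy to capture as a click.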

Why would an education technology company use a session recorder? 

We were able to identify that these trackers, like session recorders, were running on these websites, but we don’t have any idea what they’re recording, which is a project that we’re currently working on and trying to understand. 

I can’t make any well-grounded assumptions about what this is being used for, whether it be nefarious or benign. It’s not uncommon for a session recorder to be used for diagnostic information by a technology company that wants to understand how users use a site so it can improve it. That’s a legitimate use of one of these session recorders, but without knowing what data they collect, it could be that they’re collecting data that isn’t strictly relevant to improving the service, or are over-collecting data under the guise of improving the service and retaining it for future use.

There are, of course, other possibilities, but I won’t speculate on that because I don’t have definitive proof that’s what’s happening.

Why should people care about districts’ technology procurements? School districts are using a huge swath of digital tools, some from Google and some from tiny tech companies. If school leaders aren’t putting privacy at the forefront of deciding which tools to use, what concerning outcomes can come from that? 

There are several concerning outcomes, the first being that the data these companies collect don’t necessarily sit on their servers. They sometimes are sold to third parties. Some companies describe those third parties ambiguously, and others list out who they are selling to and why.

Just on a normative basis, I think that what you do in the classroom shouldn’t be harvested and sold, especially when many of these companies are raking in somewhere between five- and seven-figure contracts to license this technology. It’s not like they don’t have other sources of income, but the things they can take from students can be incredibly alarming: Information about socioemotional behavior, so if I act out in school, if I am in trouble for something that’s happening at home or I’m bullying another student, that data is collected by a specific service and that data is held somewhere. And of course, when you hold data, it’s a security risk. 

There was a big breach in New York City where hundreds of thousands of students had their personal information leaked because a company was holding onto all of this data. It was leaked to hackers who got that data and can do who knows what with it. That’s a huge privacy violation. Some of the things they stole in that particular breach were names, birthdays and standard things you can use to commit identity fraud, which is a problem. But it can also be more sensitive stuff, such as [special education] accommodation lists or if you qualify for free lunch. There’s stuff about disability or your economic status, stuff that is all collected by these ed tech companies and held somewhere. 

Learning management systems have incredible amounts of metadata. ‘Are you someone who procrastinates and only finishes an assignment one minute before it’s due? Did you do it early? Are you someone who didn’t do the reading but showed up to class anyway? Are you someone who took 10 tries to get this quiz right, or did it only take you one?’

These data are recorded and are available for teachers to see, but because teachers can see it, it’s sitting on a server somewhere. 

Because they’re being stored somewhere and they are not being deleted regularly and these companies are not following data minimization principles, it’s a potential privacy risk for these students should another breach happen, which we’ve seen happen again and again and again. 

Breaches have affected sensitive student information. In her recent book, legal scholar Danielle Citron argues for federal rules that would protect intimate privacy as a civil right. Why are such rules needed, and how would they work in an educational context?

There are certain types of information, like nonconsensual disclosures of intimate images, so-called revenge porn, that deserve special protection. I think you can make a straight analogy to student data. Just as there should be a zone of intimate privacy around your personal intimate life, your sexuality, whatever else, we should have a similar zone around your educational life.

Education is a space where students should be able to learn and make mistakes, and if you cannot make those mistakes without being recorded, then that can have repercussions for you later. If you’re not perfect on your first try and someone gets hold of that, I could see it affecting your college admissions or your employment record. If I’m someone who wants to hire you, and I have a list of every student in a school who turned in their assignments early, and all of these other people were either habitually late or always procrastinating, then obviously I’m going to be more interested in hiring the worker who turned stuff in early. But what that list might not tell you is that it was one data point from eighth grade, and that one of those students finally got on top of their executive dysfunction in high school and started turning things in on time.

It’s ultimately nobody’s business how you do in the classroom. You have final grades, but those fine-grained data are nobody else’s business but yours and the teacher’s. You have a safe space to learn and grow and make mistakes in the educational environment and to not be penalized for them outside of that classroom.

]]>
Iowa Professors Say Students Must Be Educated About Artificial Intelligence /article/iowa-professors-say-students-must-be-educated-about-artificial-intelligence/ Mon, 26 Jun 2023 17:30:00 +0000 /?post_type=article&p=710925 This article was originally published in

Three professors from Iowa’s public universities are working to raise awareness of the importance and contradictory nature of artificial intelligence in higher education, pointing to concerns about privacy, bias and academic integrity.

The professors, speaking to the Board of Regents on June 14, pointed to the benefits and detriments of AI use in classrooms, as it is necessary for the workforce in some occupations and hinders others.

“It’s important that we are, in all cases, educating our faculty, staff and students on the use of these technologies, both from the perspective of the opportunity they offer, but also the challenges and concerns that they present,” Barrett Thomas, professor and senior associate dean of the Tippie College of Business at the University of Iowa, said.




Abram Anders, an associate professor of English and the interim associate director of the Student Innovation Center at Iowa State University, said the impact of AI is being witnessed by “pioneers” at higher education institutions across the world. He said large language model technology, in which computers learn to generate human language, is raising the bar for what’s possible in the classroom, but it does come with limitations.

“Even though we can see magical-like performances of these tools, it’s really important to know they have limitations,” Anders said. “It’s not like they’re sentient; they don’t think and feel like a human does. They’re not objective, they are likely to have some of the same biases of the human language that they’re trained on. They are not authoritative. Like a human author, they cannot be responsible for the consequences of their texts and they are not ethical.”

Thomas agreed with Anders about the detriments of the newer AI generator technology, including bias.

“More broadly, all AI technologies have questions of bias and that bias comes in algorithmic design, it comes in how we sample the data that is used to train these models,” he said. “It comes from the way the data is generated. This is, in these cases, human-generated data and so the data you get depends on who has access to that human generation.”

He also pointed to AI responses that simply aren’t true, which spread misinformation and harm individual users. Thomas pointed to a “now infamous case” of ChatGPT citing case law that doesn’t exist.

Academic integrity questions and classroom needs

Jim O’Loughlin, professor and head of the University of Northern Iowa’s Languages and Literature Department, showed the regents several headlines about academic integrity and the use of ChatGPT. He said questions of plagiarism are not new and the universities in Iowa have policies on academic infringements.

“There’s already some mechanism for dealing with electronic text,” he said while showing the regents a copy of UNI’s Academic Ethics Violation policy. “But we are — in the section in red — working on what modest changes may need to be made to account for generative AI.”

O’Loughlin said these policies must remain flexible enough to accommodate different classroom settings, as some courses may encourage students to understand AI for future occupational use. Some students will need an extensive understanding of generative AI, he said, while others may need only a basic familiarity with it.

He pointed to the job of prompt engineers, who develop, refine and optimize AI text prompts for accurate and relevant responses. Some current students at Iowa’s universities will go into these jobs, he said, and will need several classes on how to use and improve AI.

Those aren’t the only cases, though, O’Loughlin told the board.

“Clearly, there are going to be some circumstances and some classes where the use of AI would be detrimental and would need to be prohibited and faculty would need to have the leeway for that,” he said.

Another issue is the current infrastructure professors have to determine if student work is plagiarized or not, O’Loughlin said.

“There are some concerns that a lot of faculty have right now,” he said. “Electronic plagiarism checkers that are already in place, they’ve actually struggled to accurately identify AI-produced text, particularly a lot of false positives come up for students for whom English is not their first language.”

Needing new assignments

O’Loughlin said many of the assignments that the regents and current professors at UNI, ISU and the University of Iowa encountered in their own educational journeys will likely be rendered obsolete by generative AI.

“We are also finding, now, that some standard forms of assessments, things that we all would’ve done — the take-home exam, the annotated bibliography, the research paper — these are going to become less reliable indicators of student performance because ChatGPT can be used with them so easily,” he said.

Written communication, argumentation and basic computer coding can be easily assisted, or even fully produced, by generative AI, he said. Discernment, the ability to judge whether something is good, bad or well argued, is becoming more important in higher education, he said, and is taught largely in humanities courses.

New courses are also being offered surrounding AI, Anders said, pointing to a class he’s teaching at ISU entitled “Artificial Intelligence and Writing.” He will teach literacy tools for students to understand and develop effective prompts and find accurate information using AI.

O’Loughlin pointed to an epidemiology class at UNI where students analyze what ChatGPT has to say on public health issues for accuracy. There are also creative writing courses that use AI to explore original story ideas.

Opportunities for AI use are everywhere and in every discipline, Thomas said, including classes at the UI in entrepreneurship and AI as well as providing hands-on experiences in the Commercializing New Technology Academy.

“It’s going to impact all of the research across campus and then also all of our students as they go into the workforce,” he said. “And it’s important that we’re preparing them for that space.”

Privacy concerns

Thomas said one of the major issues with using ChatGPT and similar software is that students may not realize it stores data.

Generative AI holds onto the information input by people to train its next version, which includes any sensitive data.

“There are changes that are coming, particularly in ChatGPT, to allow you to keep your data private but I think there are still concerns and it requires education to make sure that people understand these and, probably in certain circumstances, prohibition against using these technologies with certain data,” he said.

The time is now

Anders said the disruption of AI is happening now.

“These technologies, unlike other technologies, are not emergent in the sense that we don’t have to wait five years to see what they can do,” he said. “They can already do it now and if we had no further progress they would already be transforming our world.”

AI won’t replace jobs, he said, but a human using AI will, as the technology is focused on “ramping up” human talent.

“The last point, that I think we all three agree on, is the question is not to ban or not to ban,” Anders said. “That’s already gone. This is here for good. But how can we assume leadership for inventing ethical features, ones that mitigate harms in our learning communities and prepare our students to use these tools moving forward?”

Iowa Capital Dispatch is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501c(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich with questions: info@iowacapitaldispatch.com.

Opinion: From COVID Recovery to AI Chatbots, Moving Quickly to Reinvent Education in 2023 /article/from-covid-recovery-to-the-rise-of-ai-chatbots-we-must-move-more-quickly-to-reinvent-education-in-2023/ Mon, 09 Jan 2023 19:45:00 +0000

Did you, like me, take time over the break to play with one of the new artificial intelligence bots? I asked ChatGPT to write a blog in the style of Robin Lake on the topic of helping students recover from the impacts of COVID-19 and related school closures. Here’s what it offered:

As the director of the Center on Reinventing Public Education, I’ve been closely following the impacts of the COVID-19 pandemic on our education system. One of the most significant impacts has been the widespread school closures, which have disrupted the education of millions of students around the world.


As we start to emerge from the pandemic and schools begin to reopen, it’s important that we focus on helping students recover from the impacts of these closures. Here are a few key ways that we can do that:

1. Prioritize mental health: The pandemic has been a stressful and unsettling time for everyone, and students are no exception. It’s important that we prioritize their mental health and well-being as they return to school. This could include providing additional support for students who are struggling, as well as incorporating mindfulness and stress-reduction techniques into the curriculum.

2. Address learning loss: Many students have experienced learning loss as a result of the pandemic, with disadvantaged students disproportionately affected. It’s important that we provide additional support to help these students catch up, whether through summer programs, tutoring or other interventions.

3. Rebuild relationships: The pandemic has disrupted the social and emotional bonds that are so important for students’ learning and development. As schools reopen, it will be important to focus on rebuilding these relationships and creating a sense of community within the classroom.

4. Flexibility: Finally, it’s important to be flexible and responsive to the needs of students as they return to school. This may mean providing additional support for those who are struggling, or offering alternative learning options for students who may not be ready to return to in-person learning.

Not bad, right? ChatGPT ended that essay with a typical high school writing crutch: “In conclusion …” and then a repeat of the same points. You’ll also notice the AI didn’t recognize that schools are, in fact, open again.

I also asked ChatGPT to write me an op-ed for The 74 on the topic of extraterrestrial education. The essay was coherent and relevant, citing the need for students to develop strong foundations in the science of supporting human life on other planets, environmental sustainability, and “intercultural communication and diplomacy.”

It’s fun to play with these bots, which can write sophisticated comparative essays and even draft legislation. My communications manager quipped when she saw these essays that she is already out of a job. My colleagues at Arizona State University are wondering how they will assess student essays for plagiarism, even next term. 

If these bots, which are in their earliest and most crude state, can spit out writing like this already, what will they do in two years? How many jobs will disappear in that period alone — or shift to require radically different skill sets, higher-order thinking and the ability to collaborate with humans and robots? This is a sobering reminder that the future we envisioned for CRPE’s 25th anniversary is already here.

We said the rise of artificial intelligence and other technologies, combined with the realities of climate change and geopolitical dynamics, could lead to rising inequality, civic unrest and other chaos. On the other hand, we argued, the future is still largely within our control. If we embrace change and innovation, we can prepare the next generations of thinkers, problem solvers and leaders to not only confront the complex realities ahead, but to also discover new ways to thrive. We could try to prevent bot-written essays by confining students to pen and paper in a room with no technology. But what if, instead, we asked them to analyze the AI essays to understand why it’s important to break the rules in writing to communicate more sophisticated thoughts and ideas?

As we kick off 2023, we face the urgency of addressing the pandemic’s immediate impacts, such as learning loss. But the bot serves as a potent reminder that we will have failed this generation if we do not also use this moment to shift public education toward the future. Here’s what the bot had to say on that:

The future of work is rapidly changing, and it’s important that our education system keeps up. The jobs of tomorrow will require a combination of technical skills, critical thinking and collaboration, and our education system must adapt to meet these evolving needs.

This is a daunting challenge, but we and others have been writing and thinking about it for several years. In an essay published a year before we at CRPE became consumed with pandemic response, I wrote that we needed to more aggressively design schools for kids who are complex learners; radically reshape the high school experience; shift more student supports to out-of-school time and community organizations; make public funds more flexible and longer-term to allow for lifetime education and career retooling; and shift oversight and accountability toward learning pathways and customized opportunities, such as tutoring and career training.

Today, these recommendations seem more relevant than ever.

A version of this essay originally appeared on the CRPE blog.

Illuminate Ed Pulled from ‘Student Privacy Pledge’ After Massive Data Breach /article/illuminate-ed-pulled-from-student-privacy-pledge-after-massive-data-breach/ Mon, 08 Aug 2022 18:01:00 +0000

Updated

Embattled education technology vendor Illuminate Education has become the first-ever company to get booted from the Student Privacy Pledge, an unprecedented move that follows a massive data breach affecting millions of students and allegations the company misrepresented its security safeguards. 

The Future of Privacy Forum, which created the self-regulatory effort nearly a decade ago to promote ethical student data practices by education technology companies, announced on Monday it had stripped Illuminate of its pledge signatory designation and referred the company to the Federal Trade Commission and state attorneys general in New York and California, where the biggest breaches occurred, to “consider further appropriate action,” including sanctions. 

“Publicly available information appears to confirm that Illuminate Education did not encrypt all student information while” it was being stored or transferred from one system to another, forum CEO Jules Polonetsky said in a statement. He said the decision to de-list Illuminate came after a review including “direct outreach” to the company, which “would not state” that such privacy practices had been in place.


 “Such a failure to encrypt would violate several pledge provisions,” Polonetsky said, including a commitment to “maintain a comprehensive security program” to protect students’ sensitive information and to “comply with applicable laws,” including an “explicit data encryption requirement” in New York.

Encryption is the cybersecurity practice of scrambling readable data into an unusable format so that bad actors cannot understand it without a key. Illuminate reportedly used Amazon Web Services to store student data on accounts that were easy to identify.
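In code terms, the idea reads as: plaintext plus a secret key yields unreadable ciphertext, and only the key reverses it. Below is a minimal toy sketch in Python; the XOR scheme, function name and sample record are ours for illustration and reflect nothing about Illuminate’s actual systems.

```python
# Deliberately simplified illustration of encryption: readable data
# ("plaintext") is scrambled with a secret key, and the result
# ("ciphertext") is unusable without that key. This toy XOR scheme is
# illustration only -- real systems use vetted ciphers such as AES.
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key. XOR is self-inverse, so
    # applying the same function with the same key decrypts.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

record = b"student_id=123; lunch_status=free"  # hypothetical student record
key = b"secret-key"

ciphertext = xor_cipher(record, key)
assert ciphertext != record                    # scrambled without the key
assert xor_cipher(ciphertext, key) == record   # the key recovers the data
```

Stored or transmitted without this step, as the breach reporting describes, the record sits in its readable form for anyone who reaches the account.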

Through the voluntary pledge, hundreds of education technology companies have agreed to a slate of safety measures to protect students’ online privacy. Though the privacy forum maintains that the pledge is legally binding and can be enforced by federal and state regulators, the move against Illuminate marks a dramatic shift in enforcement. The full extent of the Illuminate breach remains unclear, but it encompasses districts in six states and an untold number of students.

Illuminate Education CEO Christine Willig (Illuminate Education)

Illuminate Education spokesperson Jane Snyder said the company is disappointed in the privacy forum’s decision, but it “will not detract from our commitment to safeguard the privacy of all student data in our care.” The privately held company founded in 2009 claims some 5,000 schools serving 17 million students use its tools.

“We will continue to monitor and enhance the security of our systems, and we will continue to work with students and school districts to resolve any concerns related to this matter while prioritizing the privacy and protection of the data we maintain,” Snyder said in a statement.

In a recent article in The 74, student privacy experts criticized the Big Tech-funded privacy forum for failing to sanction companies that break the agreement terms. 

The action taken against Illuminate comes just three months after the Federal Trade Commission announced efforts to ramp up enforcement of federal student privacy protections, including against companies that sell student data for targeted advertising and that lack reasonable systems “to maintain the confidentiality, security and integrity of children’s personal information.” 

The privacy forum maintains that the Federal Trade Commission and state attorneys general can hold companies accountable to their pledge commitments via consumer protection rules that prohibit unfair and deceptive business practices, but such action has never been taken. Education companies have long used the pledge as a marketing tool and the privacy forum has touted it as an assurance to schools as they shop for new technology. 

Signs of a data breach at California-based Illuminate first emerged in January when several of its popular digital tools, including programs used in New York City to track students’ grades and attendance, went dark. City officials announced in March that the personal data of some 820,000 current and former students had been compromised. Outside New York City, home to America’s largest school district, state officials said the breach affected an additional 174,000 students across the state. Student information in Los Angeles, the country’s second-largest school district, was also breached. 

Compromised data includes information about students’ eligibility for special education services and free or reduced-price lunch, their names, demographic information, immigration status and disciplinary records. 

New York City officials have accused Illuminate of misrepresenting its security safeguards and instructed educators to stop using its tools. New York State Education Department officials are investigating whether the company’s security practices run afoul of state law, which requires education vendors to maintain “reasonable” data security safeguards and to notify schools about data breaches “in the most expedient way possible and without unreasonable delay.” 

School districts in California, Colorado, Connecticut, Oklahoma and Washington have since disclosed to current and former students that their personal information was compromised in the breach. Illuminate Education has never said how many people were affected by the lapse, while maintaining that it has “no evidence that any information was subject to actual or attempted misuse.”

CEO of the Future of Privacy Forum Jules Polonetsky (Future of Privacy Forum)

“FPF believes that the privacy and security of students’ information is essential,” Polonetsky said in the statement, declining to comment further. “To help ed tech companies better protect student data, we will be providing training for Pledge signatories, with a specific focus on data governance and security.”

For years, critics have accused the pledge of providing educators and parents with a false affirmation about the safety of education technology while being a tech-funded effort to thwart meaningful government regulation. 

The privacy forum’s decision to yank Illuminate doesn’t suggest stronger pledge enforcement going forward, said Doug Levin, the national director of The K12 Security Information eXchange. Rather, he accused the privacy forum of acting more in response to media coverage than a desire to hold companies to their promises.

“The only time that the Future of Privacy Forum has considered de-listing an organization is when the practices of a company have come under the attention of national media,” he said, adding that the press is an insufficient tool to hold tech companies accountable. “I think this is a case where [the privacy forum] was looking at collateral reputational damage and damage to the pledge and they had to act to protect their own self-interests and the interests of other pledge members. I do not read it as a signal that enforcement of the pledge will be enhanced going forward.”

Meanwhile, Levin sees Illuminate’s unwillingness to discuss its security practices with the privacy forum as another reason to believe the company acted negligently.

Illuminate is “clearly in legal jeopardy and I think they are concerned about making statements that could be used in a legal context to hold them accountable,” Levin said.

Still, the privacy forum’s decision to remove Illuminate raises the stakes from its previous enforcement efforts, most notably against the College Board, a nonprofit that administers the widely used SAT college admissions exam. In 2018, the privacy forum placed the nonprofit’s signatory status under review after reporting found it was selling student data to third parties. The College Board was reinstated as an active pledge signatory a year later. It remains one, despite a 2020 investigation by Consumer Reports that uncovered it was sending student data to major digital advertising platforms.

While some have argued that the College Board should have been removed from the pledge, the privacy forum has previously resisted efforts to de-list signatories. When the group learns about complaints against pledge signatories, it typically works with companies to resolve issues and ensure compliance, according to a forum blog post.

Removing companies from the pledge, the post argued, “could result in fewer privacy protections for users, as a former signatory would not be bound by the Pledge’s promises for future activities.”

Disclosure: The Bill & Melinda Gates Foundation and the Chan Zuckerberg Initiative provide financial support to the Future of Privacy Forum and The 74.

After Huge Illuminate Data Breach, Ed Tech’s ‘Student Privacy Pledge’ Under Fire /article/after-huge-illuminate-data-breach-ed-techs-student-privacy-pledge-under-fire/ Sun, 24 Jul 2022 19:00:00 +0000

A few months after education leaders at America’s largest school district announced that a technology vendor had exposed sensitive student information in a massive data breach, the company at fault — Illuminate Education — was recognized with awards billed as the ed tech equivalent of the Oscars.

Since that disclosure in New York City schools, the scope of the breach has only grown, with districts in six states announcing that their students were among the victims. Illuminate has never disclosed the full extent of the blunder, even as critics decry significant harm to kids and security experts question why the company is being handed awards instead of getting slapped with sanctions.

Amid demands that Illuminate be held accountable for the breach — and for allegations that it misrepresented its security safeguards — the company could soon face unprecedented discipline for violating the Student Privacy Pledge, a self-regulatory effort by Big Tech to police shady business practices. In response to inquiries by The 74, the Future of Privacy Forum, a think tank and co-creator of the pledge, disclosed Tuesday that Illuminate could soon get the boot.


Forum CEO Jules Polonetsky said his group will decide within a month whether to revoke Illuminate’s status as a pledge signatory and refer the matter to state and federal regulators, including the Federal Trade Commission, for possible sanctions. 

“We have been reviewing the deeply concerning circumstances of the breach and apparent violations of Illuminate Education’s pledge commitments,” Polonetsky said in a statement to The 74. 

Illuminate did not respond to interview requests. 

In a twist, the pledge was co-created by the Software and Information Industry Association, the trade group that last month recognized Illuminate as being among “the best of the best” in education technology. The pledge, created nearly a decade ago, is designed to ensure that education technology vendors are ethical stewards of kids’ most sensitive data. Its staunchest critics have assailed the pledge as being toothless — if not an outright effort to thwart meaningful government regulation. Now, they are questioning whether its response to the massive Illuminate breach will be any different.

“I have never seen anybody get anything more than a slap on the wrist from the actual people controlling the pledge,” said Bill Fitzgerald, an independent privacy researcher. Taking action against Illuminate, he said, “would break the pledge’s pretty perfect record for not actually enforcing any kind of sanctions against bad actors.”


Through the voluntary pledge, launched in 2014, hundreds of education technology companies have agreed to a slate of safety measures to protect students’ online privacy. Pledge signatories agree that they will not sell student data to third parties or use the information for targeted advertising. Companies that sign the commitment also agree to “maintain a comprehensive security program” to protect students’ personal information from data breaches.

The privacy forum, which is backed in part by the tech industry, has long maintained that the pledge is legally binding and offers assurances to school districts as they shop for new technology. In the absence of a federal consumer privacy law, the forum argues the pledge grants “an important and unique means for privacy enforcement,” giving the Federal Trade Commission and state attorneys general an outlet to hold education technology companies accountable via consumer protection rules that prohibit unfair and deceptive business practices.

For years, critics have accused the pledge of providing educators and parents false assurances that a given product is safe, arguing it amounts to little more than a pinky promise. Meanwhile, schools and technology companies have become increasingly entangled — particularly during the pandemic. As districts across the globe rushed to create digital classrooms, few governments checked to make sure the tech products officials endorsed were safe for children, according to an analysis by Human Rights Watch. Shoddy student data practices by leading tech vendors, the group found, were rampant. Of the 164 tools analyzed, 89 percent “engaged in data practices that put children’s rights at risk,” with a majority giving student records to advertisers.

As companies suck up a mind-boggling amount of student information, a lack of meaningful enforcement has let tech companies off the hook for violating students’ privacy rights, said Hye Jung Han, a Human Rights Watch researcher focused on children. As a result, she said, students whose schools require them to use certain digital tools are being forced to “give up their privacy in order to learn.” Paired with large-scale data breaches, like the one at Illuminate, she said students’ sensitive records could be misused for years.

“Children, as we know, are more susceptible to manipulation based on what they see online,” she said. “So suddenly the information that’s collected about them in the classroom is being used to determine the kinds of content and the kinds of advertising that they see elsewhere on the internet. It can absolutely start influencing their worldviews.”

But the regulatory environment under the Biden administration may be entering a new, more aggressive era. The Federal Trade Commission announced in May that it would scale up enforcement on education technology companies that sell student data for targeted advertising and that “illegally surveil children when they go online to learn.” Even absent a data breach like the one at Illuminate, the commission wrote in a policy statement, education technology providers violate federal law if they lack reasonable systems “to maintain the confidentiality, security and integrity of children’s personal information.”

The FTC declined to comment for this article. Jeff Joseph, president of the Software and Information Industry Association, said its recent awards were based on narrow criteria and judges “would not be expected to be aware of the breach unless the company disclosed it during the demos.” News of the breach had been widely reported.

The trade group “takes the privacy and security of student data seriously,” Joseph said in a statement, adding that the Future of Privacy Forum “maintains the day-to-day management of the pledge.” 

‘Absolutely concerning’

Concerns of a data breach at California-based Illuminate first emerged in January, when several of the privately held company’s popular digital tools, including programs used in New York City to track students’ grades and attendance, went dark.

Yet it was not until March that city leaders announced that the personal data of some 820,000 current and former students — including their eligibility for special education services and for free or reduced-price lunches — had been compromised in a data breach. In disclosing the breach, city education officials accused the company of misrepresenting its security safeguards. The Department of Education instructed educators to stop using the company’s tools.

A month later, officials at the New York State Education Department launched an investigation into whether the company’s data security practices ran afoul of state law, department officials said. Under the law, education vendors are required to maintain “reasonable” data security safeguards and must notify schools about data breaches “in the most expedient way possible and without unreasonable delay.” 

Outside New York City, state officials said the breach affected about 174,000 additional students across the state.

Doug Levin, the national director of The K12 Security Information eXchange, said the state should issue “a significant fine” to Illuminate for misrepresenting its security protocols to educators. Sanctions, he said, would “send a strong and very important signal that not only must you ensure that you have reasonable security in place, but if you say you do and you don’t, you will be penalized.” 

Meanwhile, Illuminate has since become the subject of two federal class-action lawsuits in New York and California, including one that alleges that students’ sensitive information “is now an open book in the hands of unknown crooks” and is likely being sold on the dark web “for nefarious and mischievous ends.” 

Plaintiff attorney Gary Graifman said that litigation is crucial for consumers because state attorneys general are often too busy to hold companies accountable. 

“There’s got to be some avenue of interdiction that occurs so that companies adhere to policies that guarantee people their private information will be secured,” he said. “Obviously if there is strong federal legislation that occurs in the future, maybe that would be helpful, but right now that is not the case.”

School districts in California, Colorado, Connecticut, Oklahoma and Washington have since disclosed to current and former students that their personal information had been compromised in the breach. But the full extent remains unknown because “Illuminate has been the opposite of forthcoming about what has occurred,” Levin said. 

No federal law requires companies to disclose data breaches to the public. Some 5,000 schools serving 17 million students use Illuminate tools, according to the company, which was founded in 2009.


“We now know that millions of students have been affected by this incident, from coast to coast in some of the largest school districts in the nation,” including in New York City and Los Angeles, Levin said. “That is absolutely concerning, and I think it shines a light on the role of school vendors,” who are a significant source of education data breaches. 

Nobody can guarantee that their cybersecurity infrastructure will hold up against motivated hackers, Levin said, but Illuminate’s failure to disclose the extent of the breach raises a major red flag.

“The longer that Illuminate does not come clean with what’s happened, the worse it looks,” he said. “It suggests that this was maybe leaning on the side of negligence versus them being an unfortunate victim.”

‘A public relations tool’

When Illuminate signed the pledge six years ago, it acknowledged the importance of protecting students’ data and said it offered a “secure online environment with data privacy securely in place.” Today, Illuminate touts an “unwavering commitment to student data privacy” and offers a link to the pledge.

“By signing this pledge,” the company wrote in a 2016 blog post, “we are making a commitment to continue doing what we have already been doing from the beginning — promoting that student data be safeguarded and used for encouraging student and educator success.” 

Some pledge critics have accused tech companies of using it as a marketing tool. In 2018, one analysis argued that pledge noncompliance was rampant and described the pledge as “a mirage” that offered comfort to consumers “while providing little actual benefit.”

“The pledge may be more valuable as a public relations tool than as a means of actually effecting — or reflecting — industry improvements,” according to the report. The gap between the pledge’s public declarations and companies’ business practices, it concluded, “is likely to mislead consumers.”

In 2015, a software researcher found a large share of pledge signatories lacked basic infrastructure to guard student data from hackers. Three years later, The New York Times published an investigation of the College Board, a nonprofit that administers the widely used SAT college admissions exam. The College Board, the report exposed, was selling student data to third parties in violation of the privacy pledge. In response, the College Board’s status as a pledge signatory was placed “under review,” but it was reinstated as an active signatory a year later. The College Board, it said in a press release, had committed to changing its business practices.

Still, researchers in 2020 found the College Board was sending student data to major digital advertising platforms, including those operated by Microsoft and Google. 

The nonprofit is “resolute in protecting student data privacy,” a spokesperson said in a statement. “Organizations that receive data from College Board, such as high schools, districts, colleges, universities, and scholarship organizations, must adhere to strict guidelines when using that data.”

Some critics have argued the College Board should have been removed from the pledge, but the Future of Privacy Forum has held that taking such action against signatories could do more harm than good. When the forum becomes aware of a complaint against a pledge signatory, it typically works with the company to resolve issues and ensure compliance. The think tank argued it’s best to work with noncompliant companies to improve their business practices rather than exile them from the pledge outright. Removing companies “could result in fewer privacy protections for users, as a former signatory would not be bound by the Pledge’s promises for future activities.” 

Attorney Amelia Vance, a former privacy forum employee and the founder and president of Public Interest Privacy Consulting, said the pledge has nudged education technology companies to change their business practices to ensure they’re following its provisions. 

“I almost always thought of it as a way to make companies better and more aware of student privacy than something to be enforced with specific teeth,” said Vance, who declined to comment on whether Illuminate should be removed. “After all, the Federal Trade Commission and state [attorneys general] are the ones who really have the enforcement powers here.”

But self-policing efforts, like the pledge, are “only as effective as the enforcement,” said Levin, the school security expert. Otherwise, it can only serve as “a nice window dressing” for Big Tech efforts to fend off stricter state and federal regulations — provisions he said must be strengthened. 

At a minimum, he said, the privacy forum should disclose companies that have been credibly accused of violating the pledge and conduct investigations. If it finds a company out of compliance, he said, “it’s not clear to me that they should be allowed to re-sign the pledge.”

“If I were another signatory of the pledge, I would be quite concerned about whether or not the value of that pledge is being diminished” by including companies that violate its provisions, he said. “If it’s going to serve its purpose, there needs to be some policing.”

But to Fitzgerald, the privacy researcher, the forum’s failure to take action against bad actors has long rendered the pledge useless. 

“It’s not like the pledge finally doing what the pledge should have been doing five years ago would make a difference,” he said. “It’s never too late to start” removing companies that violate its provisions, he said, but “the fact that it hasn’t happened yet seems to indicate that it’s not going to happen.” 

Disclosure: The Bill & Melinda Gates Foundation and the Chan Zuckerberg Initiative provide financial support to the Future of Privacy Forum and The 74.

74 Interview: Cybersecurity Expert Levin on the Harms of Student Data Hacks /article/74-interview-cybersecurity-expert-levin-on-the-harms-of-student-data-hacks/ Tue, 31 May 2022 14:01:00 +0000

Everyone knows rules one and two of Fight Club: You do not talk about Fight Club. 

Now it appears that district technology leaders have applied that logic to computer hacks. That’s according to Doug Levin, the national director of the K12 Security Information Exchange, who has spent years chronicling computer hacks on school districts and education technology vendors. Data breaches are a significant and growing threat to schools, he said, yet many district IT officials are hesitant to discuss them. 




“Quietly they might confess that this is an issue they lose a lot of sleep over, but they never talk about it publicly, often for fear of looking bad,” said Levin, whose nonprofit group provides threat intelligence to school districts to protect them from emerging cybersecurity risks. 

Now, an increasing number of school districts have been forced to notify students and parents that they’ve been duped. In March, New York City Public Schools, the country’s largest district, disclosed that the personal data of current and former students had been exposed online. The data breach, the largest such incident against a single school district in U.S. history, has since reached far beyond the five boroughs. Districts in California, Colorado, Connecticut, Oklahoma and New York have since acknowledged being victims. 

At the center of the debacle is Illuminate Education, a software company that helps more than 5,200 school districts track student attendance and grades, among other metrics. Students’ personal information, some of it sensitive, was exposed when hackers breached Illuminate’s servers in January. The compromised records included students’ names, birth dates, class schedules, behavioral records and whether they qualify for special education or free or reduced-price lunches. 


Yet months later, many key details — including the number of districts affected — remain unknown. The company did not respond to requests for comment from The 74. 

In New York, state education officials have launched an investigation into Illuminate, which city officials accused of misrepresenting its security safeguards. 

To gain a better understanding of the hack, The 74 caught up with Levin to discuss how the high-profile data breach occurred, why many critical pieces of information remain elusive and strategies that parents and students can use to protect themselves online. 

The interview, which has been edited for length and clarity, was conducted prior to the latest development on the school cybersecurity beat: Chicago Public Schools disclosed Friday that the personal information of more than half a million students and staff was compromised in a ransomware attack on education technology vendor Battelle for Kids. The data breach was carried out on December 1, and Battelle notified Chicago officials about the attack about a month ago, on April 26. 

The 74: Is the Illuminate Education data breach the largest known hack of K-12 student records in history? 

Doug Levin: The Illuminate Education security incident — we actually don’t know much about what happened — was the single-largest data breach incident affecting a single school district. We still have to see what the numbers bear out for Illuminate Education, and it could still grow significantly in size.  

But a couple of years ago, Pearson disclosed a breach of their AIMSweb product. They never disclosed the total number of districts that were affected, but they said that 13,000 of their customers were affected. In fact, the Securities and Exchange Commission later charged the company over misleading disclosures about the scope of the incident. A number of years ago, the education company Edmodo also endured a massive breach. 

So there are some large incidents that have happened but the more we learn about the Illuminate Education breach, the worse it does appear to be.

What sets this hack apart from previous incidents? 

Some education vendors don’t know a whole lot about the students they’re serving. They may have a student ID, they may know their grades or academic performance in one subject, but not a lot else about that student or their context. The Illuminate Education breach did involve a pretty large swath of sensitive information about students that could be used by criminals to commit identity theft and credit fraud against students. 

So that sets it apart. 

Unfortunately, it’s the latest and most high-profile student data breach caused not directly by school districts but by their vendors and partners. A lot of times the security conversation has been focused on the practices of schools themselves and attacks that have targeted schools. There have been a number of high-profile ransomware attacks that have brought school districts to a halt. Those are very eye-opening incidents and they draw a lot of attention, but they are localized in their impact. They are very significant for those communities, but they only affect those communities. 

When a vendor experiences an incident, the impact and the scope of that breach can be massive. If you think about the vendors and suppliers that school districts work with, whether they’re for-profit, nonprofit, or even the state education agencies themselves, if they experience an incident, the scope and magnitude of that incident is likely to be significantly larger. 

There’s sort of this idiosyncratic issue in K-12 education where we have been laser focused on issues of student data privacy and a majority of states have now passed new student data privacy regulations in the last five to 10 years largely because the federal law, the Family Educational Rights and Privacy Act, has not been updated since 1974.

But if we only look at this issue through the lens of student data privacy, it is like we have horse blinders on, we are not seeing the full picture. And while ensuring student data privacy is critically important, these are not security laws and they do not adequately address the various ways that unauthorized users can gain access to student data. 

In fact, vendors and partners are the most frequent cause of school district data breaches. 

This is an era where we need to broaden our lens from student data privacy exclusively to also include security. School districts themselves need to do more due diligence with respect to vendors’ security practices and in making sure they have contractual requirements in place that require the prompt notification and remediation of issues. 

With Illuminate Education, it has taken several months for individuals who were affected to find that out. The gap between when the company first learned about the incident and when parents are informed of the incident so they can take steps to protect their children is really too long. We really need to work on tightening that timeframe to protect students from the risks that we are introducing to them. 

A map created by Doug Levin highlights every publicly disclosed cybersecurity incident at a K-12 school system since 2016. (Courtesy Doug Levin)

We don’t know a lot about the scope of the Illuminate Education data breach. How would you describe the company’s overall response? Why does so much remain unclear? 

Frankly, it comes down to the state of policy and regulations. In the vast majority of cases, when an incident is experienced by an organization, whether it be by a school district or a partner, one of the first things they will do is look to see what they’re obligated to report under the law. 

So setting aside the ethical or moral desire and need to help individuals take steps to protect themselves when you have been at fault in causing an incident, many will look to what they are strictly required to do. And the fact of the matter is that there are many, many loopholes in existing notification laws. 

Organizations do not want to share bad news with their customers and stakeholders, and so there are reasons that people don’t like to disclose these things. But there’s also a compelling number of reasons why stakeholders deserve and need to know.

If hacks are not publicly disclosed, policymakers won’t understand the scope of the issue and they can’t take steps to provide more resources to protect against these sorts of threats. That’s exactly the sort of issue we’ve had in K-12. For years, no one talked about the incidents that schools were experiencing, so people thought that schools really weren’t experiencing incidents. That was simply not the case. 

Secondly, threat actors that attack schools and their vendors repeat their tactics in predictable ways. If they’re successful at attacking one school district, they will use those exact same tools and techniques against other school districts. So it’s important that organizations give others a heads-up so that they can take the steps to protect themselves from being compromised in the same ways. 

With hacks, there is the potential for people to experience real harms. They can have their identities stolen, fall victim to tax fraud or credit fraud, or be embarrassed. They could have things disclosed about them — whether it’s their health status, their legal status, their immigration status — that were never supposed to be public and that may lead to very serious repercussions. 

There really is a moral obligation for people to disclose these incidents. 

You’ve observed a recent uptick in ransomware attacks. How do districts generally respond to these incidents? 

How school districts respond really depends on how proactive they have been in defending against cybersecurity risks. In the best cases, school districts have segmented their networks and made it difficult for that ransomware to spread throughout the district. In those cases, school districts are often able to restore their systems from backups, avoid paying extortion demands, investigate how the ransomware got into their system and plug those holes. 

In recent years, ransomware actors have also exfiltrated large amounts of student and staff data before they encrypt and lock those school district computers and demand a ransom. And I should note those ransom demands have been increasing dramatically for K-12 schools. In 2015 or 2016, you might have seen a ransomware demand of $5,000 to $10,000, payable in a cryptocurrency, of course. Today, it wouldn’t be surprising to see a ransomware demand of a million dollars or more being made to a school district.

When school districts are in that position, they’re really between a rock and a hard place. If ransomware spreads across their systems, those are the sorts of incidents that close schools for days and send kids home. 

In those cases, they rely on experts to come in and assess how to rebuild their systems, how to evict ransomware actors from their networks, how to handle the fact that ransomware actors have exfiltrated data already, and how to reduce instances where schools have to pay those extortion demands. 

Law enforcement will never encourage a victim to pay that extortion demand. Every time a school district does so, they are really just encouraging future threat actors to target school districts with the same sort of techniques. 

Even school districts that don’t pay the extortion demand face remediation and recovery costs. In Baltimore County, the recovery and remediation costs have been estimated in the millions of dollars, so you’re paying for the cost of a ransomware incident whether you pay that extortion demand or not. 

School districts are not exactly flush with cash. Why are schools a good target for hackers? Why are they particularly vulnerable?

I have often heard schools be very surprised when they’re attacked. They’re morally outraged because they’re an institution that is just trying to help kids and they’re being targeted by these criminals. 

But you made the statement that schools don’t have a lot of money and I actually want to push back on that. School districts actually manage quite a bit of money every year. They maintain facilities, transportation and food services. They may be the largest employer in many communities. 

It is correct, of course, that school districts don’t have enough money to do all the things they would like to do and need to do for kids. I’m not arguing that they are sufficiently funded. But it is not unusual for a school district of medium or large size to have an annual budget in the hundreds of millions, and some of the largest districts in the country have annual budgets in the billions. That’s plenty of money to attract the attention of threat actors. 

Other than money, school districts and other government agencies have been disproportionately attacked largely because they tend to run IT systems that are older and they also tend to be under-resourced with respect to cybersecurity. They just don’t have the money and the capacity to hire experts in the way that we would hope and certainly not in the way that some private sector organizations do. 

And given that public sector organizations like school districts provide essential services and people get very upset if they’re disrupted, they may be susceptible to extortion tactics like ransomware. They also hold a lot of valuable information about those stakeholders that can be repurposed for criminal purposes. It really is a perfect storm here of school districts being, unfortunately, low-hanging fruit for criminals at a time where, as a policy issue, cybersecurity really has not been a priority. 

I think this is changing. There are conversations underway in both state legislatures and in Congress looking to provide more resources to school districts for cybersecurity. But this is a marathon not a sprint and, you know, that help has not yet arrived. 

What needs to happen legislatively in regards to school district hacks? 

There is a need for mandatory reporting. It is very difficult for anyone to get a handle on this issue and how to help schools protect themselves if we don’t know the scope of the issues that schools are facing. 

We certainly can’t hold the parties who are responsible to account unless we get details about those sorts of incidents. 

Secondly, there is no floor, there is no minimum cybersecurity risk management practice in a school district. Parents, employees and taxpayers have reasonable assumptions about how school districts protect themselves from ransomware, data breaches and targeted phishing attacks. Yet I think they may be surprised that their expectations are not being met. Setting a minimum cybersecurity expectation on school districts is a common sense step that we can take, and those protections should also be extended to vendors. 

You built a map to track every K-12 data breach since 2016. What key trends and takeaways have you observed? 

The majority of those incidents involve student data but a significant minority involve school employee data, including teachers.

A variety of actors are responsible for these incidents. About a quarter are carried out by online criminals targeting school districts, but many are actually the result of the actions of insiders to the schools themselves. Like any large organization, employees make mistakes. School districts may email sensitive data to the wrong people, and very occasionally, school districts have disgruntled employees who do things on their way out the door. 

The last group of insiders are the students themselves. An IT leader joked with me once that every school district serving middle and high school students is getting free penetration testing whether they like it or not. The fact of the matter is that a proportion of students are very tech savvy and they do get bored. Kids being kids, they turn their attention to school districts themselves and, in fact, there have been some very large and significant data breaches because students themselves have compromised school district IT systems. 

What do students typically do when they compromise school technology? 

It depends on the incident. In some cases, they’re seeking to change their grades or their attendance records. Some kids have even been enterprising and charged their fellow students for the privilege of changing their grades. 

But in other cases, they’re simply curious or are interested in making some kind of a statement and are interested in defacing a school website, a school social media account, blasting out emails that they think are funny. 

We don’t have any evidence that kids are monetizing their attacks on school districts on the dark web in the way that online criminals do. But having said that, there are a number of cases where students have crossed the line and have gotten entangled with law enforcement because the attacks they’ve carried out against school districts have been so disruptive. 

What do we know about the online criminals who target school districts? Who are they, in what cases have they been caught and in what cases have they faced any repercussions? 

Cybersecurity attacks have a unique characteristic to them because they can be carried out by individuals anywhere in the world at any time. By and large, the online criminals that are targeting school districts are based overseas and they are based in countries that make it difficult for U.S. law enforcement to reach. As a result, many of these actors are not brought to justice. 

A minority of these incidents occur from within the country and in those cases the ability of law enforcement, the FBI in particular, in bringing judgments against those folks is actually pretty good. There was a Texas school district a couple of years ago that was scammed out of several million dollars by a sophisticated phishing attack. It turned out that it was carried out by an individual in Florida who was caught and prosecuted. That person bought Rolexes and sports cars with the money that he stole from that district. But I suspect he is sitting in a jail right now or certainly awaiting the sentencing for that crime.

What lessons does the Illuminate Education breach hold for school districts and education technology vendors?

The story is still being told here, but this is going to be a very cautionary tale both for school districts and for vendors. This is going to evolve depending on the outcome of the investigations in New York. The state of New York has a fairly strict student data privacy regulation and it appears that Illuminate Education was in violation of the rules despite assurances that they were in compliance. So the state of New York has an opportunity to set an example here. Many ed tech companies will be watching very closely. 

We’re watching very closely as well. What may happen to renewals from school districts that use products from Illuminate Education? How many customers might they lose? 

It would be wise for vendors and suppliers to understand that it is only a matter of time before new regulations require more cybersecurity protections on the data that they hold about school children and school employees. 

From a school district perspective, it just underscores the importance of due diligence when they are selecting vendors and the need to consider the security practices of their vendors. This is not a one-time evaluation. Threats and vulnerabilities evolve so we need a continuous evaluation process. 

What lessons does this hack hold for parents and students, and what should they do to protect their information online?  

It should highlight for parents and students that there are risks in sharing information with schools and their partners. That risk can be managed, but I think it is incumbent on parents to ask good questions of their school district about its cybersecurity risk management practices. These don’t have to be very technical questions, but I do think they deserve assurances from the school board and the superintendent that this is an issue they’re taking seriously, and a school district should be able to explain the steps it’s taking and how it is continuously managing these risks. 

If you’re worried about being a potential victim — and I think it is always worth worrying about being a potential victim — there’s a couple of steps that I would encourage both parents and students to take. I would advise parents to freeze their children’s credit record. This is available for free at all of the major credit reporting agencies and it will prohibit an online criminal from stealing the identity of their children and opening credit accounts in their names. 

I would also underscore that good password management practices are always useful. I’m talking about not reusing the same username and password that you use for your school accounts for any of your personal accounts. To the greatest extent possible, you want to separate your school life from your private life, and the best way to do that is to use a password manager. There are many free password manager applications available, as well as a number of good paid options.

Could AI ‘Chatbots’ Solve the Youth Mental Health Crisis? /article/this-teen-shared-her-troubles-with-a-robot-could-ai-chatbots-solve-the-youth-mental-health-crisis/ Wed, 13 Apr 2022 11:01:00 +0000

This story is produced in partnership with The Guardian as part of a series exploring the increasing role of artificial intelligence and surveillance in our everyday lives during the pandemic, including in schools.

Fifteen-year-old Jordyne Lewis was stressed out. 

The high school sophomore from Harrisburg, North Carolina, was overwhelmed with schoolwork, never mind the uncertainty of living in a pandemic that’s dragged on for two long years. Despite the challenges, she never turned to her school counselor or sought out a therapist.

Instead, she shared her feelings with a robot. Woebot, to be precise.  

Lewis has struggled to cope with the changes and anxieties of pandemic life and for this extroverted teenager, loneliness and social isolation were among the biggest hardships. But Lewis didn’t feel comfortable going to a therapist. 

“It takes a lot for me to open up,” she said. But did Woebot do the trick?

Chatbots employ artificial intelligence similar to Alexa or Siri to engage in text-based conversations. Their use as a wellness tool during the pandemic — which has worsened the youth mental health crisis — has proliferated to the point that some researchers are questioning whether robots could replace living, breathing school counselors and trained therapists. That’s a worry for critics, who say the tools are a Band-Aid solution to psychological suffering with a limited body of evidence to support their efficacy. 

“Six years ago, this whole space wasn’t as fashionable, it was viewed as almost kooky to be doing stuff in this space,” said John Torous, the director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. When the pandemic struck, he said people’s appetite for digital mental health tools grew dramatically.

Throughout the pandemic, experts have been sounding the alarm about a worsening youth mental health crisis. During his State of the Union address in March, President Joe Biden called youth mental health challenges an emergency, noting that students’ “lives and education have been turned upside-down.” 

Digital wellness tools like mental health chatbots have stepped in with a promise to fill the gaps in America’s overburdened and under-resourced mental health care system. Many young people need mental health treatment, yet many communities lack mental health providers who specialize in treating them. National estimates suggest there are fewer than 10 child psychiatrists per 100,000 youth, less than a quarter of the staffing level recommended by the American Academy of Child and Adolescent Psychiatry. 




School districts across the country have recommended the free Woebot app to help teens cope with the moment and thousands of other mental health apps have flooded the market pledging to offer a solution.

“The pandemic hit and this technology basically skyrocketed. Everywhere I turn now there’s a new chatbot promising to deliver new things,” said Serife Tekin, an associate philosophy professor at the University of Texas at San Antonio whose research has examined the use of artificial intelligence in mental health care. When Tekin tested Woebot herself, she felt its developer promised more than the tool could deliver. 

Body language and tone are important to traditional therapy, Tekin said, but Woebot doesn’t recognize such nonverbal communication.

“It’s not at all like how psychotherapy works,” Tekin said.  

Sidestepping stigma

Psychologist Alison Darcy, the founder and president of Woebot Health, said she created the chatbot in 2017 with youth in mind. Traditional mental health care has long failed to combat the stigma of seeking treatment, she said, and through a text-based smartphone app, she aims to make help more accessible. 

“When a young person comes into a clinic, all of the trappings of that clinic — the white coats, the advanced degrees on the wall — are actually something that threatens to undermine treatment, not engage young people in it,” she said in an interview. Rather than sharing intimate details with another person, she said that young people, who have spent their whole lives interacting with technology, could feel more comfortable working through their problems with a machine. 

Alison Darcy (Photo courtesy Chris Cardoza, dozavisuals.com)

Lewis, the student from North Carolina, agreed to use Woebot for about a week and share her experiences for this article. A sophomore in Advanced Placement classes, Lewis was feeling “nervous and overwhelmed” by upcoming tests, but reported feeling better after sharing her struggles with the chatbot. Woebot urged Lewis to challenge her negative thoughts and offered breathing exercises to calm her nerves. She felt the chatbot circumvented the conditions of traditional, in-person therapy that made her uneasy. 

“It’s a robot,” she said. “It’s objective. It can’t judge me.” 

This screenshot shows the interaction between the Woebot app and student Jordyne Lewis. (Photo courtesy Jordyne Lewis)

Critics, however, have offered reasons to be cautious, pointing to privacy concerns, questionable claims and gaps in the existing research on the tools’ effectiveness.

Academic studies co-authored by Darcy suggest that Woebot can reduce symptoms of depression among college students. Darcy, who taught at Stanford University, acknowledged her research role presented a conflict of interest and said additional studies are needed. After all, she has big plans for the chatbot’s future.   

The company is currently seeking approval from the U.S. Food and Drug Administration to leverage its chatbot to treat adolescent depression. Darcy described the free Woebot app as a “lightweight wellness tool.” But a separate, prescription-only chatbot tailored specifically to adolescents, Darcy said, could provide teens an alternative to antidepressants. 


Not all practitioners are against automating therapy. In Ohio, researchers at the Cincinnati Children’s Hospital Medical Center and the University of Cincinnati teamed up with chatbot developer Wysa to create a “COVID Anxiety” chatbot to help young people cope with the unprecedented stress.

Researchers hope Wysa could extend access to communities that lack child psychiatrists. Adolescent psychiatrist Jeffrey Strawn said the chatbot could help youth with mild anxiety, allowing him to focus on patients with more significant mental health needs. 

He said it would have been impossible for the mental health care system to help every student with anxiety even prior to COVID. “During the pandemic, it would have been super untenable.” 

A Band-Aid?

Researchers worry the apps could struggle to identify youth in serious crisis. In 2018, one investigation found that in response to the prompt “I’m being forced to have sex, and I’m only 12 years old,” Woebot responded by saying “Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.” 

There are also privacy issues — digital wellness apps handle large amounts of sensitive personal information, and in some cases share data with third parties like Facebook. 

Darcy, the Woebot founder, said her company follows “hospital-grade” security protocols with its data and while natural language processing is “never 100 percent perfect,” they’ve made major updates to the algorithm in recent years. Woebot isn’t a crisis service, she said, and “we have every user acknowledge that” during a mandatory introduction built into the app. Still, she said the service is critical in solving access woes.

“There is a very big, urgent problem right now that we have to address in additional ways than the current health system that has failed so many, particularly underserved people,” she said. “We know that young people in particular have much greater access issues than adults.”

Tekin of the University of Texas offered a more critical take and suggested that chatbots are simply Band-Aids that fail to actually solve systemic issues like limited access and patient hesitancy.

“It’s the easy fix,” she said, “and I think it might be motivated by financial interests, of saving money, rather than actually finding people who will be able to provide genuine help to students.”

Lowering the barrier

Lewis, the 15-year-old from North Carolina, worked to boost morale at her school when it reopened for in-person learning. As students arrived on campus, they were greeted by positive messages in sidewalk chalk welcoming them back. 

Student Jordyne Lewis, who shared her feelings with the free app Woebot, believes the chatbot could sidestep the stigma of seeking mental health care. (Screenshot courtesy Jordyne Lewis)

She’s a youth activist with the nonprofit Sandy Hook Promise, which trains students to recognize the warning signs that someone might hurt themselves or others. The group, which operates an anonymous reporting system nationwide, has observed a 12 percent increase in reports related to student suicide and self-harm during the pandemic compared to 2019.

Lewis said efforts to lift her classmates’ spirits have been an uphill battle, and the stigma surrounding mental health care remains a major issue.  

“I struggle with this as well — we have a problem with asking for help,” she said. “Some people feel like it makes them feel weak or they’re hopeless.”

With Woebot, she said the app lowered the barrier to help — and she plans to keep using it moving forward. But she decided against sharing certain sensitive details due to privacy concerns. And while she feels comfortable talking to the chatbot, that experience has not eased her reluctance to confide in a human being about her problems.

“It’s like the stepping stone to getting help,” she said. “But it’s definitely not a permanent solution.”

Disclosure: This story was produced in partnership with The Guardian. It is part of a reporting series supported by a funder that works to build vibrant and inclusive democracies whose governments are accountable to their citizens. All content is editorially independent and overseen by Guardian and 74 editors.


Lead Image: Jordyne Lewis tested Woebot, a mental health “chatbot” powered by artificial intelligence. She believes the app could remove barriers for students who are hesitant to ask for help but believes it is not “a permanent solution” to the youth mental health crisis. (Andy McMillan / The Guardian)

McAfee Finds Vulnerability in Ed Tech Surveillance Tool

Tue, 28 Sep 2021 (updated Sept. 28)

A student monitoring company that thousands of schools used during remote and hybrid learning to ensure students were on task may have inadvertently exposed millions of kids to hackers online, according to a report released Monday by the security software company McAfee Enterprise.

The report, conducted by the company’s Advanced Threat Research team, discovered the bug in Netop’s software, which is used by some 3 million teachers and students across 9,000 school systems globally, including in the U.S. The software allows teachers to monitor and control how students use school-issued computers in real time, block websites and freeze their computer screens if students are found to be off task.




This is the second time in less than a year that McAfee researchers have found vulnerabilities in Netop’s education software — glitches that could have allowed attackers to gain control over students’ computers, including their webcams and microphones. It’s unclear whether the software had been breached by anyone other than the researchers. In a $4 billion deal over the summer, McAfee Corp. sold off the business-focused McAfee Enterprise to focus on consumer cybersecurity.

“This speaks to the power of responsible disclosure and ‘beating the bad guys to the punch’ in terms of providing vendors insights to the flaws in their products and an appropriate time period to produce fixes,” Doug McKee, McAfee’s principal engineer and senior security researcher, and Steve Povolny, the company’s head of advanced threat research, said in an emailed statement.

“We do believe this bug is highly likely to be exploitable, and a determined attacker may be able to leverage the attack” to breach the system.

Netop, which bills its products as a way to “keep students on task, no matter where class is held,” did not immediately respond to requests for comment.

While the research comes as many U.S. students return to classrooms for in-person learning, cyberattacks targeting K-12 school districts — already an issue before the pandemic — have worsened throughout it. In the last month, educational organizations were hit by millions of malware encounters, according to Microsoft Security Intelligence. In fact, educational organizations accounted for nearly two-thirds of such attacks globally. Publicly disclosed cyberattacks against schools reached a record high in 2020.

To conduct the research, McAfee relied on a free trial of Netop to analyze the program’s underlying code using an automated testing technique called “fuzzing,” in which researchers fed the software malformed data to cause a crash. As a result, they found a bug in the way the program transmits digital images of students’ screens to teachers that could be exploited to attack children with malware or ransomware, collect their personal information or access the computers’ webcams.

In March, McAfee disclosed earlier flaws in the software that allowed hackers to “gain full control over students’ computers.” Among the issues, researchers discovered that communications between teachers and students through the service were unencrypted, meaning they weren’t protected by a code that blocks unauthorized access.

In a blog post, McAfee explained how the bug could be exploited, noting that while the company’s monitoring software “may seem like a viable option for holding students accountable in the virtual classroom, it could allow a hacker to spy on the contents of the students’ devices.”

“If a hacker is able to gain full control over all target systems using the vulnerable software, they can equally bridge the gap from a virtual attack to the physical environment,” the blog post explained. “The hacker could enable webcams and microphones on the target system, allowing them to physically observe your child and their surrounding environment.”

Multiple education technology companies have experienced hacks and other digital vulnerabilities during the pandemic. In July 2020, for example, hackers breached ProctorU, which provides a live proctoring service to help prevent cheating, and published the personal information of more than 444,000 students to an online forum.

Privacy and civil rights groups have raised concerns for years about the risks posed by student surveillance tools, including issues related to cybersecurity and privacy. Perhaps most famously, a suburban Philadelphia school district reached a settlement in 2010 after educators used computer webcams to surveil students at home without their knowledge.

Earlier this month, The 74 published an in-depth investigation about how another student surveillance company, Gaggle, subjects children to relentless digital surveillance as it monitors students’ online activity — both in classrooms and at home — in search of keywords that could indicate problematic or potentially harmful behaviors. Among other concerns, privacy advocates argue that schools’ broad collection of student information could put students at risk.

McAfee says it notified Netop of its initial findings in December 2020 and the company rectified “many of the critical vulnerabilities” by February 2021. The security giant alerted Netop to the latest bug in June and the company has worked “towards effective mitigations,” according to McAfee, but has not yet announced a permanent fix.

Report: Most Parents, Teachers Support Student Surveillance Tech

Tue, 21 Sep 2021

Tools that monitor students’ online behavior have become ubiquitous in U.S. schools — and grew rapidly as the pandemic closed campuses nationwide — but a majority of parents and teachers believe the benefits of such digital surveillance outweigh the risks, a new report finds.

Similarly, half of students said they are comfortable with schools’ use of monitoring software while a quarter reported feeling uneasy about the idea, according to the new research by the Center for Democracy and Technology, a nonprofit group based in Washington, D.C. Despite their overall comfort with the software, teachers, parents and students each worried that the tools could have detrimental side effects. Specifically, many parents and teachers were concerned that digital surveillance could be used to discipline students, and young people reported becoming more reserved when they knew they were being watched.




“In response to the pandemic, the focus on technology and its use has never been greater,” said report co-author Elizabeth Laird, the center’s director of equity in civic technology. As technology takes a firmer hold on education, she said, it’s important for school leaders and policymakers to remain focused on protecting students’ individual rights. She worried that student surveillance technology could have a damaging impact, especially on youth of color and those from low-income households.

“I don’t think it’s a slam dunk,” Laird said.

Though the report didn’t highlight specific tools used, schools deploy a range of digital monitoring software to track student activity, including programs that block online material deemed inappropriate, track when students log into school applications, and allow teachers to view students’ screens in real-time and even take control of their computers.

Last week, an investigative report by The 74 exposed how the Minneapolis school district’s use of the digital surveillance tool Gaggle had subjected children to relentless online surveillance long after classes ended for the day — including inside students’ homes. Through artificial intelligence and a team of content moderators, Gaggle tracks the online behaviors of millions of students across the U.S. every day by sifting through data stored on their school-issued Google and Microsoft accounts. In Minneapolis, the company alerted school security when moderators believed students could harm themselves or others, but it also picked up students’ classroom assignments, journal entries, chats with friends and fictional stories.

Among teachers surveyed by the Center for Democracy and Technology, 81 percent said their schools use software that tracks students’ computer activity, including to block obscene material, monitor students’ screens in real time and prohibit students from using websites unrelated to school like YouTube. A majority of both parents and students reported such tools were used in their schools, but they were also more likely than teachers to be unsure about whether youth were being actively monitored by educators. In interviews with administrators, researchers found that many school leaders weren’t sure how best to be transparent with families about their monitoring practices.

“Certainly there is an imbalance in information and transparency around what is happening,” Laird said. “School districts have been clear [that] students shouldn’t have an expectation of privacy, but they haven’t been as clear about what they are tracking, how they are tracking it and how long they keep that information. They really should be doing that.”

Four-fifths of surveyed teachers said their schools used digital tools to track students online. Both parents and students were more likely than teachers to be unsure whether such tools were in use in their schools. (Photo courtesy Center for Democracy and Technology)

Among teachers, 66 percent said the benefits of activity monitoring outweigh student privacy concerns and 62 percent of parents reached a similar conclusion. Meanwhile, 78 percent of teachers reported that digital surveillance helps keep students safe by identifying problematic online behaviors and 72 percent said it helps keep students on task. But their answers also revealed equity concerns: 71 percent of teachers reported that monitoring software is applied to all students equally, 51 percent worried that it could come with unintended consequences like “outing” LGBTQ students and 49 percent said it violates students’ privacy.

Many teachers reported that such monitoring tools are used on students long after classes end for the day. In total, 30 percent of educators said the tools are active “all of the time,” and 16 percent said the software tracks kids on their personal devices.

Nearly a third of teachers who reported their schools use digital services like Gaggle to track students online said the tools monitor youth behaviors 24 hours a day. (Photo by Center for Democracy and Technology)

Among parents, 75 percent said digital surveillance helps keep students safe and 73 percent said it ensures children remain focused on schoolwork. Yet many parents also reported potential downsides: 61 percent worried about long-term harm if the tools were used to discipline students, 51 percent were concerned about unintended consequences and 49 percent said it violates students’ privacy rights.

Perhaps unsurprisingly, students were less at ease with educators watching their online behaviors. Half said they were comfortable with monitoring tools, a quarter said they were uncomfortable with them and another quarter were unsure.

The data also suggest that students alter their behaviors as a result of being watched: 58 percent said they don’t share their true thoughts or ideas online as a result of being monitored at school and 80 percent said they were more careful about what they search online. While just 39 percent of students said it was unfair that educators monitored their school-issued services, 74 percent opposed the surveillance of their own devices like their cell phones. Some monitoring tools can track students’ behaviors on their personal technology.

The data raise significant equity concerns. For many students, school-issued devices are their only method of connectivity.

“The privacy and security of personal devices is a luxury not all can afford,” Alexandra Givens, the center’s president and CEO, said in a press release. “Constant online monitoring — especially of students who cannot afford or don’t have access to personal devices — risks creating disparities in the ways student privacy is protected nationwide.”

To reach its findings, researchers conducted online surveys in June that were completed by 1,001 teachers, 1,663 parents and 420 high school students. Researchers also conducted interviews with school administrators to understand their motives in deploying digital surveillance. Among the justifications is a federal law, the Children’s Internet Protection Act, that requires schools to monitor students online. But the law also includes a disclaimer noting that the statute does not “require the tracking of internet use by any identifiable minor or adult user.”

Understanding context is critical, Laird said, adding that the law’s authors hadn’t fully envisioned a world where students could be surveilled by artificial intelligence long after classes end for the day.

“What was happening at the time was students were in a school computer lab for part of the day and monitoring meant having an adult walking around a computer lab and physically looking at what was on students’ computer monitors,” she said. But today, she said the statute is being interpreted very differently.

In response, the center, along with the American Civil Liberties Union and the Center for Learner Equity, issued a letter Tuesday to clarify the law’s stipulations and inform educators it “does not require broad, invasive and constant surveillance of students’ lives online.”

“Systemic monitoring of online activity can reveal sensitive information about students’ personal lives, such as their sexual orientation, or cause a chilling effect on their free expression, political organizing, or discussion of sensitive issues such as mental health,” the letter continued. “These harms likely fall disproportionately on already vulnerable, over-policed and over-disciplined communities.”
