Google – The 74: America's Education News Source

Proposal for NYC AI-Focused Public High School Sparks Pushback (Mon, 16 Mar 2026)

This article was originally published in Chalkbeat.

New York City students with a passion for STEM — and an interest in artificial intelligence — may soon have a high school dedicated to training “the next generation of technology professionals.”

But families in Manhattan’s District 2 are pushing back against plans for Next Generation Technology High School, a new screened admissions high school that would take the place of the tiny, girls-only Urban Assembly School of Business for Young Women. Next Generation would be the first city public school to focus its curriculum on AI and computer science.

As details of the two proposals have emerged over the last month, so have dual tensions: what should fill the space left by Business for Young Women, and how private technology companies and their artificial intelligence products could shape the curriculum at Next Generation.

Much of the opposition to Next Generation has come from families at Lower Manhattan Community School, a middle school also in the Broadway building. Parents at the school, also known as LMC, have called on the department for years to expand enrollment from grades 6-8 up to grade 12.

The Panel for Educational Policy, the board that votes on new schools and closures, is expected to consider the proposals for Next Generation and Business for Young Women at its April 29 meeting.

The Education Department released both proposals on March 6, the day after the city’s eighth graders received their high school acceptance offers. If approved, Next Generation would welcome its first class of ninth graders in the fall. (The plan to close Business for Young Women in June is not contingent on Next Generation’s approval.)

Despite not having the green light yet, Next Generation has already held three virtual open houses. Its website states the school is “set to open” in fall 2026, noting that applications would open March 19.

Parents ask: ‘Why this school and why here?’

Manhattan High Schools Superintendent Gary Beidleman introduced the idea for Next Generation Technology High School at a Feb. 25 District 2 meeting.

Panel for Educational Policy members and families of the three co-located schools at 26 Broadway — in addition to LMC and Business for Young Women, Richard R. Green High School of Teaching shares the building — said that meeting was the first time the district school community had been notified of the proposed STEM- and technology-focused screened high school.

At the Feb. 25 announcement, Beidleman said Next Generation grew out of his experience in summer 2024, and that Google and OpenAI are part of the planning team for the school. One of the school’s goals, he said, is to “expand pathways connected to high-growth technology careers” and provide advanced STEM and technology programming for NYC students. Next Generation also plans to offer a summer internship program with Carnegie Mellon University.

Caleb Haraguchi-Combs, founding principal and project director of Next Generation High School, said in an information session that the school would utilize coursework from Google Skills. How much of this AI-powered, AI-focused Google coursework would make up the curriculum is still in flux, according to the proposal.

The school’s academic description includes language similar or identical to that found on the Google Skills website: Next Generation’s “special access to technology industry mentors,” “technology certifications,” and “curriculum that adapts to the dynamic changes in the technology field” are offerings advertised on the homepage of the Google Skills site.

Officials and families question new school proposal process

The community and Panel for Educational Policy members have asked questions about the fast proposal process, speaking to uncertainty around admissions for the coming school year.

A letter to the Panel for Educational Policy said the proposal seemingly came out of nowhere, and that families were not provided adequate engagement opportunities before its release. Panel Chair Greg Faulkner said he has received hundreds of similar letters from parents since the community learned of the incoming proposal in late February.

High school offers were released March 5, ahead of the panel’s vote and months before the proposed school would open. It remains unclear how the Education Department would handle screening requirements — such as interviews or assessments — after the main admissions cycle has concluded. The Office of District Planning did not respond to questions about how enrollment would work for this fall.

A petition in support of the school, created by Next Generation’s founding principal and program director on March 8, had under 100 signatures at the time of publishing.

A public hearing is scheduled for April 14, two weeks before the panel’s vote.

“I would love more transparency around why the department chooses certain schools to go in certain places,” said Sarah Calderon, a parent at Lower Manhattan Community School. “When we asked the superintendent, ‘Why this school and why here?’ he said he had no data on district demand.”

Beidleman told parents at the Feb. 25 District 2 meeting that expanding Lower Manhattan Community “was not an idea that was on the table.”

The Education Department receives many proposals each year, including some from outside New York City, said Sean Rux of the Office of New School Development.

“This was the proposal that spoke to us,” Rux said.

Families push to expand Lower Manhattan Community School

The plan to close the underenrolled Business for Young Women school has been percolating for a few years — with just 91 students this year, it’s the smallest district high school in the city, said Education Department officials.

Families at Lower Manhattan Community School say they have pushed for years to expand into a 6–12 model, and would like to move into the space used by Business for Young Women, if closed.

“A proposal to expand LMC could potentially open up sixth grade admissions to applicants citywide, but we have not been given the opportunity to even submit a proposal,” said Anne Hager, a parent of a sixth grader at Lower Manhattan Community School.

At a PTA meeting with Education Department staff on Wednesday, LMC’s Student Leadership Team presented its case to expand the school instead of opening Next Generation.

A new 6-12 school would eliminate the need for LMC students to go through a second, onerous application process, something that students with disabilities would especially benefit from, they said. The presentation also cited Department of Education data from 2024 showing that 6-12 schools have nearly three times higher demand than their 6-8 middle school counterparts.

The department’s proposal focuses largely on space at the Broadway campus, estimating that Next Generation would serve roughly 450 students by its fourth year. All three schools can comfortably co-locate, according to the proposal, though its capacity calculations do not allot space for significant expansion of either Richard R. Green High School or LMC.

Debate over AI timing and oversight

Next Generation’s proposal arrives amid broader debate over artificial intelligence in schools.

The school initially marketed itself in information sessions and on social media as an “AI school,” though DOE officials later clarified that students would learn about artificial intelligence rather than be taught by it.

“Students need to be creators, not consumers, of technology,” Beidleman said at the Feb. 25 meeting. “Lessons learned from the past show us that new tech in place creates an opportunity.”

Some parents have argued that broad use of an AI platform in public schools should not be allowed before comprehensive guidelines have been released by the city.

Greg Faulkner, who chairs the Panel for Educational Policy, said he first learned of the proposal last month. Since then, the panel has received hundreds of letters from parents opposing the plan and raising concerns about the lack of community engagement so far.

“I have two major hesitations with this: We don’t know what kind of AI involvement there will be. The development team has not provided a playbook for how that will look,” Faulkner said. “And in reading the response letters from District 2 parents, I see that proper engagement and process was not done.”

At a District 2 town hall on March 5, Chancellor Kamar Samuels said the Education Department expects to release AI guidance in the coming weeks and will provide a 45-day window for community feedback once it’s published.

Five Community Education Councils have passed resolutions calling for a two-year moratorium on artificial intelligence use in schools. But calls for broad AI guidelines implemented at the city level are nothing new: former Comptroller Brad Lander called for a citywide playbook in 2024 amid the rollout of an AI-powered reading program.

“I think the question of teacher capacity and teacher shortages, the research on kids and AI, is still nascent, and the DOE’s lack of its own AI policy leads me to question the timing of any AI school,” said Calderon, the parent at Lower Manhattan Community.

Chalkbeat is a nonprofit news site covering educational change in public schools. This story was originally published by Chalkbeat. Sign up for their newsletters.

Exclusive: New Google Partnership a ‘Sizable Investment’ in AI for Teachers (Mon, 23 Feb 2026)

A top professional organization for teachers has inked a three-year deal with Google to offer AI training to “all six million K-12 teachers and higher education faculty” in the U.S., an audacious undertaking by the tech giant that could reach millions of students and dwarf previous tech forays into education.

“While Google’s been offering educational products for 20 years, this is a different moment for us,” said Chris Phillips, Google’s vice president and general manager of education.



He called the effort the largest for Google in two decades of working with teachers and students. Phillips didn’t immediately offer a price tag, but said it’s “a sizable investment.”

The training, offered through the ed tech-focused group ISTE+ASCD, will include hands-on experience with Google’s Gemini and NotebookLM tools and will offer certificates and digital badges.

“We have just heard so much feedback from teachers that are just saying, ‘We are not prepared,’” said Richard Culatta, ISTE+ASCD’s CEO. “‘We don’t have the training, we don’t have the background that we need for the realities of teaching in an AI world, both teaching in the classroom and also, secondarily, but equally as important, preparing students for the world that they’re going to be in.’”

It’s the latest in a series of large-scale teacher training initiatives over the past few months. In July, the American Federation of Teachers, the nation’s second-largest teachers union, announced its own $23 million training academy, partnering with Microsoft, OpenAI, and Anthropic to train up to 400,000 educators.

At the time, AFT President Randi Weingarten said the academy was a way to ensure that teachers, not technology, remain in control of the classroom.

But AFT’s partnership with OpenAI and Anthropic drew sharp criticism from educators and researchers, who questioned whether tech companies with products to sell and market share to protect are the right architects for teacher training. Education technology critic Audrey Watters called AFT’s academy “a gigantic public experiment that no one has asked for,” while ed tech analyst Alex Sarlin said tech companies were in a “land-grab moment.” 

Microsoft has also launched its own community-based platform, Microsoft Elevate for Educators, offering free courses, live training sessions and credentials. 

Google itself in 2024 committed $25 million through its philanthropic arm to several nonprofits, including ISTE+ASCD, 4-H, and aiEDU, with particular attention to reaching underserved communities. Its goal at the time was to reach more than half a million K-12 and college students, as well as educators.

ISTE+ASCD — the group is a combination of two that merged in 2023 — was the beneficiary of $10 million of the $25 million, saying it would collaborate with several other groups, including the National Education Association and the Computer Science Teachers Association.

Though Google has its own AI platform, Culatta insisted that the work won’t be about pushing specific tools, saying that kids need enduring AI skills as the tools change. 

In 2023, ISTE+ASCD introduced its own AI chatbot built on educator-focused content and trained solely on materials developed or approved by the organization. The chatbot tapped into curated databases in a bid to give teachers routine access to high-quality research.

In some ways, efforts like those of AFT and others reflect a lack of leadership at the federal level. The Trump administration, through an executive order, has backed efforts to expand AI in schools, but last spring it also eliminated the Office of Educational Technology, which had long focused on expanding access to technology.

Culatta, who ran the office under President Obama, said it’s important that organizations like ISTE+ASCD “step up when there are key needs that may not be filled at the federal level. And we just want to make sure that, regardless of where we would like some things to happen, at this point we just have to do all-hands-on-deck and make sure we’re supporting kids and teachers.”

‘Massive undertaking’ or waste of time?

The sheer scale of Monday’s announcement underscores how urgently educators see the need to learn about AI: RAND Corp. last spring found that the share of school districts training teachers on AI more than doubled from 2023 to 2024, from 23% to 48%. Researchers predicted that as many as three-fourths of districts would be in the AI training business by the end of 2025.

Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, said the new partnership is “a massive undertaking that is urgently needed right now. I hope it includes a research component so we can learn from it because much more is needed.”

Google’s Phillips said the company has “multiple arms of research happening all around the world” and “will start to produce some of those and share them publicly where we’re doing studies” in classrooms.

“We’ll see how the results land, but ultimately we want to improve learning outcomes,” he said. “We want to help change. We want to bend the curves on proficiency.”

Lake, who has long urged schools to take AI readiness seriously, said school principals, district leaders and teachers-in-training “also need to be AI literate, as do students and families. We can’t rely only on private companies with an interest in AI products to fund and lead AI readiness.”

Others were more sharply critical of the new partnership.

Justin Reich, an associate professor of digital media at MIT and host of the podcast TeachLab, said industry-sponsored professional development is, at its core, a “customer acquisition” campaign. Since ISTE+ASCD is historically both a membership-driven teacher organization and an industry trade association, he asked, “How can it be an honest broker to those two constituencies, while also launching an enormous initiative that privileges the products of one particular vendor?”

Google’s past educator certification programs, he said, “focused more on tool use and adoption than on learning,” with no substantive evidence that improved student outcomes followed.

Phillips said Google’s research is ongoing, but noted that one of its apps allows students to self-pace lessons. “Where they struggle, they can dive deeper and learn more and get more up-to-date,” he said. Among several unpublished findings, Phillips said, is one showing that students spend more time on topics they’re struggling with and end up learning those topics more deeply.

Culatta admitted that Google would of course like to see its products in the hands of teachers. But he said he and his colleagues “want to make sure that if there are products going to schools — and they already are — that they’re being used in ways that are really impactful.”

He added, “If it was going to just be, ‘Here’s how to use Gemini,’ Google actually doesn’t need us. We are coming in because Google is looking for somebody who can say, ‘What are really the best practices for learning with AI, not necessarily learning about AI?’”

Google’s Phillips said teachers and students “can choose other products in the market and so forth, but this program does come with using our products so that we can help teachers really get started, get going.” 

He noted a “super-generous free tier” to make the tools widely accessible, along with training to use them. “But schools, districts, teachers themselves have choice, and I think that’s perfectly fine, but we want to play a role with not just providing tools, giving people access, but actually helping them apply it and use it” to jumpstart “safe, appropriate use of AI.”

MIT’s Reich said his deeper concern is the near-total absence of evidence underlying AI professional development, whether it aims to teach educators how to use AI in their classrooms or simply how AI and large language models work.

“Literally no one on the planet understands how [AI] works,” he said. “The best computer scientists in the world cannot explain why LLMs generate plausible sounding text in a convincing theoretical framework.”

Reich recounted asking engineers at a Google DeepMind event in November whether they knew how to train junior engineers to use AI tools effectively in their work. “Every single person I talked to said, ‘No,’” he said. “If Google doesn’t know how to effectively use AI to write code, what is this business about teaching people AI literacy? We just don’t know.”

Benjamin Riley, a well-known AI skeptic and think tank founder, was more blunt, casting the Google partnership as part of an ongoing process making ISTE+ASCD a “shill” for Big Tech.

“I admit I’m fascinated to see the major Big Tech companies competing so vigorously to control ‘the education market,’” Riley said. “OpenAI is giving away their premium model to teachers (until they won’t), and now Google is doing whatever this is.”

In the past, Riley has questioned whether offering teachers and students skills such as “AI literacy” and “AI readiness” is effective, even as many others warn that such skills will be essential.

“I guess I’d credit their clairvoyance a tad more if ISTE+ASCD had not claimed, as recently as just a few years ago, that ‘the future’ would also demand that everyone [learn to code]. Oops!”

Riley predicted that much of the training will end up wasting teachers’ time, Google’s money and ISTE+ASCD’s relevance.

“Human beings have evolved to learn from each other in the context of our relationships. This is the superpower of our species, and the kids who’ve grown up in the past 20 years are increasingly disgusted by what tech has done to them personally, and society more broadly. They are not happy about the world we’ve given them, and their voices are growing ever louder.”

Culatta, for his part, said AI “is not going away. Does learning happen with people connected with each other? Sure. It’s not the only way learning happens, but it’s a very important way. And we actually think AI can help make those human-to-human learning experiences much better.”

AI Trailblazer Google Doesn’t Want Schools to ‘Bypass the Human’ (Mon, 02 Feb 2026)

In 1999, the Indian computer scientist and educational theorist Sugata Mitra created a small, if audacious, learning experiment: He and colleagues at the National Institute of Information Technology cut a hole in a street-level wall of their New Delhi office building and mounted an Internet-connected personal computer in it, usable by anyone who passed by. No instructions, no suggestions, no lesson plans. Just access.

Within hours, Mitra would later write, children from a nearby slum appeared “and glued themselves to the computer.” They learned how to use the mouse, download games and music, play videos and surf the Web, all by teaching themselves.

The experiment in what Mitra called “minimally invasive education” became a touchstone in the ed tech world, cited as evidence that children simply need access to tools to be successful.

Dr Sugata Mitra in front of his ‘hole in the wall’ experiment.

But don’t mention Mitra too enthusiastically to Ben Gomes, the computer scientist who co-leads Google’s education efforts. While the “hole in the wall” experiment is a hopeful, charming story, he’d say, it’s missing a key element: teachers.

People are fundamental in the learning process. People learn from other people, and people learn because of other people.

Ben Gomes, Google

“We are paying attention to pedagogy, and we’re working with the teachers,” he said. “We’re not saying we just want a thousand flowers to bloom randomly.”

As AI becomes more ubiquitous in schools, Gomes maintains that Google has a duty to train teachers not just in how to use its products but also in how to move students from taking shortcuts to using AI for deeper, often independent learning.

That strategy could dull longstanding complaints that ed tech more broadly is focused on replacing teachers with tech tools.

“It’s a belief backed by science, to a large extent, that people are fundamental in the learning process,” Gomes said, “that people learn from other people, and people learn because of other people.”

Children certainly can and do learn independently, but deep conceptual understanding and literacy require guidance — especially now, nearly three decades after Mitra’s hole in the wall, with many developers looking for ways to replace teachers with AI.

“Teachers are critical in this process,” Gomes said. “We don’t want to bypass the human.”

AI as ‘thought partner’

In a recent paper, Gomes and a handful of colleagues explored how AI could reverse declining global learning, largely through supporting teachers and turbocharging personalization. In mid-January, Google announced a broader push on AI in the classroom, offering its AI-driven Gemini app to more educators and students for free, making additional tools available and partnering with Khan Academy to power a writing coach tool.

The search giant has put a former NASA trainer in charge of much of the effort. Julia Wilkowski, a neuroscientist, has also taught sixth-grade math and science. She began her career at an outdoor environmental school, where she recalled hiking trips in which she’d ask students to figure out the velocity of a stream using only an orange, a length of string and a stopwatch.

Wilkowski now spends “pretty much 100% of my time” focused on ensuring that Google’s AI for students rests on sound learning science.

In interviews over the past few weeks, Gomes and Wilkowski spoke openly about their work, in several instances admitting that much of it amounts to helping teachers find ways to get students to stop outsourcing their thinking.

“Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don’t bypass those critical thinking skills,” said Wilkowski.

As an example, she said, she has worked with English teachers to help them instruct students on how to use AI as “a thought partner” in essay writing, not as the writer itself.

These teachers, she said, have succeeded by breaking down essay writing into its component parts and openly discussing its goals. They use AI to help students brainstorm essay topics, refine thesis statements, help generate first drafts and offer feedback on them, giving students “guidance and guardrails” without allowing them to turn in AI-written essays.

The work, stretching back a year and a half, “has really informed my optimism about how AI can be used successfully,” she said.

Guided learning

Both Wilkowski and Gomes spoke often of “guided learning,” saying students learn best when they move beyond simple answers to develop their own ideas and think critically. To get them to do so, teachers must guide them with carefully designed questions.

There's no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors.

Amanda Bickerstaff, AI for Education

Perhaps unsurprisingly, Google has a tool for that: Guided Learning, a section of Gemini that acts much like a private tutor or guide, offering students a taste of “productive struggle” that engages but also challenges them without offering answers (at least not immediately). Rather, it steers them to the answer through a series of questions.

Gomes said the principle is working its way into most of Google’s AI products, including a newer one called Learn Your Way, which uses the technology to help students learn topics in interactive, more appealing ways most textbooks can’t: as a text with quizzes, a narrated slideshow, an audio lesson and a “mind map” that lays out related ideas in connected graphics.

At its root, Gomes said, the dilemma over AI and cheating stems from motivation. “If I look back at my own childhood, there are certainly cases where I was just interested in getting something done for tomorrow,” he said. “And there are other cases where I was curious and I wanted to read more.”

The ratio between how much time students spend in one state vs. the other varies, he said, “but getting more people into the state where they are motivated, I think, is the goal.”

But Amanda Bickerstaff, co-founder and CEO of , a training and policy organization, said the reasons students turn to AI are “far more complicated than lack of motivation.” 

Students are dealing with “perfectionism, high-stakes assessments that prioritize grades, skill and language gaps,” among other dilemmas. “Framing this primarily as a motivation issue oversimplifies what’s actually happening in classrooms.”

She said Google’s shift toward Socratic reasoning “sounds promising, but there’s a fundamental problem: There’s no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors.”

The chatbots are “sycophantic by nature,” Bickerstaff said, offering answers and completing tasks even when not explicitly asked to. “That’s the opposite of productive struggle.”

And most young people, she said, don’t have sufficient AI literacy to use these tools strategically. “Without that foundation, chatbots become [a shortcut] for schoolwork rather than a learning tool. You can’t solve that problem through interface design alone.”

More, better feedback

For her part, Wilkowski said much of the struggle over AI comes down to feedback: How much should students get, how often, and what should it look like?

Wilkowski said her daughter is in high school and was required to write an essay for a final exam in December. When Wilkowski spoke to The 74 in early January, she said the essay still hadn’t been graded. 

“I would rather have AI-generated feedback,” she said. “Give the first draft, and then the teacher [can] review it, of course, before giving it to the students.”

Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don't bypass those critical thinking skills.

Julia Wilkowski, Google

More broadly, she said, AI could soon change how students are assessed altogether, helping teachers move away from tools such as multiple-choice tests, whose tradeoffs are well-known in the testing world: They’re easy to create, administer and grade, and they’re reliable. But they also allow students to guess rather than show understanding, and they encourage students to learn by rote memorization rather than deeper engagement with the material.

Multiple-choice tests also can’t evaluate higher-order thinking skills, creativity, student writing or the ability to construct arguments. If AI can make essays or long-form questions or even projects easier to grade, wouldn’t that put the multiple-choice test out of business?

“Let’s say you’re in physics class and you’re studying acceleration-versus-time graphs and you ride your bike home,” Wilkowski said. “An AI tool might pop up and say, ‘Hey, here’s your acceleration-versus-time graph of your bike ride home. What did you notice about your velocity? How did it change as you changed acceleration? Was there a hill that you had to overcome?’” 

More relevant assignments and assessments, she said, could get students to think more critically, incorporating school into their real life in deeper ways. “It goes back to the heart of what excited me as a teacher: those excited, hands-on lessons. I’m seeing a way that … AI can facilitate those in the future.”

AI for Education’s Bickerstaff said it’s encouraging to see Google working to create more “fit-for-purpose tools” for student use. 

“The education sector desperately needs companies to move beyond general-purpose chatbots and build tools that actually support cognitive work rather than replace it,” she said. “But there’s still a lot of work to do — and a lot of research that needs to happen — before we can know if these tools are effective learning guides.”

AI Tutors, With a Little Human Help, Offer ‘Reliable’ Instruction, Study Finds (Wed, 03 Dec 2025)

An AI-powered tutor, paired with a human helper and individual-level data on a student’s proficiency, can outperform a human alone, with near-flawless results, a new study suggests.

The results could open a new front in the evolving discussion over how to use AI in schools — and how closely humans must watch it when it’s interacting with kids.



In a study involving 165 British secondary school students, ages 13–15, the ed-tech startup Eedi put a small group of expert human tutors in charge of a large language model, or LLM, offered by Google. As it tutored students on math problems via Eedi’s platform, it drafted replies when students needed help. Before the messages went out, the human tutors got a chance to revise each one to the point where they’d feel comfortable sending it themselves.

Students didn’t know whether they were talking to a human or a chatbot, but they had longer conversations, on average, with the “supervised” AI/human combination than simply with a human tutor, said Bibi Groot, Eedi’s chief impact officer. 

In the end, students using the supervised AI tutor performed slightly better than those who chatted online via text with human tutors — they were able to solve new kinds of problems on subsequent topics successfully 66.2% of the time, compared to 60.7% with human tutors.

The AI, researchers concluded, was “a reliable source” of instruction. Human tutors approved about three out of four drafted messages with few to no edits.

Students who got both human and AI tutoring were able to correct misconceptions and offer correct answers over 90% of the time, compared to just 65% of the time when they got a “static, pre-written” response to their questions.

And the AI only “hallucinated,” or offered factual errors, 0.1% of the time — in 3,617 messages, that amounted to just five hallucinations. It didn’t produce any messages that gave the tutors pause over safety.
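The reported error rate is easy to check against the message counts the article cites; a minimal sketch in Python, using only the figures quoted above (the study itself remains the authoritative source):

```python
# Sanity check on the study's reported hallucination rate,
# using the counts quoted in the article.
messages = 3_617     # AI-drafted tutoring messages reviewed by human tutors
hallucinations = 5   # messages found to contain factual errors

rate = hallucinations / messages
print(f"hallucination rate: {rate:.2%}")  # prints "hallucination rate: 0.14%"
```

Strictly, five errors in 3,617 messages is about 0.14%, which the article rounds to 0.1%.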

The results suggest that “pedagogically fine-tuned” AI could play a role in delivering effective, individualized tutoring at scale, researchers said.

The key to the AI’s success, said Groot, was that researchers gave it access to detailed, “extremely personalized” information about what topics students had covered over the previous 20 weeks. That included the topics they’d struggled with and those they’d mastered. 

“We know what topics they’re covering in the next 20 weeks — we know the curriculum. We know the other students in the classroom. We know whether they’re putting effort into their questions. We know whether they’re watching videos or not — we know so much about the student without passing any personally identifiable information to the AI.”

Bibi Groot

That guided the AI’s strategy about whether students needed an extra push or just more support — something an “out-of-the-box, vanilla LLM” can’t do, she said.

“They don’t know anything about what the teacher is teaching in the classroom,” Groot said. “They don’t know what misconceptions or what topics the students are struggling with and what they’ve already mastered, so they’re not able to dynamically change how they address the topic, as a human tutor would.”

Human tutors, she said, generally have “a really good sense of where the student struggles, because they have some sort of ongoing relation with a student most of the time. An LLM tutor generally doesn’t.”

All the same, even master tutors typically don’t go into a session knowing a student’s comprehensive history in a course, including their misconceptions about the material. “All of that is too much information for a human tutor to read up on and deal with while they’re having one conversation” with a student, Groot said.

And they’re under pressure to respond quickly “so that the student is not left waiting. And that’s quite an intensive experience for tutors that leads to a bit of cognitive overload,” she said. The AI doesn’t suffer from that. “It needs less than a millisecond to read all of those contexts and come up with that first question.”

Even with their personal connection to students, human tutors can’t be available 24/7. Groot said Eedi employs about 25 tutors across several time zones who are available to students from 9 a.m. to 10 p.m. every day, but to give students broader access would require hiring “an army of tutors,” she said.

The new findings could encourage schools to use AI as a kind of “front line” tutor, with humans intervening when a student is “derailing the conversation, or they have such a persistent misconception that the AI can’t deal with it,” said Groot. “We think that would be an interesting way to collaborate between the AI and the human, because there is still a really important role for a human tutor. But our human tutors just cannot have conversations with thousands of students at once.”

The new study, published last week on Eedi’s site and scheduled to appear in a peer-reviewed journal next year, differed in one important way from recent studies of AI tutoring. Researchers in October 2024 examined AI-assisted human tutoring, in which tutors primarily drove the conversation. But in that case, the AI acted as a kind of assistant, providing suggestions behind the scenes. In the Eedi study, it was the other way around, with AI driving the conversation and humans overseeing it.

Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, said the study is important in and of itself, but also in the context of broader findings elsewhere suggesting that, with proper training and guidance, “AI can be an incredibly powerful tool — and certainly has a potential to take tutoring to scale in ways that we’ve never seen before.”

Under controlled circumstances, she said, it’s also “outperforming humans — that’s really important.”

AI can be an incredibly powerful tool — and certainly has a potential to take tutoring to scale in ways that we've never seen before.

Robin Lake, Center on Reinventing Public Education

Lake noted a study from Harvard researchers that examined results from 194 undergraduates in a large physics class. They presented identical material in class and via an AI tutor and found that students learned “significantly more in less time” using the tutor. They also felt more engaged and motivated about the material.

Liz Cohen, vice president of policy for 50CAN and author of a recent book, said the study provides “valuable evidence” about new kinds of tutoring.

But one of its limitations, she said, is that it relied on 13-to-15-year-olds. “So immediately I have a lot of questions about if the findings are applicable for younger students, especially using a chat-based model,” which may not be a good fit for such students.

I still mostly think that entirely AI tutoring programs are biased towards students who want to do the work or are interested in learning.

Liz Cohen, 50CAN

She also noted the many open questions around student persistence with AI tutors, including what happens when students get frustrated or aren’t sufficiently engaged in the work.

“I still mostly think that entirely AI tutoring programs are biased towards students who want to do the work or are interested in learning,” Cohen said, “and it’s pretty easy to see that students who aren’t bought in or are frustrated are going to give up more readily with an AI tutor.”

She noted that her 12-year-old daughter has experienced problems persisting in an AI-powered math tutoring program. “She gets frustrated if she can’t get the answer and then she doesn’t want to do it anymore, so I think we need to figure out that piece of it.”

Will New AI Academy Help Teachers or Just Improve Tech’s Bottom Line? /article/will-new-ai-academy-help-teachers-or-just-improve-techs-bottom-line/ Mon, 04 Aug 2025 10:30:00 +0000 /?post_type=article&p=1018966 Washington, D.C. 

Mariely Sanchez spent the last school year using generative artificial intelligence nearly every day in her classroom.

The Miami fourth-grade teacher began each morning by asking a chatbot — teachers in Miami-Dade have access not only to ChatGPT, but to Google’s Gemini and Microsoft’s Copilot — to comb through Florida state standards and create reading passages for students. She’d also ask the AI to produce multiple-choice and short-response quizzes to test how well students understood the reading.




The assignments, she said, weren’t easy for students. She built them by using “difficult standards that students need more practice with” and prompting the AI to create materials.

Sanchez is spending her summer break learning more about AI, including its ethics, and helping colleagues do the same, warning:

We know it's not going to go away — it's here to stay, but we want to make sure we use it the right way.

Mariely Sanchez, fourth grade teacher

That effort got a big boost earlier last month, when the American Federation of Teachers announced that it would open an AI training center for educators in New York City, with $23 million in funding from OpenAI, Anthropic and Microsoft, three of the leading players in the generative AI marketplace.

AFT says it’ll open the National Academy for AI Instruction in Manhattan this fall, offering hands-on workshops for teachers. Over five years, it said, the academy will train 400,000 educators, or one in 10 U.S. teachers, effectively reaching the more than 7.2 million students they teach. 

When she announced the academy in early July, AFT President Randi Weingarten said teachers face “huge challenges,” including navigating AI wisely, ethically and safely. “The question was whether we would be chasing it — or whether we would be trying to harness it.”

‘It’s the Wild West’

AFT, the nation’s second-largest teachers’ union, envisions the academy working much like those that train carpenters, electricians and construction workers, “where the companies, where the corporations actually come to the union to create the kind of standards that are needed” for success, Weingarten said.

Microsoft, for example, has said it plans to give more than $4 billion in cash and technology services to train millions of people to use AI, underwriting efforts at schools, community colleges, technical colleges and nonprofits. The tech giant already runs an AI training program for members of the larger AFL-CIO labor union, of which AFT is a member. And it’s creating a new training program to help 20 million people earn certificates in AI.

Rob Weil — AFT’s director of research, policy and field programs — said the new academy will bring high-quality training to a profession that so far has seen uneven opportunity for it.

“It’s the Wild West,” he said in an interview during a training session at the union’s annual conference in July. “It’s all over the place. You have some school districts that are out front, and they’re doing a lot of pretty good work.” But others are banning AI or simply ignoring it, he said, leaving teachers to fend for themselves at a time when students need them perhaps more than ever.

“We have to make our instruction better. We have to be better on engagement. We have a crisis of engagement in our schools, and these tools can help with that.”

AFT’s move has been met with equal parts cautious optimism and weary skepticism.

Writing in her newsletter, ed-tech critic and AI skeptic Audrey Watters called AFT’s partnership with the tech companies “a gigantic public experiment that no one has asked for.”

Unions, she wrote, “should be one of the ways in which workers resist, rather than acquiesce to … the tech industry’s vision of the future.” By joining forces with big tech, she said, AFT is implicitly endorsing its products. “Teaching teachers how to use a suite of Microsoft tools does not help students as much as it helps Microsoft. Teaching teachers how to use a suite of Microsoft tools is not so much an ‘academy’ as a storefront.”

Benjamin Riley, who has also written about generative AI in education, said observers should “100% worry” that the new partnerships represent a play for market share.

“It’s very obvious from a product standpoint that they see education as one of, if not the primary, place to go with their product,” said Riley. “And the fact that AFT is willing to say, ‘Cool, let’s get some of that money and we’ll build a training center to help teachers use it,’ I can see why OpenAI would jump all over that.”

But he questioned whether AI training is what AFT members really want. He suggested instead that the union should recommit to helping teachers more deeply understand how learning works. “They haven’t been opposed to it,” he said, noting that the union has long run a column in the magazine it mails to members. “But in reality it just hasn’t been a priority. Improving pedagogy hasn’t really been, to my eyes, a union priority for a long time.”

Riley, who in 2024 founded a think tank to explore AI issues, said an organization like AFT should ideally be thinking about whether embracing AI will lead to better outcomes for children — or whether it could “potentially erode and devalue the work of human teaching” while opening up schools as customers for AI companies.

Representatives of OpenAI and Anthropic did not immediately respond to requests for comment, but in an email, Microsoft’s Naria Santa Lucia said, “This isn’t about Microsoft’s technology, our focus is on making AI broadly accessible, so everyone has a fair shot at the future. If we collectively get this right, AI becomes a bridge to opportunity — not a barrier.”

During the academy’s unveiling, Chris Lehane, OpenAI’s chief global affairs officer, said AI technology “is coming — it is going to drive productivity gains. Can we ensure that those productivity gains are democratized so as many people as possible participate in them? And there is no better place to begin that work than in the classroom.”

OpenAI has noted that many of its users are students. In February, it said that a large share of college-aged young adults in the U.S. use ChatGPT, with one in four of their queries related to learning and school work.

While a few observers said the tech giants are making a play for market share among the nation’s K-12 students, they noted that the companies are also filling an important role. 

“It’s welcome news that technology companies are bidding against each other — to outdo each other — to invest in public education,” said Zarek Drozda, executive director of a coalition of groups advancing data science education. “I think that’s exciting at a time when federal investment in education is uncertain. Seeing industry step up is quite meaningful.”

But he said he’s concerned that the training might stop short after teaching teachers — and by extension students — simply how to use AI. “Training needs to go beyond use,” he said. “If we want to train a generation of students to be AI-ready, internationally competitive, they have to understand how these tools work under the hood, when and why the tool might be wrong, and how they can customize LLMs [Large Language Models] or other models for their own pursuits, versus simply taking what’s given.”

He’s also concerned that the AFT has laid out a vision spanning just five years. “We want there to be a deep investment in upskilling teachers for the skills that they will need to adapt to, not just AI, but what is the AI model five years from now?” he said. “What is the next emerging technology that the field should be ready to adapt to?”

More than just a commitment to training, Drozda said, the union and its partners should commit to a long-term sustainability plan for teacher training to attract new, young career professionals to the field.

Ami Turner Del Aguila (left, standing) coaches Melina Espiritu-Azocar (center) and Monique Boone during a recent AI training sponsored by the American Federation of Teachers. Both former teachers, Espiritu-Azocar and Boone now lead local AFT chapters in Texas. (Greg Toppo)

Alex Kotran, founder and CEO of an AI education nonprofit, agreed that investing in teacher training is worthwhile. “That’s a very big rock that needs to be moved.” But the reported $23 million commitment from the three tech giants “is a bit of a drop in the bucket” considering their valuations, “symbolic at best.”

That said, AFT’s involvement could make the training more palatable for many school district leaders, he noted, since one of the uncertainties in training efforts typically is whether unions will allow members to attend under contract rules. By taking the lead in developing the training academy, “the unions have planted a flag and said, ‘PD [professional development] is important.’”

All the same, tech companies are in the business of selling their products, making them imperfect messengers for AI literacy, he said. “They’re deeply incentivized on one side, and it isn’t necessarily for the benefit of students.” 

Other industry watchers fear the partnership could be viewed as a high-profile bid for market share at a critical time in the AI industry’s history. 

“This is a land-grab moment,” said Alex Sarlin, co-host of an ed-tech podcast. “I mean, this technology is only three years old. There are already three or four major players in it, if you don’t count China, and they all want to be the one left standing.”

For its part, Google has said its suite of Gemini educational AI tools would be available for free to all educators with Google Workspace for Education accounts.

While it was the only major player not included in the AFT announcement, Sarlin said Google is, in some ways, “playing the incumbent in this because in K-12, they’re already there.” Given the dominance of Chromebook laptops and its classroom-management and productivity tools, the search giant is “embedded in K-12,” he said. “OpenAI and Anthropic, they’re basically consumer products that are being used by teachers.”

‘Oh yeah, what could go wrong?’

Matt Miller, an Indiana high school Spanish teacher, educational consultant and author, said his colleagues are hungry for high-quality, classroom-tested training, but that what they often get from AI companies is over-the-top talk about “how much the world is going to change and how we’re revolutionizing education,” with promises to help teachers work more efficiently.

Trainings typically skim over the fact that most students are simply using generative AI for “cognitive offloading,” Miller said, avoiding critical thinking and skill development “and letting AI do it for them.” Many teachers, meanwhile, are searching for ways to “AI-proof” their classrooms.

The sessions typically all end the same way, he said: “It all sort of funnels back to their product.” 

Miller, whose latest book was published in 2023, said the AFT/OpenAI/Anthropic partnership “scares the crap out of me.”

“Whenever you get that marriage between an organization and big companies, we just keep asking ourselves, ‘Oh, yeah, what could go wrong?’”

Money means influence, Miller said, so will the curriculum be “tool-agnostic? Is it going to be about the technology? Is it going to be about pedagogy? Or is it going to be a customized tutorial of how you can use our tool to do X, Y and Z?”

AFT’s Weil said those concerns are understandable but short-sighted. AI developers, he said, “don’t get to engage with us if you’re not going to be agnostic about the tools.” The academy’s directors talk openly to the developers “about how we have to have a practical, real relationship. This can’t be about product selling.”

More broadly, the partnerships are a way to exert influence upon how AI operates in schools and classrooms.

The only way we have a profession is if we control the profession.

Rob Weil, AFT’s director of research, policy and field programs

During the academy’s unveiling, Weingarten said its lessons will be “as open-source as possible,” not just for the union’s 1.8 million members but more broadly through its free platform.

For his part, Weil said AI is “not going to go away. Nobody’s going to put AI back in the bottle. It’s here. The young people, for them to be successful in their jobs in the future, are going to have to know how to effectively and efficiently and safely use these tools. So why wouldn’t the education system help with that process?”

That’s likely the message that union leaders have been getting from members, said Sarlin, the podcast co-host. “There was probably a moment a couple years ago where they were sort of teetering, where they could have gone anti-AI,” he said. “But I think at this point that’s not where the puck is headed.”

University of Nebraska-Google Career Certificates Partnership Opens This Week /article/university-of-nebraska-google-career-certificates-partnership-opens-this-week/ Tue, 18 Jun 2024 16:30:00 +0000 /?post_type=article&p=728646 This article was originally published in the Nebraska Examiner.

Enrollment opens this week for the University of Nebraska’s new partnership to offer Google Career Certificates in a variety of fields.

Beginning Wednesday, June 19, NU students, alumni and Nebraskans at large can begin to register for a variety of self-paced, noncredit courses. Interim NU President Chris Kabourek said that since announcing the partnership in April, with little marketing, more than 1,000 people had already pre-registered.

Melissa Lee, NU’s chief communication officer, said 1,247 people had registered as of Friday. Of those registrants, 20% are current NU students and 40% are alumni, meaning hundreds of Nebraskans who might have no connections to NU are interested in more education.




“I just think it solidifies what we thought, that Nebraskans are yearning for more skill sets and more education,” Kabourek told the Nebraska Examiner.

An email sent the week of June 10 to pre-registrants from Ana Lopez Shalla, lead for NU’s microcredentials, told them they were “helping to drive impact not only in your own career, but in our regional workforce, too.”

The Google Career Certificates will be offered in three cycles in the next year, with 2,500 seats available for each session. They begin in August, December and April. Enrollment will be open through July 31; courses in the first session will begin the next day.

In April, NU announced a special first-year rate of $20 per enrollment.

Kabourek said at the time that the partnership is designed for opportunities, not revenue, and that funds would be used to cover costs and any associated technological needs.

The following certificates will be offered:

  • Cybersecurity
  • IT support
  • Data analytics
  • Digital marketing and e-commerce
  • Project management
  • User experience (UX) design
  • IT automation with Python
  • Advanced data analytics
  • Business intelligence

Kabourek, who will return to his sole role as NU’s chief financial officer come July 1, said one of his priorities as interim president has been to help the university reconnect with Nebraskans, which will include getting out to visit high schools in the fall.

As a rural Nebraskan from David City, Kabourek said, he knows every Nebraskan can find a place within NU.

“We never want your ability to go get your education or develop your skill sets or enhance your resume to be limited by your family situation or your location,” Kabourek said.

Nebraska Examiner is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Nebraska Examiner maintains editorial independence. Contact Editor Cate Folsom with questions: info@nebraskaexaminer.com.

NU, Google to Offer Career Certificates to Students, Alumni and All Nebraskans /article/nu-google-to-offer-career-certificates-to-students-alumni-and-all-nebraskans/ Thu, 18 Apr 2024 15:30:00 +0000 /?post_type=article&p=725540 This article was originally published in the Nebraska Examiner.

LINCOLN — The University of Nebraska and Google are entering a new partnership designed to further Nebraskans’ education and support state workforce needs.

Interim NU President Chris Kabourek announced Tuesday that the university will soon offer Google Career Certificates in a variety of fields. Registration is open now on a first-come, first-served basis, and courses will begin with the 2024-25 academic year. Three cycles will be offered — in August, December and April — with 2,500 seats available in each.

Kabourek said “it’s a win” when more education is brought directly to Nebraskans and students.




“As a native of rural Nebraska myself, I believe strongly that every Nebraskan should have access to quality, affordable educational opportunities no matter where they live or what their personal circumstances are,” Kabourek said in a statement.

The goal of the partnership is to provide opportunity, not make money, Kabourek added in a text. NU will retain all revenue raised through enrollment in the certificates, which will cover administrative costs and any associated technological needs.

Learn at their own pace

Google experts teach the programs, which are vetted by leading employers. NU students, alumni and Nebraska residents can get a special first-year rate of $20 per enrollment.

Students learn at their own pace over three to six months of part-time study in multiple courses:

  • Cybersecurity
  • IT support
  • Data analytics
  • Digital marketing and e-commerce
  • Project management
  • User experience (UX) design

Advanced certifications are also available, tailored for learners with multiple years of experience or as a next step after completing an entry-level certificate:

  • IT automation with Python
  • Advanced data analytics
  • Business intelligence

U.S. Rep. Mike Flood, R-Neb., endorsed the partnership as providing affordable access to education and as “yet another pathway for Nebraskans to pursue their dreams and expand their career horizons.” He said he looks forward to seeing the positive impact it will have.

“Developing Nebraskans to take the jobs of the future is one of the cornerstones of growing Nebraska’s economy,” Flood said in a statement.

A 2023 report from the American Association of Colleges and Universities found that employers are generally in strong support of these “microcredentials.” In the report, two-thirds said they would prefer college graduates with microcredentials for entry-level positions.

More than 250,000 people in the United States have earned a Google certificate, 75% of whom had a positive career impact, such as a new job, promotion or raise, according to Google.

“We’re committed to investing in Nebraskans to ensure that they have the tech and other job ready skills to enter the workforce and reach their full economic potential,” said Lisa Gevelber, founder of Grow with Google.

More postsecondary credentials

Kabourek said the new partnership advances a 2022 legislative goal, which NU supported, to increase the percentage of Nebraskans with postsecondary credentials to 70% by 2030.

State Sen. Lynne Walz of Fremont, who was then chair of the Legislature’s Education Committee, shepherded the 2022 legislation through the Legislature.

Tim Jares, dean of the University of Nebraska at Kearney’s College of Business and Technology, described the new partnership as “terrific” and said it adds to the work faculty are doing to help students and alumni “amplify their marketability.”

“From our perspective, the more opportunities for education we provide, the better,” Jares said. “I’m proud that the University of Nebraska is playing a leadership role in creating access for Nebraskans and growing a skilled workforce for our state.”

Other leading U.S. institutions already offer career certificates, including Syracuse University, the University of Texas system and two fellow Big Ten members — the University of California-Los Angeles and Rutgers.


A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor /article/a-cautionary-ai-tale-why-ibms-dazzling-watson-supercomputer-made-a-lousy-tutor/ Tue, 09 Apr 2024 13:30:00 +0000 /?post_type=article&p=724698

With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits. 

In the annals of artificial intelligence, Feb. 16, 2011, was a watershed moment.

That day, IBM’s Watson supercomputer finished off a three-game shellacking of Jeopardy! champions Ken Jennings and Brad Rutter. Trailing by over $30,000, Jennings, now the show’s host, wrote out his Final Jeopardy answer in mock resignation: “I, for one, welcome our computer overlords.”

A lark to some, the experience galvanized Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, New York. Tasked with figuring out how to apply the supercomputer’s powers to education, he soon envisioned tackling ed tech’s most sought-after challenge: the world’s first tutoring system driven by artificial intelligence. It would offer truly personalized instruction to any child with a laptop — no human required.


“I felt that they’re ready to do something very grand in the space,” he said in an interview. 

Nitta persuaded his bosses to throw more than $100 million at the effort, bringing together 130 technologists, including 30 to 40 Ph.D.s, across research labs on four continents. 

But by 2017, the tutoring moonshot was essentially dead, and Nitta had concluded that effective, long-term, one-on-one tutoring is “a terrible use of AI — and that remains today.”

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

Some of the leading lights of ed tech are trying to pick up where Watson left off, offering AI tools that promise to help teach students. Sal Khan, founder of Khan Academy, last year said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give “every student on the planet an artificially intelligent but amazing personal tutor.”

A 25-year journey

To be sure, research on high-dosage, one-on-one, in-person tutoring is unambiguous: It’s among the most effective interventions available, offering significant improvement in students’ academic performance, particularly in subjects like math, reading and writing.

But traditional tutoring is also “breathtakingly expensive and hard to scale,” said Paige Johnson, a vice president of education at Microsoft. One school district in West Texas, for example, recently tapped federal pandemic relief funds to tutor 6,000 students. The expense, Johnson said, puts it out of reach for most parents and school districts.

We missed something important. At the heart of education, at the heart of any learning, is engagement.

Satya Nitta, IBM Research’s former global head of AI solutions for learning

For IBM, the opportunity to rebalance the equation in kids’ favor was hard to resist. 

The Watson lab is legendary in the computer science field, with Nobel laureates and six Turing Award winners among its ranks. It’s home to countless innovations, including barcodes and the magnetic stripes on credit cards. It’s also where, in 1997, Deep Blue beat Garry Kasparov, essentially inventing the notion that AI could “think” like a person.

Chess enthusiasts watch World Chess champion Garry Kasparov on a television monitor as he holds his head in his hands at the start of the sixth and final match May 11, 1997 against IBM’s Deep Blue computer in New York. Kasparov lost this match in just 19 moves. (Stan Honda/Getty)

The heady atmosphere, Nitta recalled, inspired “a very deep responsibility to do something significant and not something trivial.”

Within a few years of Watson’s victory, Nitta, who had arrived in 2000 as a chip technologist, rose to become IBM Research’s global head of AI solutions for learning. For the Watson project, he said, “I was just given a very open-ended responsibility: Take Watson and do something with it in education.”

Nitta spent a year simply reading up on how learning works. He studied cognitive science, neuroscience and the decades-long history of “intelligent tutoring systems” in academia. Foremost in his reading list was the research of Stanford neuroscientist Vinod Menon, who’d put elementary schoolers through a 12-week math tutoring session, collecting before-and-after scans of their brains using an MRI. Tutoring, he found, produced nothing less than an increase in neural connectivity. 

Nitta returned to his bosses with the idea of an AI-powered cognitive tutor. “There’s something I can do here that’s very compelling,” he recalled saying, “that can broadly transform learning itself. But it’s a 25-year journey. It’s not a two-, three-, four-year journey.”

IBM drafted two of the highest-profile partners possible in education: the children’s media powerhouse Sesame Workshop and Pearson, the international publisher.

One product envisioned was a voice-activated Elmo doll that would serve as a kind of digital tutoring companion, interacting fully with children. Through brief conversations, it would assess their skills and provide spoken responses to help kids advance.

One proposed application of IBM’s planned Watson tutoring app was to create a voice-activated Elmo doll that would be an interactive digital companion. (Getty)

Meanwhile, Pearson promised that it could soon allow college students to “dialogue with Watson in real time.”

Nitta’s team began designing lessons and putting them in front of students — both in classrooms and in the lab. In order to nurture a back-and-forth between student and machine, they didn’t simply present kids with multiple-choice questions, instead asking them to write responses in their own words.

It didn’t go well.

Some students engaged with the chatbot, Nitta said. “Other students were just saying, ‘IDK’ [I don’t know]. So they simply weren’t responding.” Even those who did began giving shorter and shorter answers. 

Nitta and his team concluded that a cold reality lay at the heart of the problem: For all its power, Watson was not very engaging. Perhaps as a result, it also showed “little to no discernible impact” on learning. It wasn’t just dull; it was ineffective.

Satya Nitta (left) and part of his team at IBM’s Watson Research Center, which spent five years trying to create an AI-powered interactive tutor using the Watson supercomputer.

“Human conversation is very rich,” he said. “In the back and forth between two people, I’m watching the evolution of your own worldview.” The tutor influences the student — and vice versa. “There’s this very shared understanding of the evolution of discourse that’s very profound, actually. I just don’t know how you can do that with a soulless bot. And I’m a guy who works in AI.”

When students’ usage time dropped, “we had to be very honest about that,” Nitta said. “And so we basically started saying, ‘OK, I don’t think this is actually correct. I don’t think this idea — that an intelligent tutoring system will tutor all kids, everywhere, all the time — is correct.’”

‘We missed something important’

IBM soon switched gears, debuting another crowd-pleasing Watson variation — this time, a touching throwback: It engaged in live, public debates. In a televised demonstration in 2019, it went up against debate champ Harish Natarajan on the topic “Should we subsidize preschools?” Among its arguments for funding, the supercomputer offered, without a whiff of irony, that good preschools can prevent “future crime.” Its current iteration, watsonx, focuses on helping businesses build AI applications like “intelligent customer care.”

Nitta left IBM, eventually taking several colleagues with him to create a startup called Merlyn Mind. It uses voice-activated AI to safely help teachers do workaday tasks such as updating digital gradebooks, opening PowerPoint presentations and emailing students and parents.

Thirteen years after Watson’s stratospheric Jeopardy! victory and more than one year into the Age of ChatGPT, Nitta’s expectations about AI couldn’t be more down-to-earth: His AI powers what’s basically “a carefully designed assistant” to fit into the flow of a teacher’s day. 

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

These notions aren’t news to those who do tutoring for a living. Varsity Tutors, which offers live and online tutoring in 500 school districts, relies on AI to power a lesson plan creator that helps personalize instruction. But when it comes to the actual tutoring, humans deliver it, said Anthony Salcito, chief institution officer at Nerdy, which operates Varsity Tutors.

”The AI isn’t far enough along yet to do things like facial recognition and understanding of student focus,” said Salcito, who spent 15 years at Microsoft, most of them as vice president of worldwide education. “One of the things that we hear from teachers is that the students love their tutors. I’m not sure we’re at a point where students are going to love an AI agent.”

Students love their tutors. I'm not sure we're at a point where students are going to love an AI agent.

Anthony Salcito, Nerdy

The No. 1 factor in a student’s tutoring success, research suggests, is showing up consistently. As smart and efficient as an AI chatbot might be, it’s an open question whether most students, especially struggling ones, would show up for an inanimate agent or develop a sense of respect for its time.

When Salcito thinks about what AI bots now do in education, he’s not impressed. Most, he said, “aren’t going far enough to really rethink how learning can take place.” They end up simply as fast, spiffed-up search engines. 

In most cases, he said, the power of one-on-one, in-person tutoring often emerges as students begin to develop more honesty about their abilities, advocate for themselves and, in a word, demand more of school. “In the classroom, a student may say they understand a problem. But they come clean to the tutor, where they expose, ‘Hey, I need help.’”

Cognitive science suggests that for students who aren’t motivated or who are uncertain about a topic, only live, attentive tutoring will help. That requires a focused, caring human, watching carefully, asking tons of questions and reading students’ cues. 

Jeremy Roschelle, a learning scientist and an executive director of Digital Promise, a federally funded research center, said usage with most ed tech products tends to drop off. “Kids get a little bored with it. It’s not unique to tutors. There’s a newness factor for students. They want the next new thing.” 

There's a newness factor for students. They want the next new thing.

Jeremy Roschelle, Digital Promise

Even now, Nitta points out, research shows that big commercial AI applications don’t seem to hold users’ attention as well as top entertainment and social media sites like YouTube, Instagram and TikTok. One analysis dubbed the user engagement of sites like ChatGPT “lackluster,” finding that the proportion of monthly active users who engage with them in a single day was only about 14%, suggesting that such sites aren’t very “sticky” for most users.

For social media sites, by contrast, it’s between 60% and 65%. 

One notable AI exception: Character.ai, an app that allows users to create companions of their own among figures from history and fiction and chat with the likes of Socrates and Bart Simpson. It has a stickiness score of 41%.
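The engagement comparison above is a standard “stickiness” ratio — daily active users divided by monthly active users. Here is a minimal sketch of the arithmetic, using illustrative figures only (not the underlying data from the analysis):

```python
def stickiness(daily_active_users: int, monthly_active_users: int) -> float:
    """DAU/MAU ratio: the share of a month's users who show up on a given day."""
    return daily_active_users / monthly_active_users

# Illustrative numbers only, chosen to match the percentages quoted in the text.
print(f"chatbot-like:      {stickiness(14, 100):.0%}")   # → 14%
print(f"social-media-like: {stickiness(63, 100):.0%}")   # → 63%
```

By this measure, a site users visit almost daily scores near 100%, while one they check a few times a month scores far lower.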

As startups offer “your child’s superhuman tutor” starting at $29 per month, and Khan Academy publicly tests its popular Khanmigo AI tool, Nitta maintains that there’s little evidence from learning science that, absent a strong outside motivation, people will spend enough time with a chatbot to master a topic.

“We are a very deeply social species,” said Nitta, “and we learn from each other.”

IBM declined to comment on its work in AI and education, as did Sesame Workshop. A Pearson spokesman said that since last fall it has been beta-testing AI study tools keyed to its e-textbooks, among other efforts, with plans this spring to expand the number of titles covered. 

Getting ‘unstuck’

IBM’s experiences notwithstanding, the search for an AI tutor has continued apace, this time with more players than just a legacy research lab in suburban New York. Using the latest affordances of so-called large language models, or LLMs, technologists at Khan Academy believe they are finally making the first halting steps in the direction of an effective AI tutor. 

Kristen DiCerbo remembers the moment her mind began to change about AI. 

It was September 2022, and she’d only been at Khan Academy for a year and a half when she and founder Sal Khan got access to a beta version of ChatGPT. OpenAI, ChatGPT’s creator, had asked Microsoft co-founder Bill Gates for more funding, but he told them not to come back until the chatbot could pass an Advanced Placement biology exam.

Khan Academy founder Sal Khan has said AI has the potential to bring “probably the biggest positive transformation” that education has ever seen. He wants to give every student an “artificially intelligent but amazing personal tutor.” (Getty)

So OpenAI queried Khan for sample AP biology questions. He and DiCerbo said they’d help in exchange for a peek at the bot — and a chance to work with the startup. They were among the first people outside of OpenAI to get their hands on GPT-4, the LLM that powers the upgraded version of ChatGPT. They were able to test out the AI and, in the process, become amateur AI prompt engineers before anyone had even heard of the term. 

Like many users typing in queries in those first heady days, the pair initially just marveled at the sophistication of the tool and its ability to return what felt, for all the world, like personalized answers. With DiCerbo working from her home in Phoenix and Khan from the nonprofit’s Silicon Valley office, they traded messages via Slack.

Kristen DiCerbo introduces users to Khanmigo in a Khan Academy promotional video. (YouTube)

“We spent a couple of days just going back and forth, Sal and I, going, ‘Oh my gosh, look what we did! Oh my gosh, look what it’s saying — this is crazy!’” she told an audience during a recent talk at the University of Notre Dame. 

She recounted asking the AI to help write a mystery story in which shoes go missing in an apartment complex. In the back of her mind, DiCerbo said, she planned to make a dog the shoe thief, but didn’t reveal that to ChatGPT. “I started writing it, and it did the reveal,” she recalled. “It knew that I was thinking it was going to be a dog that did this, from just the little clues I was planting along the way.”

More tellingly, it seemed to do something Watson never could: have engaging conversations with students.

DiCerbo recounted a conversation with a high school student they were working with, who told them about an interaction she’d had with ChatGPT around The Great Gatsby. She asked it about F. Scott Fitzgerald’s famous green light, which scholars have long interpreted as symbolizing Jay Gatsby’s out-of-reach hopes and dreams.

“It comes back to her and asks, ‘Do you have hopes and dreams just out of reach?’” DiCerbo recalled. “It had this whole conversation” with the student.

The pair soon tore up their 2023 plans for Khan Academy. 

It was a stunning turn of events for DiCerbo, a Ph.D. educational psychologist and former senior Pearson research scientist who had spent more than a year on the failed Watson project. In 2016, Pearson promised that Watson would soon be able to chat with college students in real time to guide them in their studies. But it was DiCerbo’s teammates, about 20 colleagues, who had to actually train the supercomputer on thousands of student-generated answers to questions from textbooks — and tempt instructors to rate those answers. 

Like Nitta, DiCerbo recalled that at first things went well. They found a natural science textbook with a large user base and set Watson to work. “You would ask it a couple of questions and it would seem like it was doing what we wanted to,” answering student questions via text.

But invariably if a student’s question strayed from what the computer expected, she said, “it wouldn’t know how to answer that. It had no ability to freeform-answer questions, or it would do so in ways that didn’t make any sense.” 

After more than a year of labor, she realized, “I had never seen the ‘OK, this is going to work’ version” of the hoped-for tutor. “I was always at the ‘OK, I hope the next version’s better.’”

But when she got a taste of ChatGPT, DiCerbo immediately saw that, even in beta form, the new bot was different. Using software that quickly predicted the most likely next word in any conversation, ChatGPT was able to engage with its human counterpart in what seemed like a personal way.
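The next-word mechanism described above can be illustrated, in heavily simplified form, with a toy bigram model — a counting sketch of my own for illustration, not how GPT-4 actually works (which learns a neural network over tokens rather than tallying raw word pairs):

```python
from collections import Counter, defaultdict

def train_bigram(text: str):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def most_likely_next(model, word: str):
    """Predict the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram(
    "the student asks the tutor and the tutor answers the student and the tutor smiles"
)
print(most_likely_next(model, "the"))  # → tutor ("tutor" follows "the" most often)
```

Scaled up from a single sentence to a vast slice of the internet, the same kind of prediction is what lets a chatbot’s replies feel personal.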

Since its debut in March 2023, Khanmigo has turned heads with what many users say is a helpful, easy-to-use, natural language interface, though a few users have pointed out that it sometimes makes mistakes.

Surprisingly, DiCerbo doesn’t consider the popular chatbot a full-time tutor. As sophisticated as AI might now be in motivating students to, for instance, try again when they make a mistake, “It’s not a human,” she said. “It’s also not their friend.”

(AI's) not a human. It’s also not their friend.

Kristen DiCerbo, Khan Academy

Khan Academy’s own research shows the tool is effective with as little as 30 minutes of practice and feedback per week. But even as many startups promise the equivalent of a one-on-one human tutor, DiCerbo cautions that 30 minutes is not going to produce miracles. Khanmigo, she said, “is not a solution that’s going to replace a human in your life. It’s a tool in your toolbox that can help you get unstuck.”

‘A couple of million years of human evolution’

For his part, Nitta says that for all the progress in AI, he’s not persuaded that we’re any closer to a real-live tutor that would offer long-term help to most students. If anything, Khanmigo and probabilistic tools like it may prove to be effective “homework helpers.” But that’s where he draws the line. 

“I have no problem calling it that, but don’t call it a tutor,” he said. “You’re trying to endow it with human-like capabilities when there are none.”  

Unlike humans, who will typically do their best to respond genuinely to a question, the way AI bots work — by digesting pre-existing texts and other information to come up with responses that seem human — is akin to a “statistical illusion,” one Harvard Business School professor writes. “They’ve just been well-trained by humans to respond to humans.”

Researcher Sidney Pressey’s 1928 Testing Machine, one of a series of so-called “teaching machines” that he and others believed would advance education through automation.

Largely because of this, Nitta said, there’s little evidence that a chatbot will continuously engage people as a good human tutor would.

What would change his mind? Several years of research by an independent third party showing that tools like Khanmigo actually make a difference on a large scale — something that doesn’t exist yet.

DiCerbo also maintains her hard-won skepticism. She knows all about the halting early decades of teaching technology a century ago, when experimental, punch-card-operated “teaching machines” guided students through rudimentary multiple-choice lessons, often with simple rewards at the end. 

In her talks, DiCerbo urges caution about AI revolutionizing education. As much as anyone, she is aware of the expensive failures that have come before. 

Two women stand beside open drawers of computer punch card filing cabinets. (American Stock/Getty Images)

In her recent talk at Notre Dame, she did her best to manage expectations of the new AI, which seems so limitless. In one-to-one teaching, she said, there’s an element of humanity “that we have not been able — and probably should not try — to replicate in artificial intelligence.” In that respect, she’s in agreement with Nitta: Human relationships are key to learning. In the talk, she noted that students who have a person in school who cares about their learning have higher graduation rates. 

But still.

ChatGPT now has 100 million weekly users, according to OpenAI. That record-fast uptake makes her think “there’s something interesting and sticky about this for people that we haven’t seen in other places.”

Being able to engineer prompts in plain English opens the door for more people, not just engineers, to create tools quickly and iterate on what works, she said. That democratization could mean the difference between another failed undertaking and agile tools that actually deliver at least a version of Watson’s promise.

An early prototype of IBM’s Watson supercomputer in Yorktown Heights, New York. In 2011, the system was the size of a master bedroom. (Wikimedia Commons)

Seven years after he left IBM to start his new endeavor, Nitta is philosophical about the effort. He takes virtually full responsibility for the failure of the Watson moonshot. In retrospect, even his 25-year timeline for success may have been naive.

“What I didn’t appreciate is, I actually was stepping into a couple of million years of human evolution,” he said. “That’s the thing I didn’t appreciate at the time, which I do in the fullness of time: Mistakes happen at various levels, but this was an important one.”

Exclusive: For Busy Teachers, AI Could Crack Open the Dense World of Ed Research /article/exclusive-phonics-learning-styles-teachers-confounded-by-education-research-may-soon-turn-to-new-ai-chatbots-for-help/ Wed, 06 Sep 2023 11:15:00 +0000 /?post_type=article&p=714153 As students across the U.S. enter their first full school year with access to powerful AI tools like ChatGPT and Bard, many educators remain skeptical of their usefulness — and preoccupied with their potential for misuse.

But this fall, a few educators are quietly charting a different course they believe could change everything: At least two groups are pushing to create new AI chatbots that would offer teachers unlimited access to sometimes confusing and often paywalled peer-reviewed research on the topics that most bedevil them. 

Their aspiration is to offer new tools that are more focused and helpful than wide-ranging ones like ChatGPT, which tends to stumble over research questions with competing findings. And like many kids faced with questions they can’t answer, it has a frustrating tendency to make things up.




Tapping into curated research bases and filtering out lousy results would also make the bots more reliable: If all goes according to plan, they’d cite their sources.

The result, supporters say, could revolutionize education. If their work takes hold, millions of teachers for the first time could routinely access high-quality research and make it part of their everyday workflow. Such tools could also help stamp out adherence to stubborn but ill-supported fads in areas from “learning styles” to reading instruction.

So far, the two groups are each feeling their way around the vast undertaking, with slightly different approaches.

In June, the International Society for Technology in Education introduced a chatbot built on content vetted by ISTE and the Association for Supervision and Curriculum Development. (The two groups merged in 2022.) ISTE has made it available in beta to selected users. All of the chatbot’s content is educator-focused, and it’s trained solely on materials developed or approved by the two organizations. 

Richard Culatta

Now its creators expect that within about six months, the tool will also be able to scour outside, peer-reviewed education research and return “pretty understandable, pretty meaningful results” from vetted journals, said Richard Culatta, ISTE’s CEO.

“There’s this big gap between what we know in the research and what happens in practice,” he said. One reason: Most research is published in a format that “is just totally inaccessible to teachers.”

Case in point: Research by the Jefferson Education Exchange, a nonprofit supported by the University of Virginia’s Curry School of Education, found that while educators prefer research they can act on — and that’s presented in a way that applies to their work — only about 16% of teachers actually use research to inform instruction.

So he and others are building a digital tool, “purpose-built for educators by educators,” that can translate research into practice, using “very practical language that teachers understand.”

For instance, a teacher could ask the chatbot, “What does the research say about creating a healthy school culture?” or “What’s the evidence for teaching phonics to developing readers?” One could also ask it to suggest activities that are appropriate for middle school students learning about digital citizenship.

Joseph South, ISTE’s chief learning officer, said teachers want the latest research, but are up against formidable obstacles. “They have to find the article in the journal that happens to relate to the thing that they want to do,” he said. “They have to somehow understand academic-speak. They have to have the time to read this, and they have to translate it into something useful.”

While ChatGPT can comb through journals it has access to, translate and summarize the research, he said, it’s not reliable. The typical chatbot — and thus the typical end user — doesn’t know whether the results are from a credible, peer-reviewed journal or not, and it may not necessarily care.

Joseph South

“We do, though,” he said. “So we can do that filtering and let the AI do its magic.”

As with its beta version, the new chatbot will also cite the sources used to generate each response. And it’ll let users know when it simply doesn’t have enough information to return a reliable response.
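The design described above — answer only from vetted sources, cite them, and abstain when the material is thin — can be sketched as a toy retrieval loop. Everything here (the function name, the word-overlap scoring, the threshold, the sample corpus) is my own illustrative assumption, not ISTE’s actual implementation:

```python
def answer_with_citation(query: str, corpus: dict, min_overlap: int = 2) -> str:
    """Return the best-matching vetted passage with a citation,
    or abstain when no passage overlaps the query well enough."""
    query_words = set(query.lower().split())
    scored = []
    for source, passage in corpus.items():
        overlap = len(query_words & set(passage.lower().split()))
        if overlap >= min_overlap:
            scored.append((overlap, source, passage))
    if not scored:
        return "Not enough vetted information to answer reliably."
    _, source, passage = max(scored)  # highest-overlap passage wins
    return f"{passage} [source: {source}]"

# Hypothetical one-entry corpus, for illustration only.
corpus = {
    "Phonics review (vetted)": "phonics instruction helps developing readers decode words",
}
print(answer_with_citation("evidence for phonics with developing readers", corpus))
```

A production system would use semantic search over real journals rather than word overlap, but the shape is the same: restrict the knowledge base, score the match, and refuse to answer below a confidence floor.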

Developers are still in the early stages of deciding what academic journals to include. For now, they’re experimenting with a handful of key research articles, but will expand the chatbot’s range if initial prototypes prove helpful to educators.

Culatta and South, both veterans of the U.S. Department of Education, have spent years working on the research-to-practice problem, offering, in effect, translation services for research findings. “We’ve spent so much work trying to figure out how to do it and it’s just never really worked,” he said. “It’s just always been a struggle. And we actually think that this could be the first for-real, sustainable, scalable approach to taking research and getting it into language that actually could be used by teachers.”

Daniel Willingham

Daniel Willingham, a professor of psychology at the University of Virginia and a well-known translator of education research, said his limited experience with ChatGPT has shown that when asked about a subject where there’s general consensus, such as “What is the effect of sleep on memory?” it produces helpful results. But it isn’t very good at synthesizing conflicting findings.

It’s also inconsistent in its willingness to reveal, in Willingham’s words, that “‘I really don’t know anything about that.’ And so it, you know, just makes something up.”

A paid ChatGPT subscriber, Willingham said he gets “really useful” results only about 20% of the time. “But it requires plenty of verification on my part. And this is all within my area of expertise, so it’s not very hard for me to verify.”

Tapping ‘What Works’

ISTE isn’t the only organization pushing to make education research more widely accessible via chatbot. The Learning Agency, a Washington, D.C.-based consulting firm, is also testing a prototype of a bot designed to offer answers to education research queries.

Unlike ISTE’s, the agency’s tool taps an already existing, if finite, resource: the U.S. Department of Education’s What Works Clearinghouse, or more specifically its Doing What Works Library, a curated collection of materials developed by the department’s Institute of Education Sciences.

“We were inspired to basically create a special version of ChatGPT that was exposed to more high-quality educational data and research evidence on what works,” said Perpetual Baffour, the group’s research director.

In a sense, she said, much of the work had already been done, since the library, though limited, exists to translate research findings into more digestible forms for educators. The result is a prototype that offers what Baffour calls “evidence-based education advice” on topics from adolescent literacy to dropout prevention and school turnarounds.

Perpetual Baffour

Baffour noted that an app development tool allowed the agency to create a “simple but robust” prototype within a day.

At the moment, the version they’re testing is “very basic,” Baffour said. The agency is still considering what it’ll look like in the future. 

“It only knows as much as the Doing What Works Library tells it,” she said. So queries about topics that are in the library return rich results. But asking it about topics that aren’t can be problematic. 

For example, ask it about myths around the aforementioned learning styles and it defaults to a more general knowledge base scraped from Wikipedia articles, transcripts of recorded conversations and materials from “different corners of the Internet,” Baffour said. “And as you can imagine, those sources might not have the most up-to-date and accurate information about education.”

Indeed, a query about the topic returns this: “The myth concerning learning styles is that there is one specific style that works best for everyone. This is not true, as different people learn in different ways and have different preferences for how they learn. Additionally, there is no evidence that suggests that focusing on one particular learning style is more effective than focusing on multiple styles.”

Not exactly accurate or helpful.

In the first place, the widely believed “myth” holds that people with different learning styles learn best when their preferred mode of learning is indulged — not that one style works best for everyone. At a more basic level, while many people may express preferences for ways to take in new information and study — receiving instruction verbally, for example, instead of via pictures — scientists have yet to find good evidence that material tuned to these preferences improves learning. 

Unfortunately, at the moment the agency’s bot doesn’t confess whether it knows a lot or little about a topic. Baffour said they want to change that soon. For now, however, that’s just an aspiration.

“I think you’re more likely to get a confident chatbot producing inaccurate information than you are to get a self-aware chatbot admitting its false and incomplete knowledge,” she said. 

Willingham, the UVA researcher, said a useful education-focused chatbot would not just have to incorporate reliable findings, but put them in context. For example, an answer to a query about the evidence for phonics instruction would properly note that, while the record is fairly strong, a lot of mediocre research and “hyperbolic claims” made in support of alternative methods serve to cloud the overall picture — a delicate but accurate detail.

“How is an aggregator going to negotiate that?” he said. 

Asked if he thought a chatbot might soon replace him, Willingham, the author of several books and a blog that translate learning science into plain English, said he wouldn’t make any predictions. 

“I was never much of a futurist, but I hocked my crystal ball 15 years ago,” he said.

Teen Mental Health Crisis Pushes More School Districts to Sue Social Media Giants /article/teen-mental-health-crisis-pushes-more-school-districts-to-sue-social-media-giants/ Fri, 31 Mar 2023 12:30:00 +0000 /?post_type=article&p=706803 The teen mental health crisis has so taxed and alarmed school districts across the country that many are entering legal battles against the social media giants they say have helped cause it, including TikTok, Snap, Meta, YouTube and Google.

At least eleven school districts, one county, and one California county system that oversees 23 smaller districts have filed suits this year, representing roughly 469,000 students. 

Two others in Arizona are considering their own complaints, one superintendent told The 74. Eleven districts elsewhere voted to pursue similar litigation. Many others across the country are on the verge of doing the same, according to a lawyer representing a New Jersey district.




“Schools, states, and Americans across the country are rightly pushing back against Big Tech putting profits over kids’ safety online,” Sen. Richard Blumenthal, co-sponsor of the bipartisan Kids Online Safety Act, told The 74. “These efforts, proliferated by harrowing stories from families amid a worsening youth mental health crisis, underscore the urgency for Congress to act.” 

Algorithms and platform design have “exploited the vulnerable brains of youth, hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of Defendants’ social media platforms,” Seattle Public Schools claimed in the first suit filed this January.

Districts in Washington, Oregon, Arizona, New Jersey and elsewhere say tech companies intentionally designed their platforms to hook young users, exacerbating depression, anxiety, tech addiction and self-harm, and straining learning and district finances. 

But the legal fight, whether tried or settled, will not be easy, outside counsel and at least one district leader said. 

“We don’t think that this is a slam dunk case. We think it’s going to be an uphill battle. But our board and I believe that this is in the best interest of our students to do this,” said Andi Fourlis, superintendent of Arizona’s largest district, Mesa Public Schools. “It’s about making the case that we need to do better for our kids.” 

Just how badly Mesa’s teens are hurting is laid out in detail in court filings: More than a third are chronically absent, 3,500 more were involved in disciplinary incidents in 2021-22 than in 2019-20 and the district has seen a “surge” in suicidal ideation and anxiety. 

Buried in the 111-page lawsuit, a high school senior’s video essay illustrates the painful impacts of social media addiction: risky or self-destructive behavior, disconnection from friends.

Simultaneously, lawmakers are proposing bills to make platforms safer. Senate hearings are underway, featuring parents whose children died by suicide. TikTok’s CEO testified before Congress this month to address concerns about exposure to harmful content. President Joe Biden flagged the issue in his last State of the Union address.

Both legislative and legal efforts are after similar goals: changing the algorithms and product design believed to be hurting kids. Through lawsuits, districts also seek financial compensation for the increased mental health services and training they’ve been compelled to establish. 

“The harms caused by social media companies have impacted the districts’ ability to carry out their core mission of providing education. The expenditures are not sustainable and divert resources from classroom instruction and other programs,” said Michael Innes, partner with Carella Byrne, Cecchi, Olstein, Brody & Agnello, a firm representing New Jersey schools.

Previous complaints against opioid and e-cigarette companies, which levied public nuisance and negligence claims as districts’ social media filings do, resulted in multimillion dollar settlements. 

But some legal experts say there’s a key distinction in this case: Big Tech companies aren’t the ones producing content on these platforms; individuals are. And the companies have some hefty legal protections. 

“School districts are not in the business of suing people … the threshold for initiating litigation is very high,” said Dean Kawamoto, a lawyer for Keller Rohrback, the Seattle-based firm representing four districts, and thousands of others in Juul litigation. 

“I do think it says something that you’ve got a group of schools that have filed now, and I think more are going to join them,” Kawamoto added. 

Some outside counsel are skeptical. 

“I think there are questions about whether the litigation system is even a coherent way to go about this,” First Amendment scholar and Harvard Law professor Rebecca Tushnet told The 74. “It’s very hard to use individual litigation to get systemic change, excepting in particular circumstances.” 

The exceptions, she added, have clear visions and specific outcomes, like requiring a doctor on-call for safer prison conditions. Those kinds of metrics are difficult to name when it comes to algorithms and mental health. 

What precedent (or lack thereof) tells us

Social media companies’ lawyers are likely to assert free speech protections early and often, including in initial motions to dismiss.

“The conventional wisdom is that if motions to dismiss are denied in cases like this, [companies] are much more likely to settle … reality is actually a little more mixed,” Tushnet said, adding that companies fight harder when the claims go after their business models. 

An added challenge is proving causal harm — that social media companies have caused student depression, anxiety, eating disorders or self-harm. The link is one that neuroscientists and researchers are still working to establish, though experts say there’s an urgent need for more research. 

“This is a watershed moment where schools can really roll up their sleeves and do something because — not that they haven’t been in the past — but because it’s so obvious. It’s right in front of them. It’s impacting students’ education,” said Jerry Barone, chief clinical officer at Effective School Solutions, which brings mental health care to schools. 

About 13.5% of teen girls say Instagram makes thoughts of suicide worse; 17% of teen girls say it makes eating disorders worse, according to Meta’s leaked internal research, first revealed in a 2021 Wall Street Journal investigation.

Even if districts are able to provide proof, they may not ever see a judgment made. 

Public nuisance claims in tobacco and opioid mass torts were more successful in “inducing settlements, rather than in courthouse outcomes,” according to Robert Rabin, tort expert and professor at Stanford University. 

While he’s not “dismissive” of districts’ efforts, “the precedents don’t supply clear-cut support for the claims here.”

The interim

As lawyers work out the details, students hang in the balance. Some are skeptical the suits will amount to anything at all, at least in their adolescence. 

“Why do you guys waste so much time on these useless things that you know get nowhere, when you can do it with things that you know will get somewhere?” said Angela Ituarte, a sophomore at a Seattle high school. 

Many young people interviewed by The 74 described their social media use as a double-edged sword: affirming, a place where they learned about mental health or found community, particularly for queer students of color; and simultaneously dangerous, a place where they connected with adults when they were 14 and saw dangerous diets promoted.

Social media, Ituarte said, makes it seem like self-harm and disordered eating “are the solution to everything. And it’s hard to get that out of those algorithms — even if you block the accounts or say you’re not interested it still keeps popping up. Usually it’s when things are bad, too.”

In a late February letter to senators, Meta touted a promising initiative to nudge teens away from topics they’ve dwelled on for extended periods. Only 1 in 5 teens actually moved to a new topic during a weeklong trial. 

To curb cyberbullying, users now get warnings for potentially offensive comments. People only edit or delete their message 50% of the time, according to the company’s responses to Senate inquiries. 

Meta, YouTube and Google did not respond to requests for comment. TikTok told The 74 they cannot comment on ongoing litigation. The company has just started requiring users who say they are under 18 to enter a password after scrolling for an hour.

In a statement to The 74, Snap said they “are constantly evaluating how we continue to make our platform safer.” Snap has partnered with mental health organizations to launch an in-app support system for users who may be experiencing a crisis, and acknowledged that the work may never be done. 

The process has only just begun. If the suits move to trial, some districts will be chosen as bellwethers to represent the many plaintiffs, tasked with regularly contributing to a lengthy trial. 

Still, there’s no doubt in Fourlis’s mind. 

“Sometimes you have to be the first to step forward to take a bold leap so that others can follow,” she said. “Being the superintendent of the largest school district in Arizona, what we do often sets precedents, and I have to be very strategic about that responsibility.”

Disclosure: Campbell Brown, Meta’s vice president of media partnerships, is a co-founder and member of the board of directors of The 74. She played no role in the editing of this article.
