data privacy – The 74

California Agency Fines Company For Violating Ed Tech Privacy Law (March 5, 2026)


Before they could attend school football games or school plays, high school students across California had to hand their personal information over to a ticketing platform, GoFan, which then sold that data to advertisers, state privacy regulators said. The parent company, PlayOn, which has contracted with roughly 1,400 California schools, repeatedly violated state privacy law in 2023 and 2024, according to a January order filed by the state’s privacy protection agency.

The California Privacy Protection Agency, sometimes known as CalPrivacy, announced the order Tuesday, saying it is fining PlayOn $1.1 million for failing to give students and families a way to opt out of their data collection.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


PlayOn offers a slew of online products that coordinate ticket and merchandise sales for schools and youth sports organizations, along with other services, such as fundraising and streaming. Its subsidiaries include GoFan, MaxPreps, and NFHS Network, which are used by school districts stretching from Los Angeles and San Diego to Modoc, Mono, and Sierra counties, the order says. The company’s annual gross revenue is over $26 million.

When users tried to access tickets for school events through one of PlayOn’s platforms, GoFan, a pop-up appeared, prompting the ticket-holder to agree to the company’s privacy policy, which allowed the sale of personal data. There was no way to say no, the order said: The pop-up obscured the screen so that it was impossible to access the ticket without agreeing to the company’s terms.

“Students trying to go to prom or a high school football game shouldn’t have to leave their privacy rights at the door,” said Michael Macko, CalPrivacy’s head of enforcement, in a statement. “You couldn’t attend these events without showing your ticket, and you couldn’t show your ticket without being tracked for advertising. California’s privacy law does not work that way. Businesses must ensure they offer lawful ways for Californians to opt-out, particularly with captive audiences.”

PlayOn “does not admit liability for any violation” of state law, according to the disciplinary order, which effectively functions as a settlement agreement. The order also notes that the company significantly changed its privacy policy in December 2024, allowing users to opt out of data collection, bringing the company into compliance with the state law. These data privacy matters have been “fully resolved” since then, said James Dickinson, the company’s senior vice president of marketing, in an email.

The fine marks the first time that the state privacy agency has gone after a company for violating the rights of students and schools, according to the press release. The agency was formed in 2020 after voters backed a ballot measure calling for increased enforcement of data privacy laws.

Exceptions to California’s privacy law

California has some of the strongest data privacy laws in the country, including a landmark 2018 law that requires large for-profit companies to give users a relatively easy way to opt out of data collection or delete their data.

Enforcing the law can prove tricky, though. Last year, a news investigation found that more than 30 companies made it difficult for customers to exercise their privacy rights. While the companies were technically abiding by the law, which requires them to give customers a way to delete their information, they used special code to hide that information from Google search results.

The 2018 law also has a number of exceptions, including for nonprofit organizations and for companies that buy, sell, or share data from fewer than 100,000 California residents or households.

The state privacy agency is responsible for enforcing the law. In the past 12 months, the agency has found violations by a menswear company, a rural supply retailer, and an automaker, each resulting in fines ranging from $345,000 to $1.35 million. In January, the state said it fined Datamasters, a data broker, for selling the names, addresses, phone numbers, and email addresses of “millions of people with Alzheimer’s disease, drug addiction, bladder incontinence, and other health conditions for targeted advertising.” The broker also traded data on individuals’ perceived race, political views, and banking activity.

California has additional protections regarding the collection and sale of students’ data, but those laws do not necessarily cover apps and services used outside the classroom, even when that technology is a de facto requirement for participation in school sports or extracurriculars. An assemblymember from San Luis Obispo, a Democrat, introduced a bill this year that would expand the number of tech companies that must abide by California’s education privacy rules, but the laws could still leave out many popular student services, according to reporting last month.

PlayOn did not respond to questions about its compliance with California school privacy law. The PlayOn privacy policy says it doesn’t collect personal information from “minors under the age of 16 without proper consent,” but it doesn’t mention anything about students who are 16 or 17.

California law prohibits companies from selling any K-12 student’s data, regardless of age.

This article was republished under license.

Online Censorship in Schools Is ‘More Pervasive’ than Expected, New Data Shows (January 23, 2025)

Aleeza Siddique, 15, was in a Spanish class earlier this year in her Northern California high school when a lesson about newscasts got derailed by her school’s internet filter. Her teacher told the class to open up their school-issued Chromebooks and explore a list of links he had curated from the Spanish language broadcast news giant Telemundo. The students tried, but every single link turned up the same page: a picture of a padlock. 

“None of it was available to us,” Aleeza said. “The site was completely blocked.” 

She said her teacher scrambled to pivot and fill the 90-minute class with other activities. From what she recalls, they went over vocabulary lists and independently clicked through online quizzes from Quizlet — a decidedly less dynamic use of time. 




A new report by the D.C.-based Center for Democracy & Technology shows just how often some of that blocking happens nationwide. The nonprofit digital rights advocacy organization conducted its fifth annual survey of middle and high school teachers and parents as well as high school students about a range of tech issues. About 70% of both teachers and students this year said web filters get in the way of students’ ability to complete their assignments.

Virtually all schools use some type of web filter to comply with the Children’s Internet Protection Act, which requires districts taking advantage of the federal E-rate program for discounted internet and telecommunications equipment to keep kids from seeing graphic and obscene images online. An earlier investigation by The Markup, which is now a part of CalMatters, discovered far more expansive blocking by school districts than federal law requires, some of it political, mirroring culture war battles over what students have access to in school libraries. That investigation found school districts blocking access to sex education and LGBTQ+ resources, including suicide prevention. It also found routine blocking of websites students seek out for academic research. And because school districts tend to set different restrictions for students and staff, teachers can be frustrated by the filters because of how they complicate lesson planning.

Web filtering is ‘subjective and unchecked’

Elizabeth Laird, director of equity in civic technology for the center and lead author of the report, said The Markup’s reporting helped inspire additional survey questions to better understand how schools are using filters as a “subjective and unchecked” method of restricting students’ access to information. 

“The scope of what is blocked is more pervasive and value-laden than I think we initially even knew to ask last year,” Laird said. 

While past surveys have revealed how often students and teachers report disproportionate filtering of content related to reproductive health, LGBTQ+ issues and content about people of color, the center asked respondents this year if they thought content associated with or about immigrants was more likely to be blocked. About one-third of students said yes. 

Aleeza would have said yes, after her experience with Telemundo. The California teen said how often she runs into blocks depends on how much research she’s trying to do and how much of it she has to do on her school computer. When she was taking a debate class, she ran into the blocks regularly while researching controversial topics. An article in Slate magazine about LGBTQ+ rights gave her a block screen, for example, because the entire news website is blocked. She said she avoids her school Chromebook as much as possible, doing homework on her personal laptop away from school Wi-Fi whenever she can. 


Nearly one-third of teachers surveyed by the Center for Democracy & Technology said their schools block content related to the LGBTQ+ community. About half said information about sexual orientation and reproductive health is blocked. And Black and Latino students were more likely to say content related to people of color is disproportionately blocked on their school devices.

For students like Aleeza, the blocking is frustrating in practice as well as principle. 

“The amount that they’re policing is actively interfering with our ability to have an education,” she said. Often, she has no idea why a website triggers the block page. Aleeza said it feels arbitrary and thinks her school should be more transparent about what it’s blocking and why. 

“We should have a right to know what we’re being protected from,” she said.

Audrey Baime, Olivia Brandeis, and Samantha Yee, all members of the CalMatters Youth Journalism Initiative, contributed reporting for this story.

This was originally published on CalMatters.

AI Tools and Student Privacy: 9 Tips for Teachers (January 1, 2025)

Since the release of ChatGPT to the public in November 2022, the number of AI tools has skyrocketed, and many advocates now point to AI’s potential to transform education.

But districts have not been as fast in providing teachers with training. As a result, many teachers are experimenting without any guidance.

To learn about how teachers and other educators can protect student data and abide by the law when using AI tools, Chalkbeat consulted documents and interviewed specialists from school districts, nonprofits, and other groups. Here are nine suggestions from experts.




Consult with your school district about AI

Navigating the details about the privacy policies in each tool can be challenging for a teacher. Some districts list tools that they have vetted or with which they have contracts.

Give preference to these tools, if possible. When a tool has a contract with a school or a district, the company is supposed to protect students’ data and follow federal and state law, but you should still check whether your district has any recommendations on how to use it. Checking with your school’s IT or education technology department is also a good option.

It is also essential to investigate if your school or district has guidelines or policies for the general use of AI. These documents usually review privacy risks and ethical questions.

Check for reviews about AI platforms’ safety

Organizations like Common Sense Media review ed-tech tools and provide feedback on their safety.

Be careful when platforms say they comply with laws like the Family Educational Rights and Privacy Act, or FERPA, and the Children’s Online Privacy Protection Rule. According to the law, the school is ultimately responsible for children’s data and must be aware of any information it shares with a third party.

Study the AI platform’s privacy policy and terms

The privacy policy and the terms of use should provide some answers about how a company uses the data it collects from you. Make sure to read them carefully, and look for some of the following information:

  • What information does the platform collect?
  • How does the platform use the collected data? Is it used to determine which ads it will show you? Does it share data with any other company or platform?
  • For how long does it keep the collected data?
  • Is the data it collects used to train the AI model?

The list of questions that Common Sense Media uses for its privacy evaluations is available online.

You should avoid signing up for platforms that collect large amounts of data or that are not clear in their policies. One potential red flag: vague claims about “retaining personal information for as long as necessary” and “sharing data with third parties to provide services.”

Bigger AI platforms can be safer

Big companies like OpenAI, Google, Meta, and others are under more scrutiny: NGOs, reporters, and politicians tend to investigate their privacy policies more frequently. They also have bigger teams and resources that allow them to invest heavily in compliance with privacy regulations. For these reasons, they tend to have better safeguards than small companies or start-ups.

You still have to be careful. Most of these platforms are not explicitly intended for educational purposes, making them less likely to create specific policies regarding student or teacher data.

Use the tools as an assistant, not a replacement

Even though these tools provide better results when you input more information, try to use them for tasks that don’t require much information about your students.

AI tools can help provide suggestions on how to ask questions about a book, set up document templates, like an Individualized Education Program plan or a behavioral assessment, or create assessment rubrics.

But even tasks that can seem mundane can increase risks. For example, providing the tool with a list of students and their grades on a specific assignment and asking it to organize it in alphabetical order could represent a violation of student privacy.

Turn on maximum privacy settings for AI platforms

Some tools allow you to adjust your privacy settings. Look online for tutorials on the best privacy settings for the tool you are using and how to activate them. Some platforms, for example, let users opt out of having their data used to train AI models.

Doing this does not necessarily make AI tools completely safe or compliant with student privacy regulations.

Never input personal information into AI platforms

Even if you take all the steps above, do not input student information. Information that is restricted can include:

  • Personal information: a student’s name, Social Security number, education ID, names of parents or other relatives, address and phone number, location of birth, or any other information that can be used to identify a student.
  • Academic records: reports about absences, grades, and student behaviors in the school, student work, and teachers’ feedback on and assessments of student work.

This may be harder than it sounds.

If teachers upload student work to a platform to get help with grading, for example, they should remove all identification, including the student’s name, and replace it with an alias or random number that can’t be traced back to the student. It’s also wise to ensure the students haven’t included any personal information, like their place of birth, where they live or personal details about their families, friends, religious or political inclination, sexual orientation, and club affiliations.
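As a rough illustration of that alias step, a sketch like the following could run student work through a find-and-replace before upload. Everything here is hypothetical (the roster names, the alias scheme), and plain string matching will miss nicknames, misspellings, and other identifying details, so a tool like this supplements rather than replaces a careful manual read:

```python
import hashlib
import re

# Hypothetical roster; in practice this would come from the teacher's own records.
roster = ["Maria Lopez", "James O'Neil"]

def alias_for(name: str) -> str:
    # Derive a stable alias like "Student-3fa2" from a truncated hash,
    # so the same student gets the same alias across documents.
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()[:4]
    return f"Student-{digest}"

def deidentify(text: str, names: list[str]) -> str:
    # Replace each roster name (case-insensitive) with its alias before uploading.
    for name in names:
        text = re.sub(re.escape(name), alias_for(name), text, flags=re.IGNORECASE)
    return text

essay = "Maria Lopez wrote about her neighborhood."
print(deidentify(essay, roster))  # name replaced by its stable alias
```

The alias is derived rather than random so a teacher can match graded feedback back to the student without keeping a separate lookup table.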

One exception is for platforms approved by the school or the district and holding contracts with them.

Be transparent with others about using AI

Communicate with your school supervisors, principal, parents, and students about when and how you use AI in your work. That way, everyone can ask questions and bring up concerns you may not know about.

It is also a good way to model behavior for students. For example, if teachers ask students to disclose when they use AI to complete assignments, being transparent with them in turn about how teachers use AI might foster a better classroom environment.

If uncertain, ask AI platforms to delete information

In some states, the law says platforms must delete users’ information if they request it. And some companies will delete it even if you aren’t in one of these states.

Deleting the data may be challenging and not solve all of the problems caused by misusing AI. Some companies may take a long time to respond to deletion requests or find loopholes in order to avoid deleting it.

The tips listed above come from guidance published by the American Federation of Teachers; a report by the U.S. Department of Education’s Office of Educational Technology; and the list of questions Common Sense Media uses to carry out its privacy evaluations.

Additional help came from Calli Schroeder, senior counsel and global privacy counsel at the Electronic Privacy Information Center; Brandon Wilmart, director of educational technology at Moore Public Schools in Oklahoma; and Anjali Nambiar, education research manager at Learning Collider.

This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters.

Stolen Providence School District Data May Be Making Its Way Online (October 13, 2024)

Providence public school officials last Friday were about to finalize a credit monitoring agreement to provide protection for district teachers and staff after a recent ransomware attack on the district’s network.

Then over the weekend, a video preview of selected data allegedly stolen from the Providence Public School Department (PPSD) showed up on a regular website. The site is accessible via any internet browser — what’s sometimes called the “clearnet” — unlike the dark web ransom page where the cybercriminal group Medusa first claimed responsibility for the theft.

While a forensic analysis of the breach continues, the credit monitoring agreement with an unspecified vendor was finalized as of Thursday and the district was drafting a letter to go out to the staff “very soon” with information on how to access those services, spokesperson Jay G. Wégimont said in an email.




“First and foremost, the safety and security of our staff members is of utmost importance, and the District continues to make decisions with that in mind,” Wégimont said.

“We will also continue to explore any additional services we can offer to protect the security of our staff members and students.”

Meanwhile, the data breach has yet to be formally reported to the Rhode Island Attorney General’s office, said spokesperson Brian Hodge. State law requires any municipal or government agency to inform the AG’s office, credit reporting agencies, and people affected by a breach within 30 days of the breach’s confirmation.

PPSD first used the wording “unauthorized access” to describe the breach in a Sept. 25 letter from Superintendent Javier Montañez, although the Providence School Board had used the term “breach” in a public statement on Sept. 18.

Providence Mayor Brett Smiley was “encouraged” the district was advising potentially affected staff and finalizing the credit monitoring agreement, spokesperson Anthony Vega said in a statement emailed Tuesday to Rhode Island Current.

The Providence City Council declined to comment, said spokesperson Roxie Richner in an email. Gov. Dan McKee’s office did not respond to a request for comment.

‘Robert’ makes a video

Ransomware group Medusa first took public credit for the pirated PPSD data on Sept. 16, when it demanded a $1 million ransom to be paid by the morning of Sept. 25.

Rhode Island Current previously reported that the alleged ransom landing page did not provide access to files, but did show file and folder names, as well as partially obscured screenshots of the allegedly stolen data.

The clearnet-hosted leak includes a 24-minute screen recording in which someone clicks through an assortment of the allegedly leaked files and folders on an otherwise empty Windows desktop. The post sports a disclaimer that its author is “not engaged in illegal activities” and showcases leaks only for “possible information security problems.”

The author signs off: “Traditional thanks to The Providence Public School Department for the provided data. Do not skimp on information security. Always yours. Robert.”

While the uploader does not explicitly brand themself as affiliated with Medusa, the “Robert” source appears to share all the same leaks Medusa does, and both sources use the same encrypted messaging address, according to threat researchers at Bitdefender.

Ransomware attacks, and Medusa’s methodology as well, have long been associated with social engineering — like getting people to click phishing links in emails. But it’s becoming more common that outdated hardware or software are to blame, said Bill Garneau, vice president of operations at CMIT Solutions in Cranston.

“What we’ve started to see in terms of ransomware is, it’s not only business email compromise,” Garneau said. “Threat actors out there are really pursuing systems that are out of compliance.”

That could mean equipment at the end of its manufacturer-supported lifespan, or software that needs to be patched. Garneau’s company uses a cybersecurity framework crafted by the National Institute of Standards and Technology. One of its standards is to patch devices within 30 days of a patch’s release, before threat actors can exploit the vulnerabilities patches are meant to fix.

“If there’s a patch available, it’s because there’s a bad guy out there that knows that there’s a vulnerability, and there’s somebody that’s knocking on doors trying to find it,” Garneau said.

To insure or not to insure?

Cyber insurance policies can cover some costs incurred by attacks. But they can’t prevent future threats or suddenly make insecure networks better, Garneau noted.

“Insurance is great, right? But that’s not going to solve any problem,” Garneau said.

PPSD has not responded to requests about whether the district has cyber insurance. According to Lauren Greene, a spokesperson for the Rhode Island League of Cities and Towns, no public entity would disclose that information anyway. “As you can understand, it poses a security risk for municipalities to disclose if and what type of cybersecurity insurance that they have,” Greene said in an email.

“Municipalities continue to prioritize training for their staff in order to mitigate risk and draw awareness to the constantly evolving threats,” Greene added, and noted that a community’s IT staff may work across multiple areas or departments like public safety and schools.

A survey released Monday, however, showed that state-level IT officials and security officers are not feeling confident about the budgets for their states’ IT infrastructure.

“The attack surface is expanding as state leaders’ reliance on information becomes increasingly central to the operation of government itself,” Srini Subramanian, principal of Deloitte & Touche LLP, said in an interview with States Newsroom. “And CISOs (chief information security officers) have an increasingly challenging mission to make the technology infrastructure resilient against ever-increasing cyber threats.”

Those challenges were reflected in the survey numbers, which found almost half of respondents did not know their state’s budget for cybersecurity. Roughly 40% of state IT officers said they did not have enough funds to comply with regulations or other legal requirements.

That finding echoes a report from Moody’s, which scores and analyzes municipal bonds. “While robust cybersecurity practices can help reduce exposure, initiatives that are costly and require a shift in resources away from core services are a credit challenge,” wrote Gregory Sobel, a Moody’s analyst and assistant vice president, in the report.

Moody’s also noted that one survey showed 92% of local governments had cyber insurance, a twofold increase over five years. But that popularity came with higher rates: One county in South Carolina went from paying a $70,000 premium in 2021 to a $210,000 premium in 2022. Those higher costs are also in addition to stricter stipulations on risk management practices before a policy will pay out, like better firewalls, consistent data backups and multi-factor authentication.

Douglas W. Hubbard, the CEO of consulting firm Hubbard Decision Research and coauthor of “How to Measure Anything in Cybersecurity Risk,” told Rhode Island Current in an email that schools should exhaust the low-cost, shared, or free resources available to help them manage cyber risk. Examples include resources from the Cybersecurity and Infrastructure Security Agency (CISA) or a pilot program by the Federal Communications Commission for K-12 schools.

“For specific cybersecurity recommendations…there are a few things that are so fundamental that administrators don’t really even need a risk analysis to get started,” Hubbard said. They include training staff and students on best practices including strong passwords or avoiding mysterious links. Multi-factor authentication is “probably the single most effective technology a school could implement,” even if it involves an upfront cost, Hubbard said.

“The fundamental responsibilities of the schools should include at least using the resources which have been made available to them through the programs I mentioned,” Hubbard said. “If they aren’t doing at least that, there is room for blame.”

This article was corrected to show that Rhode Island state law requires municipal agencies to notify affected parties and the state Attorney General within 30 days of a data breach. The article originally stated 45 days, which is the timeframe required for individuals to report a breach. 

Rhode Island Current is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Rhode Island Current maintains editorial independence. Contact Editor Janine L. Weisman with questions: info@rhodeislandcurrent.com.

Data Privacy Advocates Raise Alarm Over NYC’s Free Teen Teletherapy Program (September 12, 2024)

New York City’s free online therapy platform for teens may violate state and federal laws protecting student data privacy, lawyers from the New York Civil Liberties Union and advocates charged in a letter Tuesday to the city’s Education and Health Departments.

Teenspace, a $26 million partnership between the city Health Department and teletherapy giant Talkspace launched in late 2023, connects city residents between ages 13 and 17 with free therapists by text, phone, or video chat.

In less than a year, roughly 16,000 students have signed up, Health Department officials said. Sign-ups disproportionately came from youth who identified as Black, Latino, Asian American, and female and who live in some of the city’s lowest-income neighborhoods.




Information shared with a therapist is subject to stringent protections under the federal Health Insurance Portability and Accountability Act, or HIPAA. But before connecting with a therapist through Teenspace, teens go through a registration process that asks for personal information like their name, school, mental health history, and gender identity. Advocates are concerned such information is being improperly collected and could be misused.

For one, teens enter the registration information before securing parental consent – a possible violation of federal student privacy laws, the letter contends.

And families don’t get a chance to review the privacy policy – which discloses that registration information can be used to “tailor advertising” and for marketing purposes – before entering the registration information, advocates allege. There’s an option for teens to request that their data be deleted from the company’s platform, but it’s hard to find, according to advocates.

“It’s all very invasive,” said Shannon Edwards, a parent and founder of AI For Families, an organization that seeks to help families navigate artificial intelligence, who co-authored the letter along with NYCLU and the Parent Coalition for Student Privacy. “It’s also very unclear that parents understand what they’re getting themselves into.”

Advocates also pointed to the risk of a potential data breach – something the city has experienced in recent years.

Advocates say similar concerns about the platform have been circulating for years, and they questioned whether city officials did sufficient due diligence or built in enough additional privacy safeguards before inking the contract.

“It’s the opacity of the relationship here, and the failure to make manifest what the city is doing to ensure there isn’t this data accumulation and sharing for inappropriate purposes,” said Beth Haroules, a senior attorney at the NYCLU who co-authored the letter.

Health Department spokesperson Rachel Vick said the agency has “taken additional steps to protect the data of Teenspace users and ensure information is not collected for personal gain, including stipulations that require all client data to remain confidential during and after the completion of the city’s contract and barring use of data for any purpose other than providing the services included in the contract.”

Client data is destroyed after 30 days if a teen doesn’t connect with a therapist, officials said.

A spokesperson for Talkspace referred questions to the Health Department.

The extent to which Teenspace is subject to state and federal laws governing student privacy in educational settings is somewhat murky, given that the contract is with the city’s Health Department, not its Education Department.

But NYCLU attorneys contend “the City cannot absolve itself of its responsibility to provide the protections inherent in federal and state laws…simply because the contract sits with DOHMH instead of DOE. The service is promoted on public school websites, and it is DOE’s responsibility to ensure that student data is protected, regardless of which City agency signs the contract.”

Parents may be more inclined to trust the platform because it has a “stamp of approval” from the school system, Edwards added.

A Health Department spokesperson didn’t specify whether the program is subject to education privacy laws, but said it’s “not a school based service.”

Teenspace has been the city’s highest-profile effort to address the ongoing youth mental health crisis.

“We are meeting people where they are with a front door to the mental health system that for too long has been too hard to find,” said Ashwin Vasan, the city’s health commissioner, in May.

Some teens have praised the program, noting it’s a way to bring mental health care to young people who may not otherwise have access.

But some mental health providers have argued it can’t replace the kind of intensive care a clinician provides, especially for kids with severe mental health challenges.

Company officials shared in May that they had helped 36 teens navigate serious incidents including reports of suicide attempts and abuse – cases they referred to child protective services, in-person therapists, or hospitals.

Talkspace CEO Jon Cohen previously told Chalkbeat the company uses an artificial intelligence algorithm to scan transcripts of therapy sessions to help identify teens at risk of suicide.

Even advocates critical of Teenspace’s privacy protections acknowledge the severe shortage of mental health providers and say teletherapy can play a role in filling the gap.

“We know you cannot find providers … there is such a need,” said Haroules. But advocates said the city can do more to ensure its vendors are meeting strict standards for data privacy, especially with such sensitive information.

“Everyone thinks, well, mental health is important for kids, these kinds of services are required … when on the other side is: ‘How are they getting to it?’” said Edwards. “It doesn’t matter what the app is, there has to be a standard.”

This story was originally published by Chalkbeat. Chalkbeat is a nonprofit news site covering educational change in public schools. Sign up for their newsletters.

L.A. Schools Probe Charges its Hyped, Now-Defunct AI Chatbot Misused Student Data
Wed, 10 Jul 2024

Independent Los Angeles school district investigators have opened an inquiry into claims that its $6 million AI chatbot — an animated sun named “Ed” celebrated as an unprecedented learning acceleration tool until the company that built it collapsed and the district was forced to pull the plug — put students’ personal information in peril.

Investigators with the Los Angeles Unified School District’s inspector general’s office conducted a video interview with Chris Whiteley, the former senior director of software engineering at AllHere, after he told The 74 his former employer’s student data security practices violated both industry standards and the district’s own policies. 

Whiteley told The 74 he had alerted the school district, the IG’s office and state education officials earlier to the data privacy problems with Ed but got no response. His meeting with investigators occurred July 2, one day after The 74 published its story outlining Whiteley’s allegations, including that the chatbot put students’ personally identifiable information at risk of getting hacked by including it in all chatbot prompts, even those where the data weren’t relevant; sharing it unnecessarily with other third-party companies; and processing prompts on offshore servers in violation of district student privacy rules.


In an interview with The 74 this week, Whiteley said the officials from the district’s inspector general’s office “were definitely interested in what I had to say,” as speculation swirls about the future of Ed, its ed tech creator AllHere and broader education investments in artificial intelligence. 

“It felt like they were after the truth,” Whiteley said, adding, “I’m certain that they were surprised about how bad [students’ personal information] was being handled.”

To generate responses to even mundane prompts, Whiteley said, the chatbot processed the personal information for all students in a household. If a mother with 10 children asked the chatbot a question about her youngest son’s class schedule, for example, the tool processed data about all of her children to generate a response. 

“It’s just sad and crazy,” he said.

The inspector general’s office directed The 74’s request for comment to a district spokesperson, who declined to comment or respond to questions involving the inquiry.

While the conversation centered primarily on technical aspects of the company’s data security protocols, Whiteley said investigators also probed him on his personal experiences at AllHere, a workplace he described as abusive, and on the company’s finances.

Whiteley was laid off from AllHere in April. Two months later, a notice posted online said a majority of its 50 or so employees had been furloughed due to its “current financial position,” and the LAUSD spokesperson said company co-founder and CEO Joanna Smith-Griffin had left. The former Boston teacher and Harvard graduate was successful in raising $12 million in venture capital for AllHere and appeared with L.A. schools Superintendent Alberto Carvalho at ed tech conferences and other events throughout the spring touting the heavily publicized AI tool they partnered to create.

Just weeks ago, Carvalho spoke publicly about how the project had put L.A. out in front as school districts and ed tech companies nationally race to follow the lead of generative artificial intelligence pioneers like ChatGPT. But the school chief’s superlative language around what Ed could do on an individualized basis with 540,000 students had some industry observers and AI experts speculating it was destined to fail.

The chatbot was supposed to serve as a “friendly, concise customer support agent” that replied “using simple language a third grader could understand” to help students and parents supplement classroom instruction, find assistance with kids’ academic struggles and navigate attendance, grades, transportation and other key issues. What they were given, Whiteley charges, was a student privacy nightmare. 

Smith-Griffin recently deactivated her LinkedIn page and has not surfaced since her company went into apparent free fall. Attempts to reach AllHere for comment were unsuccessful and parts of the company website have gone dark. LAUSD said earlier that AllHere is for sale and that several companies are interested in acquiring it.

The district has already paid AllHere $3 million to build the chatbot and “a fully-integrated portal” that gave students and parents access to information and resources in a single location, the district spokesperson said in a statement Tuesday, adding that the district “was surprised by the financial disruption to AllHere.”

AllHere’s collapse represents a stunning fall from grace for a company that was named among the world’s top education technology companies by Time Magazine just months earlier. Scrutiny of AllHere intensified when Whiteley became a whistleblower. He said he turned to the press because his concerns, which he shared first with AllHere executives and the school district, had been ignored.

Whiteley shared source code with The 74 showing that students’ information had been processed on offshore servers. Seven out of eight Ed chatbot requests, he said, were sent to places like Japan, Sweden, the United Kingdom, France, Switzerland, Australia and Canada.

‘How are smaller districts going to do this?’

What district leaders failed to do as they heralded their new tool, Whiteley said, was conduct sufficient audits. As L.A. — and school systems nationwide — contract with a laundry list of tech vendors, he said, it’s imperative that they understand how third-party companies use students’ information.

“If the second-biggest district can’t audit their [personally identifiable information] on new or interesting products and can’t do security audits on external sources, how are smaller districts going to do this?” he asked.

Over the last several weeks, the district’s official position on Ed has appeared to shift. In late June, when the district spokesperson said that several companies were “interested in acquiring Allhere,” they also said the tool would “continue to provide this first-of-its-kind resource to our students and families.” In its initial response to Whiteley’s allegations published July 1, the spokesperson said that education officials would “take any steps necessary to ensure that appropriate privacy and security protections are in place in the Ed platform.”

In the Los Angeles Times, a district spokesperson said the chatbot had been unplugged on June 14. The 74 asked the spokesperson to provide documentation showing the tool was disabled last month but didn’t get a response.

Even after June 14, Carvalho continued to boast publicly about LAUSD’s foray into generative AI and what he described as the district’s work with third-party vendors.

On Tuesday, the district spokesperson told The 74 that the online portal — even without a chatty, animated sun — “will continue regardless of the outcome with AllHere.” In fact, the project could become a source of district revenue. Under the contract between AllHere and LAUSD, which was obtained by The 74, the chatbot is the property of the school district, which was set to receive 2% in royalty payments from AllHere “should other school districts seek to use the tool to benefit their families and students.” 

In the statement Tuesday, the district spokesperson said that officials chose to “temporarily disable the chatbot” amid AllHere’s uncertainty and that it would “only be restored when the human-in-the-loop aspect is re-established.” 

Whiteley agreed that the district could maintain the student information dashboard without the chatbot and, similarly, that another firm could buy what remains of AllHere. He was skeptical, however, that Ed the chatbot would live another day because “it’s broken.”

“The name AllHere,” he said, “I think is dead.”
