Opinion: Protecting Children Online Takes Technology, Human Oversight and Accountability (Thu, 24 Apr 2025)

The rise of social media and online gaming has transformed how children interact, learn and play. While social media and gaming platforms offer opportunities for connection and creativity, they also present serious risks, as online predators, traffickers and exploitative individuals increasingly use these platforms to groom, exploit and manipulate minors.

A 2023 report by the National Center for Missing & Exploited Children documented 32 million instances of child sexual abuse material being flagged across social media and gaming platforms. Law enforcement officials have also warned about the rising number of cases in which predators use these platforms to groom children through manipulation, coercion and deception.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


A recent lawsuit charging that the platforms “facilitated the criminal victimization of a 13-year-old child” highlights the systemic vulnerabilities that make young people easy targets. Despite Roblox’s stated safety measures, which cite ‘frequent’ audits and improvements to its algorithms to detect and block behavior that violates its Terms of Use, the lawsuit says loopholes allowed inappropriate content and predator interactions to persist. Discord, known for its private, invitation-only servers, has been cited multiple times for hosting unmoderated spaces where illicit activities thrive.

In fact, the National Society for the Prevention of Cruelty to Children found that platforms like Instagram, Snapchat and TikTok accounted for a large share of online grooming cases in the U.K. With children spending an average of four to seven hours a day online, exposure to potential harm is greater than ever.

While no single group or entity can solve this crisis alone, social media, gaming and tech companies must prioritize user safety. Rapid advances in technology and artificial intelligence have made it possible to implement safeguards in a far more streamlined and automated way than ever before. But technology enhancements alone are not enough; a multi-pronged approach that combines technology, human oversight and accountability is necessary, with changes such as:

  • Stronger content moderation: AI-powered filtering is helpful but flawed. More human oversight is needed to identify harmful content and shut down predator accounts.
  • Improved reporting mechanisms: Users should have an easy, one-click way to report inappropriate content, with clear follow-up actions.
  • Age verification enhancements: Current systems are easily bypassed. More stringent ID-based verification should be mandatory.
  • Proactive predator detection: Platforms should use behavioral analysis to flag predatory activity before harm occurs.
  • Increased transparency and accountability: Lawsuits like the one against Roblox and Discord prove that self-regulation isn’t enough. Social media companies must be held legally responsible for failing to protect children.
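The first two changes above, automated filtering paired with human review, amount to a triage pipeline. As a rough illustration (the scoring model, thresholds and function names here are hypothetical, not any platform's actual system), such a pipeline might act automatically only on unambiguous risk scores and queue borderline content for human moderators:

```python
def triage(messages, classify, block_threshold=0.9, review_threshold=0.5):
    """Split messages into auto-blocked, human-review, and allowed buckets.

    `classify` stands in for any model returning a risk score in [0, 1].
    Only unambiguous scores are handled automatically; borderline cases
    are routed to human moderators, reflecting that AI filtering alone
    is flawed.
    """
    blocked, review, allowed = [], [], []
    for msg in messages:
        score = classify(msg)
        if score >= block_threshold:
            blocked.append(msg)       # high confidence: remove automatically
        elif score >= review_threshold:
            review.append(msg)        # uncertain: escalate to a human
        else:
            allowed.append(msg)       # low risk: let through
    return blocked, review, allowed
```

The design choice is the middle band: widening it sends more content to human reviewers, trading moderation cost for fewer automated mistakes.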

With so many stakeholders — tech companies, parents, lawmakers and law enforcement — it’s easy for the legal and ethical responsibility for protecting minors to become blurred. But a crisis this large cannot be solved by any single party acting alone.

Parents and guardians must start having conversations with their children about online safety — not tomorrow, but tonight. They must realize they are the first line of defense in protecting young people from online predators, so extra vigilance, open dialogue and education are critical. Minors must also be taught about red flags, processes for reporting inappropriate content and the importance of talking to a trusted adult when put in a difficult or inappropriate situation, so they can navigate online spaces safely.

But while all must share some responsibility, accountability is paramount. As governments worldwide grapple with how to balance child protection and internet freedom, legislators must push for stricter regulations and penalties for tech companies that fail to safeguard children. And tech companies must begin upholding a safety-by-design standard that invests in better detection of harmful content, grooming patterns and suspicious behavior.

Protecting children in the digital age requires constant vigilance, policy enforcement and education. It’s time to turn the tide against online predators and create a safer digital world where children can explore, play and learn without fear of exploitation.

Here’s How Teens are Preparing for a Minefield of Election Misinformation (Mon, 04 Nov 2024)

This story was published in collaboration with Headway, a new initiative at The New York Times. Chalkbeat and Headway have been listening to educators and high school students since February, and we have heard from more than 1,000 students and 200 teachers across the nation.

This presidential election year, young Americans are navigating a chaotic world of information, often with limited tools to distinguish what’s credible, what’s questionable, and what’s downright false.

A recent survey found that while many young people can detect images generated by artificial intelligence with ease, they struggle to differentiate news from commentary and advertisements, and regularly encounter conspiracy theories on social media. Eight in 10 respondents said they believed at least one of those conspiracy theories.


Teenagers across the country told us that they regularly encountered false information online about the election between Vice President Kamala Harris and former President Donald J. Trump. Some teachers have dedicated class time to media literacy and fact-checking.

And many students have told us they have gained confidence in spotting falsehoods. We asked more than 1,000 students what tips them off that a piece of information might be false or misleading, how they verify information, and what advice they have for other teenagers. Here’s what we heard.

Responses have been edited and condensed for length and clarity.

How teens know if information is sketchy, made up or manipulated

“If the content I’m seeing is triggering an extreme emotional reaction in me — rage, fear or joy, to name a few — without offering nuanced context, it leads me to think that it might be designed to mislead. When I encounter something that seems absolutely certain about morally and politically complex topics, such as the Israel-Hamas war, without acknowledging alternative views or uncertainties, I suspect it’s oversimplifying reality to push an agenda.”

— Sena Chang, 18

College freshman at Princeton University in New Jersey

“Articles that sound sketchy, made up, or manipulated are a red flag. Some media sources get rid of the bits and pieces of context that make a situation understandable. And media outlets sometimes contradict each other. Check and cross-check media. When a true piece of media spreads like wildfire, some media outlets will try and get attention from the situation and end up spreading lies about the situation. That’s why I find most articles about popular controversies annoyingly eye-rolling.”

— Antonette Davis, 14

Freshman at Central High School in Philadelphia

A single source doesn’t cut it for verifying what’s true

“I verify my information by getting it from multiple sources, not just people online who are crediting the original article I read. I also look at the information presented in the article from the perspective of a person who doesn’t know anything about the topic and see if the article and the ideas presented still make sense.”

— Yoni Zacks, 17

Senior at the Blake School in Minneapolis

“More often than not I look it up on Google and read about it on a more reliable website. For example, if an article makes a claim about a piece of legislation, I try to find the full text of the cited legislation to better understand what it’s saying.”

— Olivia Garrison, 17

Graduated in 2023 from Davidson Academy in Reno, Nevada

“There’s a tool called Google Reverse Image Search that I use to check the origins of viral images or memes to see where they first appeared and if they’ve been repurposed out of context. During events like the presidential debate, I also looked at multiple websites offering real-time fact-checking like The New York Times to help contextualize what I was hearing and identify when what the candidates were saying was misinformation.”

— Sena Chang

“To verify information, I try to listen directly to candidates or their campaigns. I find this is the easiest way to understand the candidate’s policy plans, opinions on certain issues, and overall decorum. While commentary can be helpful, it often includes opinions that make me perceive certain things a certain way. Therefore, I find it important to directly hear from a political candidate first. Afterward, I listen to and watch video media with commentary. It helps me compare my understanding to someone else’s and clarify things I might not have fully understood.”

— Meghan Pierce, 18

Freshman at the University of Illinois Urbana-Champaign in Champaign, Illinois

How young people navigate a world of misinformation

“As a teenager, I get a lot of my information from social media. I know many other teenagers get their information this way, too, so my word of advice is to be aware of the algorithm and how you’re fed information usually from one side. You’re not getting the complete story, so do your research instead of trusting one source!”

— Emma Luu, 17

Junior at Pine Creek High School in Colorado Springs, Colorado

“Check anything you think is misleading with a quick search and cross-check if it’s legitimate or not.”

— Arnav Goyal, 14

Freshman at Olentangy Liberty High School in Powell, Ohio

“Become aware of media bias, and do your best to consider different perspectives and stay open-minded.”

— Lucas Robbins, 17

Senior at Mandela International Magnet School in Santa Fe, New Mexico

“My (unpopular) take is that fact-checking is easier than it seems. … ​Social media serves as an integral egalitarian news source where anyone can create and share primary source information no matter where they live in the world. However, using social media as a sole source of information can be dangerous. Sometimes even recognizing satirical news sources is hard — I have been a victim of thinking The Onion was a real news source. You don’t have to research every single headline you ever see. The internet can be an overload of information at times, and choosing to disconnect is a skill young people need. However, if you see something that raises eyebrows, understanding the context is just a Google search away.”

— Kush Kaur, 17

Freshman at Collin College in McKinney, Texas

Teenagers are inundated daily with a mix of credible information and fake news. Out of necessity, they’re sharpening their instincts to identify misinformation and building skills to verify or debunk it. Their advice is clear: Stay mindful of algorithmic influence, avoid relying on a single source, and remember that it’s OK to step back when it all feels overwhelming.

Need more insights? Explore the resources below.

Caroline Bauman is the deputy managing editor for engagement at Chalkbeat. Reach her at cbauman@chalkbeat.org.

Erica Meltzer is the national editor at Chalkbeat, where she covers education policy and politics. Reach her at emeltzer@chalkbeat.org

This was originally published by Chalkbeat, a nonprofit news site covering educational change in public schools.

Computer Programs Monitor Students’ Every Word in the Name of Safety (Sat, 26 Oct 2024)

Whether it’s a research project on the Civil War or a science experiment on volcano eruptions, students in the Colonial School District near Wilmington, Delaware, can look up just about anything on their school-provided laptops.

But in one instance, an elementary school student searched “how to die.”

In that case, Meghan Feby, an elementary school counselor in the district, got a phone call through a platform called GoGuardian Beacon, whose algorithm flagged the phrase. The system, sold by educational software company GoGuardian, allows schools to monitor and analyze what students are doing on school-issued devices and flag any activity that signals a risk of self-harm or threats to others.


The student who had searched “how to die” did not want to die and showed no indicators of distress, Feby said — the student was looking for information but in no danger. Still, she values the program.

“I’ve gotten into some situations with GoGuardian where I’m really happy that they came to us and we were able to intervene,” Feby said.

School districts across the country have widely adopted such computer monitoring platforms. With the youth mental health crisis worsened by the COVID-19 pandemic and school violence affecting more K-12 students nationwide, teachers are desperate for a solution, experts say.

But critics worry about the lack of transparency from companies that have the power to monitor students and choose when to alert school personnel. Constant student surveillance also raises concerns regarding student data, privacy and free speech.

While available for more than a decade, the programs saw a surge in use during the pandemic as students transitioned to online learning from home, said Jennifer Jones, a staff attorney at the Knight First Amendment Institute.

“I think because there are all kinds of issues that school districts have to contend with — like student mental health issues and the dangers of school shootings — I think they [school districts] just view these as cheap, quick ways to address the problem without interrogating the free speech and privacy implications in a more thoughtful way,” Jones said.

According to the most recent youth risk behavior survey from the federal Centers for Disease Control and Prevention, nearly all indicators of poor mental health, suicidal thoughts and suicidal behaviors increased from 2013 to 2023. During the same period, the percentage of high school students who were threatened or injured at school, missed school because of safety concerns or experienced forced sex also increased, according to the CDC.

And the threat of school shootings remains on many educators’ minds. Since the Columbine High School shooting in 1999, more than 383,000 students have experienced gun violence at school, according to a Washington Post database.

GoGuardian CEO Rich Preece told Stateline that about half of the K-12 public schools in the United States have installed the company’s platforms.

As her school’s designee, Feby gets an alert when a student uses certain search terms or combinations of words on their school-issued laptops. “It will either come to me as an email, or, if it is very high risk, it comes as a phone call.”

Once she’s notified, Feby will decide whether to meet with the student or call the child’s home. If the system flags troubling activity outside of school hours, GoGuardian Beacon contacts another person in the county — including law enforcement, in some school districts.
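The workflow Feby describes amounts to severity- and time-based routing of alerts. A minimal sketch, with field names and tiers invented for illustration (GoGuardian's actual logic is not public):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    student: str
    phrase: str        # the flagged search term or phrase
    high_risk: bool    # whether the system rated the flag as very high risk
    after_hours: bool  # whether it fired outside school hours

def route(alert: Alert) -> str:
    # After-hours alerts go to a designated out-of-school contact
    # (law enforcement, in some districts); during school hours,
    # high-risk flags trigger a phone call to the school designee
    # and lower-risk ones an email.
    if alert.after_hours:
        return "county contact"
    return "phone call" if alert.high_risk else "email"
```

For example, `route(Alert("student", "how to die", high_risk=True, after_hours=False))` would return `"phone call"` under this sketch.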

Feby said she’s had some false alarms. One student was flagged because of the song lyrics she had looked up. Another one had searched for something related to anime.

About a third of the students in Feby’s school come from homes where English isn’t the first language, so students often use worrisome English terms inadvertently. Kids can also be curious, she said.

Still, having GoGuardian in the classroom is important, Feby said. Before she became a counselor 10 years ago, she was a school teacher. And after the 2012 Sandy Hook Elementary School mass shooting, she realized school safety was more important than ever.

Data and privacy

Teddy Hartman, GoGuardian’s head of privacy, taught high school English literature in East Los Angeles and was a school administrator before joining the technology company about four years ago.

Hartman was brought to GoGuardian to help with creating a robust privacy program, he said, including guardrails on its use of artificial intelligence.

“We thought, ‘How can we co-create with educators, the best of the data scientists, the best of the technologists, while also remembering that students and our educators are first and foremost?’” Hartman said.

GoGuardian isn’t using any student data outside of the agreements that school districts have allowed, and that data isn’t used to train the company’s AI, Hartman said. Companies that regulate what children can do online are also required to adhere to federal rules regarding the safety and privacy of minors, including the Family Educational Rights and Privacy Act and the Children’s Online Privacy Protection Rule.

But privacy experts are still concerned about just how much access these types of companies should have to student data.

School districts across the country are spending hundreds of thousands of dollars on contracts with some of the leading computer monitoring vendors — including GoGuardian, Gaggle and others — without fully assessing the privacy and civil rights implications, said Clarence Okoh, a senior attorney at the Center on Privacy and Technology at the Georgetown University Law Center.

In 2021, while many schools were just beginning to see the effects of online learning, The 74, a nonprofit news outlet covering education, published an investigation into how Gaggle was operating in Minneapolis schools. Hundreds of documents revealed how students at one school system were subject to constant digital surveillance long after the school day was over, including at home, the outlet reported.

That level of pervasive surveillance can have far-reaching implications, Okoh said. For one, in jurisdictions where legislators have expanded censorship of “divisive concepts” in schools, including critical race theory and LGBTQ+ themes, the ability for schools to monitor conversations including those terms is concerning, he said.

A report by the Electronic Frontier Foundation, a nonprofit digital rights group based in San Francisco, illustrates what kinds of keyword triggers are blocked or flagged for administrators. In one example, GoGuardian had flagged a student for visiting the text of a Bible verse containing the word “naked,” the report said. In another instance, a Texas House of Representatives site with information regarding “cannabis” bills was flagged.
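False positives like the EFF's examples follow directly from context-free keyword matching. A toy illustration (the keyword list and matching rule are invented; vendors' real filters are more elaborate but are prone to the same failure mode):

```python
FLAGGED_KEYWORDS = {"naked", "cannabis"}

def naive_flag(text: str) -> set:
    # Flags any text containing a listed keyword, with no notion
    # of context or intent: scripture and legislative summaries
    # are treated the same as genuinely harmful content.
    words = {w.strip('.,;:"') for w in text.lower().split()}
    return words & FLAGGED_KEYWORDS

naive_flag("And they were both naked, and were not ashamed.")  # → {'naked'}
naive_flag("Summary of bills regulating medical cannabis")     # → {'cannabis'}
```

Because the filter sees only tokens, not meaning, reducing such false positives requires either human review of flags or models that account for surrounding context.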

GoGuardian and Gaggle both also dropped LGBTQ+ terms from their keyword lists after the foundation’s initial records request, the group said.

But getting a full understanding of the way these companies monitor students is challenging because of a lack of transparency, Jones said. It’s difficult to get information from private tech companies, and the majority of their data isn’t made public, she said.

Do they work?

Years before the 2022 shooting at Robb Elementary School in Uvalde, Texas, the school district purchased a technology service to monitor what students were doing on social media, according to news reports. The district sent two payments to the Social Sentinel company totaling more than $9,900, according to those reports.

While the cost varies, some school districts are spending hundreds of thousands of dollars on online monitoring programs. Muscogee County School District in Georgia paid $137,829 in initial costs to install GoGuardian on the district’s Chromebooks. In Maryland, Montgomery County Public Schools did not renew GoGuardian for the 2024-2025 school year after spending $230,000 annually on it, according to the Wootton Common Sense, a student newspaper.

Despite the spending, there’s no way to prove that these technologies work, said Chad Marlow, a senior policy counsel at the American Civil Liberties Union who authored a on education surveillance programs.

In 2019, Bark, a content monitoring platform, claimed to have helped prevent 16 school shootings in a post describing its Bark for Schools program. The Gaggle company website says it helped save 5,790 lives between 2018 and 2023.

These data points are measured by the number of alerts the systems generate that indicate a student may be very close to harming themselves or others. But there is little evidence that this kind of school safety technology is effective, according to the ACLU report.

“You cannot use data to say that, if there wasn’t an intervention, something would have happened,” Marlow said.

Computer monitoring programs are just one example of an overall increase in school surveillance nationwide, including cameras, facial recognition technology and more. And increased surveillance does not necessarily deter harmful conduct, Marlow said.

“A lot of schools are saying, ‘You know what, we’ve got $50,000 to spend, I’m going to spend it on a student surveillance product that doesn’t work, instead of a door that locks or a mental health counselor,’” Marlow said.

Some experts are advocating for more mental health resources, including hiring more guidance counselors, and for school policies that support mental health, which could prevent violence or suicide, Jones said. Programs such as volunteer work or community events also can contribute to emotional and mental well-being.

But that’s in an ideal world, GoGuardian’s Hartman said. Computer monitoring platforms aren’t a complete solution to the youth mental health and violence crisis, but they aim to help, he said.

“We were founded by engineers,” Hartman said. “So, in our slice of this world, is there something we can do, from a school technology perspective that can help by being a tool in the toolbox? It’s not an end-all, be-all.”

Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger with questions: info@stateline.org.
