Bernstein: ‘There’s a Window of Opportunity to Create Change’ in AI Chatbots (Nov. 18, 2025)

Chatbot developer Character.AI has said it will ban users under 18 years old from using its virtual companions, an unprecedented move that comes after the mother of a 14-year-old user sued the company last year, saying the boy talked to a Character.AI chatbot almost constantly in the months before he killed himself in February 2024.

The “dangerous and untested” chatbot, the mother said, “abused and preyed on my son, manipulating him into taking his own life.” It essentially assisted his suicide, the mother alleges, prompting him to isolate from friends and family and at one point even asking if he had a suicide plan, according to the lawsuit.




In its Oct. 29 announcement, the company said the change will go into effect no later than Nov. 25. Before then, Character.AI will limit teen users to two hours per day with chatbots, ramping that limit down in the coming weeks.

It also said it will establish its own AI Safety Lab, an independent non-profit “dedicated to innovating safety alignment for next-generation AI entertainment features.”

To offer perspective on the move and on issues surrounding AI safety, privacy and digital addiction, The 74’s Greg Toppo spoke with Bernstein, a Seton Hall University law professor. Bernstein has also created a school outreach program for students and parents, introducing many for the first time to the idea of “technology overuse.”

An intellectual property lawyer, Bernstein noticed around 2015 or 2016 that “things were changing around me” when it came to technology. “I had three small kids, and I realized that I would go to birthday parties — the kids are not talking to each other. They’re looking at their phones! I’d go to see school plays, and I couldn’t see my kids on the stage because everybody was holding their phones in front of them.”

Likewise, she felt less productive “because I was constantly texting and emailing instead of focusing.”

But it wasn’t until whistleblowers began revealing the hidden designs behind so many social media tools that Bernstein considered how she could help herself and others limit their use.

In 2021, the whistleblower Frances Haugen, the primary source for The Wall Street Journal’s Facebook Files series, told congressional lawmakers that her employer’s products “harm children, stoke division, and weaken our democracy.” Creating better, safer social media was possible, Haugen said, but Facebook “is clearly not going to do so on its own.”

In her testimony, Haugen zeroed in on the social media giant’s algorithm and designs. In her writing and speaking, Bernstein maintains that tech companies like Facebook — rebranded as Meta — manipulate us to keep us online as long as possible, with invisible designs that “target our deepest human vulnerabilities.” For instance, they use a design called infinite scroll, prominently on display on Facebook and Instagram, in which the page never ends. “We just keep scrolling,” she wrote recently. “They took away our stopping cues.”

Similarly, video apps such as YouTube and TikTok rely on autoplay, in which one video automatically follows another indefinitely.

In 2023, Bernstein put her findings into a book. Since then, dozens of state attorneys general and school districts have sued to force social media companies to reform — and Bernstein says this approach may also help parents and schools battle the growing threat of AI companion bots.

Late last month, a bipartisan group of U.S. senators introduced legislation to make AI companions off-limits to minors. Sen. Josh Hawley, R-Mo., a co-sponsor, said more than 70% of kids now use them. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide,” he wrote. “We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”

The move comes weeks after the Federal Trade Commission said it was investigating seven chatbot developers, saying it was looking into “how these firms measure, test and monitor potentially negative impacts of this technology on children and teens.”

In her conversation with The 74, Bernstein said the FTC probe amounts to “another pressure point” that may help change how tech companies operate. “But it’s not just the FTC. It’s the lawsuits, and it’s bad PR that comes from the lawsuits, and hopefully there’ll be regulation. Litigation is expensive. Investors might not want to invest in these new products because there’s risk.”

This conversation has been edited for clarity and length.

The obvious interest we have in this is that we’re seeing Character.AI’s new policy, which limits access to its chatbot companions to users 18 or older. I imagine folks like you would say it’s only the first step.

Just the fact that they are taking some precautions means hopefully some kids will not be exposed to what’s been happening — convincing them to kill themselves, convincing them to not talk to their parents, to stay away from their friends. That’s a good thing. 

On the other hand?

I’ve researched how tech companies, especially Meta and other companies, have been behaving for years. So I’m a bit suspicious, because we tend to see these kinds of moves when they’re threatened legally. So it’s not so surprising that it’s happening. They’re under pressure.

In my mind, there are two questions: First of all, what will this look like exactly? In the past, for example, you would see Meta, every time there’s a big privacy breach, they would apologize and say, “We’re fixing it,” and they’ll fix something small and not fix the big thing. So what are they really doing? What kind of age verification mechanisms are they going to use? Secondly, they said they’re creating some space for teens. What is this going to look like? We don’t know. And I believe that until there’s real regulation at stake, we can’t be sure that they will take real precautions. 

I read a piece earlier this year in which you used the phrase “collective legal action,” saying that this is what’s needed to exert pressure on tech companies to change their designs, which trap users into “overuse.” That’s a fairly recent development, correct?

At the beginning, the people who were writing on this were mostly psychologists. Parents thought it was their own fault. The idea was, “Let me just fix my habits.” It’s self-help. The books that came before me were mostly talking about self-help methods. And when I was thinking about collective action, I realized: Parents can’t really change things by themselves, because you can’t isolate your kid and not give them a cell phone, not give them social media. It becomes an endless fight. And so I thought this has to be changed through collective action, through pressure — through governmental pressure, litigation. 

Jonathan Haidt’s book talks about collective action through parents doing things together in order to not have your kid be the only one who does not have social media or a phone. The idea is that it’s not our fault. It has to be done differently.

And to your point, a lot of this is by design, whether it’s social media or games or AI companions. By design, they’re meant to keep you there, keep you in place, keep you engaged. That’s something that, until recently, was not on a lot of people’s radar.

It took whistleblowers coming out and explaining how it works for us to understand it as a business model. There’s no accident. We’re getting these products for free: Gmail for free, Facebook for free. We are paying with our time and our data. They collect data on us in order to target advertising — that’s how they make money. And they need us online for as long as possible so they can collect the data — and also so we will see the ads. So they need to find ways to keep us online. And there are different mechanisms like the infinite scroll. And they come up with new ones. AI companions have new addictive mechanisms: the way they talk to you, they always flatter you. For kids it’s even more addictive, but even for adults it’s, “You’re always doing a great job.”

It’s meant to keep you talking, meant to keep you engaged. You focus a lot on games and social media, but it strikes me that AI companions make those things seem quaint in terms of their addictive qualities, or the potential for real peril.

I agree with you. If you have a spectrum where social media is addictive — people spend many hours online, and they’re not interacting face-to-face — that’s an issue. And you see this with AI companions too. But what’s concerning about AI companions is that it’s much worse for kids. If you think about it, if you’re a kid and you go to middle school, kids are not nice. It’s much nicer to chat with somebody who’s always nice to you. Falling in love and getting your heart broken is not fun. There are many websites that just offer girlfriends that cater to you. So for me, the scariest thing is that kids will just never really develop the skills to have these relationships. And some adults may also come to prefer the bots.

About a year ago, I wrote a piece in which I talked to a college student, maybe 19 or 20 years old, who admitted that essentially he had outsourced advice about his romantic life to ChatGPT — he had a girlfriend, and whenever they had a fight or disagreement, he would excuse himself, go into the bathroom and ask ChatGPT what he should be doing. I can see that both ways: On the one hand, it just seems incredible. On the other hand, I can see where he’s basically looking for good advice. He’s looking for guidance. What do you make of that?

People say you can get advice, and you can practice your dating skills. I’ll give you something that happened to me, which is on a different scale: I was traveling abroad, and I was in this restaurant, and the menu was in a different language. So what did I do? I took a picture of the menu and uploaded it to ChatGPT and got it translated to English. While I was doing it, a young man came up to my partner and asked to translate. So what happened? I was already busy looking at my phone because I had a translation. My partner was speaking to this young man who was very happy to speak, and they were having a great conversation. 

That’s an example of the kind of things we’re giving up. This guy you wrote about, instead of going to the bathroom, maybe could have asked a friend, developed a deeper relationship with a friend. Maybe they would share experiences. But he gets used to getting the immediate answer from somebody else and doesn’t develop these relationships.

We miss out on the possibility of having a human interaction. 

Yes.

In its announcement, Character.AI actually apologized to its younger users, saying that many of them had told the company how important these characters had become to them. And I’ve heard that before. I wonder: How do we as adults start to think about the flip side of this, that it’s difficult for young people to tear themselves away from these things they’ve created? Do you have any sympathy for that?

I have concern, actually, because these kids, sometimes they kill themselves over these bots. So I am concerned about what will happen to kids who are very attached when these bots are suddenly gone. And you hear news stories even of adults who suddenly lost characters they were attached to. It’s a bit like, how do you get people off an addiction when you suddenly cut them off? These are things we’ve never even thought of.

Is there anything I haven’t asked you that you think is an important piece of this?

An important piece of this is that you don’t yet have every teen, every kid, attached to an AI companion. So there’s a window of opportunity to create change. Social media is much more difficult, because by the time we realized how bad it was, everybody was on social media. The money interests were so big that they would fight every law in court. So it’s really important to move fast and also understand that Character.AI is a small part of the problem. Because it’s not just these specialized websites like Character.AI. It’s ChatGPT — one of the most recent lawsuits involved ChatGPT. The AI bots in ChatGPT are becoming more human, so it’s important that any action targets these bots and the type of characteristics they have, and regulates how they behave. Just getting rid of Character.AI is not going to solve the problem.

Kids Shouldn’t Access Social Media Until They’re Old Enough to Drive, Book Says (Sept. 2, 2025)

Jean M. Twenge holds an unusual place among Ph.D. psychologists. For the past two decades, she has toggled between the obscurity of the academy and the glare of academic fame.

The author of two college textbooks and five books for non-academic readers, she is equally at home researching and writing about adolescent mental health, sleep disorders, digital technology, homework and narcissism. She was one of the first experts to warn that smartphones could hold negative consequences for our mental health. A decade after the advent of the iPhone, Twenge went viral in 2017 with an essay that asked, provocatively, “Have Smartphones Destroyed a Generation?”




A professor at San Diego State University, she has collaborated for years with the researcher and author Jonathan Haidt, whose 2024 book The Anxious Generation was a mega-bestseller that has helped build momentum for school cellphone bans in a growing number of states.

And she is one of the few experts in the education and mental health world to have appeared on HBO.

Cover of Jean M. Twenge’s new book, 10 Rules for Raising Kids in a High-Tech World 

Twenge’s 2017 book, iGen, looked at how modern teens are somehow both more connected than previous generations and less prepared for adulthood. In it, she theorized that depression rates among teens are rising because they spend more time online, less time with friends in person, and less time sleeping — a problematic combination.

The dilemmas Twenge identified in 2017 are only getting worse: By 2023, the typical American teen was spending nearly five hours a day using social media, recent research finds, with severe depression rates rising. In one study, girls who were heavy users of social media were three times as likely to be depressed as non-users.

Her new book, 10 Rules for Raising Kids in a High-Tech World, out Tuesday, offers practical guidelines for parents raising kids in the age of ubiquitous connectivity and sophisticated — some would say addictive — social media.

Twenge doesn’t shy away from challenging harried parents to do better. Among her suggestions: No one — parents included — should have electronic devices in the bedroom overnight. Likewise, she says, the first handheld device a kid should receive is a “basic phone” that allows calls, texts and not much else.

“It’s a really big myth out there that if kids are going to communicate, it has to be on social media,” she said. “That’s just not true.”

Ahead of its publication, Twenge spoke with The 74’s Greg Toppo about her rules, her work with Haidt and her belief that we need stiffer laws that keep young people off social media until they’re old enough to drive.

Their conversation has been edited for length and clarity.  

I wanted to start with a quote from your book. It’s a parent’s description of his 10-year-old after she got her first smartphone: “She suddenly wasn’t playing with her younger siblings as much. Novels were promptly cast aside. She wasn’t around to help with dinner anymore. She danced less, laughed less. She was quieter. Our home was quieter.” That’s so heartbreaking, but I’m guessing it’s not unusual.

I don’t think it is. Many, many parents describe how their kids are different after they give them a smartphone. And it’s especially heartbreaking when that’s a 10-year-old, but even when it’s a 16-year-old who might otherwise be ready. It’s very noticeable how they change after they get that phone in their pocket.

Were there any particular data points about smartphones and social media that persuaded you they were causing a mental health crisis?

It was a slow process for me, and it wasn’t an immediate conclusion when I first started to see these trends in adolescent mental health. It was first a process of ruling out obvious causes, like the economy, which wasn’t aligned at all, and any other big events that might have happened. I would trace it, really, to the big surveys that I work with on teens, where there was just this combination all at once of not just rising depression, but teens spending less time with each other in person and less time sleeping. And then realizing, “Well, wait: What might explain all of those things happening at the same time?”

And it seemed clear that a good amount of that answer is probably smartphones and social media, particularly after I found a Pew Research Center poll showing that smartphone ownership crossed 50% in the U.S. at the end of 2012. And that’s right around the same time all these changes were happening.

I want to dig into a few of your rules. No. 3: “No social media until age 16 or later.” That seems a lot tougher than what most families practice. Why 16? And what do you say to parents who worry about their kids’ social isolation and FOMO or Fear Of Missing Out?

I have not found that with my kids — that they’ve been socially isolated for not having social media. Most other parents I talked to who have put off social media have also not found that with their kids. Social media is just one mechanism for communicating. There’s so many others. Kids can call each other, they can text each other — they do a lot of texting. They can FaceTime each other, they can get together in person. Usually that ends up tilting toward texting, but it does not have to be social media. It’s a really big myth out there that if kids are going to communicate, it has to be on social media. That’s just not true.

And that leads to rule No. 4, where you advocate “basic phones” — your phrase — before smartphones. In a world where even school assignments need Internet access, is that practical for most families?

Yeah, because kids have laptops. And if the family can’t afford to buy them a laptop, almost all schools provide one. So they have Internet access on their laptop even if they don’t have it on their phone. And laptops have come so far down in price, too, that if you haven’t bought a laptop recently, or if you use Mac laptops like I do and my kids do now, you might not realize how inexpensive they’ve become. So that’s another big thing: Maybe 10 years ago, if a kid didn’t have Internet access on their phone, then they didn’t have Internet access at all. That’s just not true in the current landscape.

Although you do have problems with school laptops.

Oh, yes. I mean, this is a thing! They get Internet access on the laptop, whether it’s a school laptop or a personal one, and then that opens a whole other can of worms. Absolutely true. Laptops are the bane of my existence as a parent, particularly the school laptop, although they’ve gotten a little bit better, at least in my district. 

Actually, that was going to be my next question, this parental controls thing. It sounds like your district is being responsive.

Well, on that issue, they still don’t have a coherent phone policy during the school day. In the high school, it’s especially bad. That’s something I’m hoping will change. It is changing in a lot of schools around the country, thankfully. A lot more schools are doing “no phones during the school day, bell to bell,” which is what needs to happen.

A big message of the book is phone-free schools. And I know you’ve worked with Jonathan Haidt, who has pushed for schools to get rid of phones. A few critics have said that this is a simplistic solution to a complex problem, and that it’s not entirely clear that phones are actually causing the mental health issues that Haidt has become a best-seller writing about. How do you respond to that criticism?

There are a couple of things to unpack there. For one thing, even if you take mental health out of the equation, kids should still not have their phones at school for academic and focus reasons, for the reason of developing social skills by talking to their friends at lunch, for the reason that a bell-to-bell ban is actually easier to enforce than a classroom-by-classroom ban. There are so many reasons for it that don’t even include mental health. 

The second question is [about] the research on phones and social media and mental health: We’ve known for quite a while that teens who spend more time on social media are more likely to be depressed or unhappy. Almost every single study finds that. Where you sometimes get more debate is, “O.K., that’s correlation. What about causation?” But in the last 10 years, we’ve gotten a lot more studies, and of the studies that ask people to cut back on or give up social media for three weeks to a month or so, almost all show an improvement in well-being. And I don’t want to get too in the weeds here, but that’s actually a little bit shocking, because by definition in those experiments, you’re taking people who are at average use and having them cut back to low.

That’s actually not where we see the biggest effects in the correlational studies. The heaviest users are much more likely to be depressed than the average or light users. So, you know, you can’t ethically do an experiment that would really answer the exact question: You can’t take 12-year-olds, randomly assign them to spend eight hours a day on social media, and then see what happens. At least I hope not.

In the book, you talk about the 10 rules “creating a firewall for kids against anxiety, attention issues and constant insecurity.” I think most parents would get behind that. But let’s be honest, they’re users of these tools themselves. How do we craft rules around web dependence and social media without being hypocrites?

Parents have to be role models. Parents are also allowed a small amount of what I call “digital hypocrisy.” Because they’re adults, they have jobs, they may be responsible for elderly parents, etc. But that said, parents should think about their technology use as well. They should get their phones and electronic devices out of their bedroom at night. They should also consider doing things like not having social media on their phone. If they want to use Facebook or Instagram or Twitter, do it on your laptop. That’s what I do. I mean, I don’t have much social media to begin with. I have X, but I don’t have it on my phone, and that’s very much a purposeful decision. During family dinners, unless there’s a really specific reason for me to have my phone with me, it’s upstairs.

That seems to be an easy one: Phones away at dinner.

Well, you’d think so, but you’ve got to get the whole family on board, and sometimes husbands are not really into that.

I want to skip to Rule No. 8: “Give your kids real-world freedom,” which will probably be met with some resistance. I have a 4-year-old grandson, and when I read your recommendation to let 4-to-7-year-olds go find items a few aisles away in the grocery store, I shouted, “Hell no!”

Why? Why is there, do you think, a resistance to that idea?

I have nightmares about this child being snatched from me at Safeway. I guess I want you to just pull me back from the edge, if you would.

I mean, that is not just unlikely to happen — the chances of that are so infinitesimal it probably shouldn’t even factor into our decision making. There’s one stat in there, and I forget the exact number, but someone calculated that if you wanted your kid to get kidnapped, how many hours — it turned out to be years — would they have to be in your front yard for that to happen? It’s something like 100,000 years. 

O.K., well that helps.

And a four-year-old loves that stuff! They love being grown up. I mean, look, even if you don’t do the grocery store thing, make sure they learn how to tie their own shoes, that they know how to get dressed. I remember when my girls were that age, and it occasionally amazed me when I would be with other moms in various situations and their kids couldn’t dress themselves at that age, and that’s where it starts. 

At pretty much every age, the great thing is that giving kids independence makes it easier for parents. It is easier as a parent if your 4-year-old can dress themselves. It is easier if your teenager makes dinner once a week. It’s good for everybody.

A lot of people might see this freedom rule as somehow contradictory to some of the other rules, in which you talk about adults being “in control.” Can you parse that?

For sure. Jon has said this as well — and I completely agree: We have overprotected kids in the real world and underprotected them online, and these principles are just trying to bring those two into balance. When you’re talking about the real-world freedom thing, it’s not a matter of letting kids completely run wild and do whatever they want. We’re talking about giving kids some of the freedoms that parents themselves had when they were kids, and building independence in a way that is really good for kids and good for them as they grow up.

I can’t even remember who said this to me when I had young kids: “You’re not raising children, you’re raising adults.” And that’s just so true. That is your job as a parent. Giving kids some freedom and independence is a really, really key part of raising an adult.  

I wrote a whole book about learning games, and one of the powerful ideas that I took from that reporting is that many adults don’t realize how social video games have become. You acknowledge that, saying gaming is the primary way that some kids spend time with friends. But I gather that you see the risks as well. And I wonder if you could talk about that.

It really comes back to the principle of “Everything in moderation.” Many games are not as obviously toxic as social media. Games tend to be more in real time, more interactive. But is it a good idea for kids to be spending five or six hours a day gaming? Probably not. There have to be some limits.

You quote Sean Parker, Facebook’s founding president, admitting the company was “exploiting a vulnerability in human psychology” to keep users on the app. Given social media’s sophistication, are mere parental rules sufficient? I mean, don’t we need a bigger hammer, like legislation and policies?

Absolutely! Yes! Yes! It would be absolutely amazing for parents and for kids if we had laws that verified age for social media. I mean, ideally, that would be age verification to make sure they’re 16 or older, to raise the minimum age to 16. But even if we just enforced existing law with the minimum of 13, that would be progress, given the enormous numbers of 10-, 11- and 12-year-olds who are on social media, often without their parents’ permission — often explicitly against their parents’ wishes — and actually against the law [the Children’s Online Privacy Protection Act] that was passed in 1998.

What is the biggest obstacle to getting better regulation, or, to your point, to enforcing the existing regulations?

It’s interesting. The barrier is not the inability to verify age, or the inability to verify age without a government ID. There are so many companies that will verify age now. It can be done in many different ways. The biggest barrier is tech companies themselves. Any time a state passes a law about verifying age on social media or even pornography sites, the companies sue — every single time. They have sued to keep those laws from going into effect.

Are any emerging technologies that parents should be concerned about? Do your rules need updating for AI or virtual reality or whatever comes next?

AI chatbots are what a lot of parents are rightly worried about. And yes, you could certainly modify or add to the rules and say, “No AI chatbots until 16 or 18 — probably 18.” And of course, it depends on what we’re talking about. It is common for kids to use ChatGPT when they need to look up something for homework or even have it write their essays — that’s a whole other horrible discussion. But what I’m specifically referring to is the many chatbots out there right now that are supposed to be AI friends, or worse, AI girlfriends. There’s already been a tragic case of a child who died by suicide, apparently due to one of these AI girlfriends. It’s just really scary to think of kids having their first romantic relationship with an AI chatbot. It’s terrifying.

The good news is, if you follow that rule about your kids having basic phones, if you give them one of the phones that’s designed for kids, those phones do not allow AI relationship chatbots. They’re on the banned-apps list, just like social media and pornography and violent apps. Parents have such a tough job, and it’s nice that there are at least a few tools out there that can make their lives easier and keep their kids off of things like AI girlfriend and boyfriend chatbots.

In keeping with the theme of overwhelmed parents, I wonder: If I were to come to you as a parent and say, “Oh my God, Jean, 10 rules is a lot. If I could only do two or three, where would I start?” Is that even a smart thing to do? And if so, where would you start?

I would say, “No electronic devices in the bedroom overnight.” Start there, because the research is so solid on it, and it’s such a straightforward rule, and it works for everybody, of all ages. Your teenager can’t say, “Well, you do it differently,” or, “You get to be on social media.” No, actually, my phone is outside my bedroom when I sleep at night too. So that’s a great place to start. And then, just because they have so much utility, I would probably say the second rule, about basic phones, because even with all of the mess of the laptops, I’m just so happy and grateful that my kids did not have the Internet or social media in their pocket until they were older.

As a parent and a grandparent, I really appreciate you using your real life to inform a lot of these rules. In a way, it hardens them a bit, makes them more durable. Anything I haven’t asked you about that you feel needs to be in the mix?

Two things I’ll throw out there just in terms of pushbacks: With “No phones during the school day,” the pushback is often “What about school shootings?” And it’s actually less safe for students to have access to their phones during an active shooter situation. And I go through the reasons for that in that chapter. 

And then the real-world freedom piece: When you look at the things that I’m suggesting in terms of how to give your kids freedom, obviously letting them go off on their own in the real world is important, and you should do that too. But there are lots of things in that list of suggestions you can do without even leaving the house: teens making their own doctor and hairstylist appointments, for example, or middle-school kids, or even elementary school kids, cooking dinner for the family. Those are great experiences for kids to have without too much parental interference. 

You do have to — and I know this by experience — step back, especially with the cooking piece, and let them do it by themselves and learn how to make mistakes. It’s tempting to just be there when they’re doing that, but you learn quickly that if you leave them alone, they’ll figure it out. And then you can go do something else. Go and read that book you’ve been meaning to read for a while. Go for a walk. Watch TV. Have some relaxation time that you wouldn’t otherwise get. 

I wrote a piece a couple weeks ago on unschooling, this idea of pulling kids out of school and letting them find their own level and their own interests. This almost strikes me as unparenting.

It is — and I’m not a huge fan of unschooling, because it’s a rare kid it would actually work for — but it is. It’s the general idea that not being up in your kids’ business all the time is better for both parents and kids. It’s something we really have to consider more.

Safety or Censorship: Congress Rushes to Pass Broad Child Online Protection Laws
Wed, 08 May 2024

As Washington lawmakers scramble this week to finalize their last significant legislation before the fall presidential election — a must-pass bill to reauthorize the Federal Aviation Administration — they’ve tacked on more than a dozen unrelated amendments, including three online safety bills affecting students.

Taken together, the trio would create sweeping restrictions on children’s access to social media, impose new requirements on social media companies to ensure their products aren’t harmful to youth mental health and bolster educators’ digital surveillance obligations to ensure kids aren’t swiping through their favorite feeds in class. 

The three separate digital safety bills have bipartisan support, and lawmakers could greenlight them as part of the FAA reauthorization legislation, which faces a Friday deadline. If passed, the legislative package could end years of debate on these thorny questions and would mark the most consequential effort to regulate tech companies and children’s online safety in decades.


“Parents know there’s no good reason for a child to be doom-scrolling or binge-watching reels that glorify unhealthy lifestyles,” Sen. Ted Cruz, a Texas Republican who is co-sponsoring the Kids Off Social Media Act, said in a statement. “Young students should have their eyes on the board, not their phones.” 

The move comes as lawmakers across the political spectrum sound an alarm over concerns that teens’ addiction to their social media feeds — complete with algorithms designed to keep them hooked and coming back for more — has exacerbated mental health issues in young people. It follows congressional testimony from a former Facebook employee who accused the company of knowing that apps like Instagram inflamed body image issues and other negative triggers among youth but failing to act to mitigate the harm while upholding a “see no evil, hear no evil” culture.

The controversial and heavily debated bills saw new life in January after social media executives were grilled during a contentious congressional hearing and Meta CEO Mark Zuckerberg apologized to parents who said their children were damaged, and in some cases died, after the company’s algorithms fed them a barrage of pernicious content. 

But critics contend the provisions amount to heavy-handed and unconstitutional censorship that fails to confront the root cause of young people’s anguish — and in some cases could hurt them by limiting their access to educational materials, blocking information designed to help them deal with mental health issues or by subjecting them to greater online surveillance.

Meta CEO Mark Zuckerberg apologizes during a January Senate committee hearing to families who say their children suffered emotional anguish, and in some cases died, as a result of their social media use. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

The three amendments are:

  • The Kids Online Safety Act would require tech companies to “exercise reasonable care” to ensure their services don’t surface in children’s feeds material deemed harmful, including posts that promote suicide, eating disorders and sexual exploitation.

    First introduced in 2022, the legislation would also require tools that would give parents greater ability to monitor their children’s online activities and mandate tech companies enable their most restrictive privacy settings for their youngest users by default. 
  • The Children and Teens’ Online Privacy Protection Act, also known as COPPA 2.0, amends a 1998 law that requires tech companies receive parental consent before collecting data about children under 13 years old. COPPA 2.0 would extend existing requirements to children under 16, ban targeted advertising for children and require tech companies to delete data collected about children upon parental request. 
  • The Kids Off Social Media Act, introduced last week by Cruz and Hawaii Democratic Sen. Brian Schatz, would prohibit children under 13 years old from creating social media accounts and restrict tech companies from using algorithms to serve content to children under 17. It would also require schools that receive federal internet connectivity funding to block students’ access to social media sites on campus networks. 

The bill’s provisions have faced widespread pushback from digital rights and privacy advocates, including the nonprofit Electronic Frontier Foundation, which called it an unconstitutional infringement that “replaces parents’ choices about what their children can do online with a government-mandated prohibition.” 


On Tuesday, TikTok and its Chinese parent company filed a lawsuit challenging the new federal law that bans the popular social media app in the U.S. unless it sells the platform to an approved buyer, accusing the government of stifling free speech and unfairly singling it out based on unfounded accusations it poses a national security threat.

In March, Georgia joined a growing list of states — including Louisiana, Arkansas, Texas and Utah — in imposing new parental consent requirements for children to create social media accounts. The Georgia law also bans social media use on school devices and creates age verification requirements for porn websites.

Aliya Bhatia (Center for Democracy & Technology)

Aliya Bhatia, a policy analyst at the nonprofit Center for Democracy and Technology, said that each bill now included in the FAA reauthorization act has been the subject of debate and opposition. Including them in unrelated, must-pass legislation with a short deadline, she said, “undermines the active conversations that are happening” about the bills, which she said are “just not ready for prime time.”

The Kids Online Safety Act, which has bipartisan support, is endorsed by a host of organizations, including the American Psychological Association, Common Sense Media and the American Academy of Pediatrics, who argue the rules could protect youth from the corrosive effects of social media. 

At the same time, the legislation, which has differing House and Senate versions, has also received pushback from civil rights groups and those representing LGBTQ+ students. The groups argue the bill amounts to government censorship with a likely disparate impact on LGBTQ+ youth and students of color. The Heritage Foundation, a conservative think tank, has endorsed the legislation as a way to restrict youth access to LGBTQ+ content, arguing that “keeping trans content away from children is protecting kids.” 

Privacy advocates have warned the legislation could result in age-verification requirements across the internet that could require online users of all ages to provide identifying information to web platforms. 

Meanwhile, social media’s effects on youth mental well-being remain the subject of research and debate. In a report last year, the American Psychological Association noted that while social media use “is not inherently beneficial or harmful to young people,” the platforms should not surface to their young users content that encourages them to engage in risky behaviors or is discriminatory. 

In an advisory, Surgeon General Vivek Murthy noted that social media use is nearly universal among young people, with more than a third of teens saying they use the apps “almost constantly.” While its impact on youth mental health isn’t fully understood, Murthy said, emerging research suggests that its use can be harmful — perpetuating a national youth mental health crisis “that we must urgently address.” 

The Kids Off Social Media Act, which would prohibit youth access to sites like Instagram, builds on the Children’s Internet Protection Act, an existing federal law that requires schools and libraries to monitor and filter youth internet use as a condition of receiving federal E-Rate internet connectivity funding. In response, schools nationwide have adopted digital surveillance tools that use algorithms to sift through billions of student communications to identify problematic online behaviors.

Meanwhile, a recent report found that web filters regularly used in schools do more than keep kids from goofing off in class. They also routinely limit students’ access to homework materials, educationally appropriate information about sexual and reproductive health and resources designed to prevent youth suicides. 

For years, privacy advocates have called on the Federal Communications Commission to clarify how the rules apply to the modern internet and have argued that schools’ tech-driven monitoring efforts go far beyond their original intent. 

When the law went into effect in 2001, monitoring “quite literally meant looking over a kid’s shoulder as they used the computer,” said Kristin Woelfel, a policy counsel at the Center for Democracy and Technology, but in 2024 student monitoring has become “a very specific term that now means really pervasive and technical surveillance.” 

In a survey of students, parents and teachers last year, the nonprofit found a majority supported digital activity monitoring in schools. Yet nearly three-quarters of youth said that filtering and blocking technology made it more difficult to complete some homework, a challenge reported more often among LGBTQ+ students, and that the tools routinely led to disciplinary actions and police involvement. 

“They don’t work as people think they do,” she said. “That, coupled with data that shows it’s actually detrimental to students, indicates even more that this is not the right path forward.” 

In a letter to lawmakers last week, a coalition of education nonprofits including the American Library Association and the Consortium for School Networking expressed concern about attaching social media limitations to E-Rate funding, which schools rely on to facilitate learning. 

“Schools and libraries will face delays or denials of E-rate funding due to allegations of non-compliance,” the groups wrote, arguing that it would give federal authorities control over social media policies that should be left to local officials. “The bill’s provisions seem to suggest that technology-driven learning models are always harmful, even when carefully crafted to promote educational purposes. In fact, there are several social media uses that can be beneficial for education and learning.”

Sen. Ted Cruz, a Republican of Texas, questions Meta CEO Mark Zuckerberg during a January Senate committee hearing about child sexual exploitation on the internet. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

In a statement announcing the legislation, Schatz offered the opposite perspective.

“There is no good reason for a nine-year-old to be on Instagram or TikTok,” he said. “There just isn’t. The growing evidence is clear: social media is making kids more depressed, more anxious, and more suicidal.”

In justifying the legislation, Schatz cites the work of psychologist and author Jonathan Haidt, who argues in his new book that young people — and girls, in particular — face a “tidal wave” of anguish that can be traced back to the rise of smartphones. 

Haidt’s characterization of tech’s role in youth well-being has drawn criticism, including from developmental psychologist Candice Odgers, who argued in a review that the claim “that digital technologies are rewiring our children’s brains and causing an epidemic of mental illness is not supported by science.” 

Among the evidence is a study that tracked the well-being of nearly 1 million people, in age groups 13 to 34 and 35 and over, as social media was being adopted in 72 countries and found “no evidence suggesting that the global penetration of social media is associated with widespread psychological harm.”

Lawmakers Duel With Tech Execs on Social Media Harms to Youth Mental Health
Wed, 31 Jan 2024

During a hostile Senate hearing Wednesday that sometimes devolved into bickering, lawmakers from across the political spectrum accused social media companies of failing to protect young people online and pushed rules that would hold Big Tech accountable for youth suicides and child sexual exploitation. 

The Senate Judiciary Committee hearing in Washington, D.C., was the latest act in a bipartisan effort to bolster federal regulations on social media platforms like Instagram and TikTok amid a growing chorus of parents and adolescent mental health experts warning the services have harmed youth well-being and, in some cases, pushed them to suicide. 

In an unprecedented moment, Meta founder and CEO Mark Zuckerberg, at the urging of Missouri Republican Sen. Josh Hawley, stood up and turned around to face the audience, apologizing to the parents in attendance who said their children were damaged — and in some cases, died — because of his company’s algorithms. 


“I’m sorry for everything you’ve all gone through,” said Zuckerberg, whose company owns Facebook and Instagram. “It’s terrible. No one should have to go through the things that your families have suffered.”

Senators argued the companies — and tech executives themselves — should be held legally responsible for instances of abuse and exploitation under tougher regulations that would limit children’s access to social media platforms and restrict their exposure to harmful content.

“Your platforms really suck at policing themselves,” Sen. Sheldon Whitehouse, a Rhode Island Democrat, told Zuckerberg and the CEOs of X, TikTok, Discord and Snap, who were summoned to testify. Section 230 of the Communications Decency Act, which allows social media platforms to moderate content as they see fit and generally provides immunity from liability for user-generated posts, has routinely shielded tech companies from accountability. As youth harms persist, he said those legal protections are “a very significant part of that problem.” 

Whitehouse pointed to a lawsuit against X, formerly Twitter, that was filed by two men who claimed a sex trafficker manipulated them into sharing sexually explicit videos of themselves over Snapchat when they were just 13 years old. Links to the videos appeared on Twitter years later, but the company allegedly refused to take action until after it was contacted by a Department of Homeland Security agent and the posts had generated more than 160,000 views. The case was dismissed by the Ninth Circuit, which cited Section 230. 

“That’s a pretty foul set of facts,” Whitehouse said. “There is nothing about that set of facts that tells me Section 230 performed any public service in that regard.”

In an opening statement, Democratic committee chair Sen. Dick Durbin of Illinois offered a chilling description of the harms inflicted on young people by each of the social media platforms represented at the hearing. In addition to Zuckerberg, executives who testified were X CEO Linda Yaccarino, TikTok CEO Shou Chew, Snap co-founder and CEO Evan Spiegel and Discord CEO Jason Citron.

“Discord has been used to groom, abduct and abuse children,” Durbin said. “Meta’s Instagram helped connect and promote a network of pedophiles. Snapchat’s disappearing messages have been co-opted by criminals who financially extort young victims. TikTok has become a, quote, ‘platform of choice’ for predators to access, engage and groom children for abuse. And the prevalence of [child sexual abuse material] on X has grown as the company has gutted its trust and safety workforce.” 

Citron testified that Discord has “a zero tolerance policy” for content that features sexual exploitation and that it uses filters to scan and block such materials from its service. 

“Just like all technology and tools, there are people who exploit and abuse our platforms for immoral and illegal purposes,” Citron said. “All of us here on the panel today, and throughout the tech industry, have a solemn and urgent responsibility to ensure that everyone who uses our platforms is protected from these criminals both online and off.” 

Lawmakers have introduced a slate of regulatory bills that have gained bipartisan traction but have failed to become law. Among them is the Kids Online Safety Act, which would require social media companies and other online services to take “reasonable measures” to protect children from cyberbullying, sexual exploitation and materials that promote self-harm. It would also mandate strict privacy settings when teens use the online services. Other proposals include a bill that would require the platforms to report suspected drug activity to the police — some parents said their children overdosed and died after buying drugs on the platforms — and a bill that would hold them accountable for hosting child sexual abuse materials. 

In their testimonies, each of the tech executives said they have taken steps to protect children who use their services, including features that restrict certain types of content, limit screen time and curtail the people they’re allowed to communicate with. But they also sought to distance their services from harms in a bid to stave off regulations. 

“With so much of our lives spent on mobile devices and social media, it’s important to look into the effects on teen mental health and well-being,” Zuckerberg said. “I take this very seriously. Mental health is a complex issue, and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes.” 

Zuckerberg pointed to a report by the National Academies of Sciences, Engineering and Medicine, which concluded there is a lack of evidence to confirm that social media causes changes in adolescent well-being at the population level and that the services could carry both benefits and harms for young people. While social media websites can expose children to online harassment and fringe ideas, researchers noted, the services can be used by young people to foster community. 

In October, 42 state attorneys general sued Meta, alleging that the social media giant knowingly and purposely designed tools to addict children to its services. U.S. Surgeon General Vivek Murthy has warned that social media sites pose a “profound risk of harm” to youth mental health, stating that the tools should come with warning labels. Among evidence of the harms is the company’s own internal research, which found that Instagram led to body-image issues among teenage girls and that many of its young users blamed the platform for increases in anxiety and depression. 

Republican lawmakers devoted a significant amount of time during the hearing to criticizing TikTok for its ties to the Chinese government, calling out the app for collecting data about U.S. citizens, including in an effort to surveil American journalists. The Justice Department is reportedly investigating allegations that ByteDance, the Chinese company that owns TikTok, used the app to surveil several American journalists who report on the tech industry. 

In response, Chew said the company launched an initiative — dubbed “Project Texas” — to prevent its Chinese employees from accessing personal data about U.S. citizens. But some employees claim the company has fallen short of that goal. 

YouTube and TikTok are by far the platforms where teens spend the most hours per day, according to a 2023 Gallup survey, although Neal Mohan, the CEO of Google-owned YouTube, was not called in to testify.

Mainstream social media platforms have also been exploited for domestic online extremism. Earlier this month, for example, a teenager accused of carrying out a mass shooting at his Iowa high school reportedly maintained an active presence on Discord and, shortly before the rampage, commented in a channel dedicated to such attacks that he was “gearing up” for the mayhem. Just minutes before the shooting, the suspect appeared to capture a video inside a school bathroom and uploaded it to TikTok. 

Josh Golin, the executive director of Fairplay, a nonprofit devoted to bolstering online child protections, blasted the tech executives’ testimony for being little more than “evasions and deflections.” 

“If Congress really cares about the families who packed the hearing today holding pictures of their children lost to social media harms, they will move the Kids Online Safety Act,” Golin said in a statement. “Pointed questions and sound bites won’t save lives, but KOSA will.” 

The safety act, known as KOSA, has faced pushback on First Amendment grounds from civil rights advocates, who argue the proposal could be used to censor certain content. Sen. Marsha Blackburn, a Republican from Tennessee and KOSA co-author, said last fall the rules are important to protect “minor children from the transgender in this culture” and cited the legislation as a way to shield children from “being indoctrinated” online. The Heritage Foundation, a conservative think tank, endorsed the legislation, arguing that “keeping trans content away from children is protecting kids.” 

Snap’s Evan Spiegel and X’s Linda Yaccarino both agreed to support the Kids Online Safety Act.

Aliya Bhatia, a policy analyst with the nonprofit Center for Democracy and Technology, said that although lawmakers made clear their intention to act, their directives could end up doing more harm than good. She said the platforms serve as “peer-to-peer learning and community networks” where young people can access information about reproductive health and other important topics that they might not feel comfortable receiving from adults in their lives. 

“It’s clear that this is a really tricky issue, it’s really difficult for the government and companies to decide what is harmful for young people,” Bhatia said. “What one young person finds helpful online, another might find harmful.”

South Carolina’s Sen. Lindsey Graham, the committee’s ranking Republican, said that social media companies can’t be trusted to keep kids safe online and that lawmakers have run out of patience.

“If you’re waiting on these guys to solve the problem,” he said, “we’re going to die waiting.” 

Experts on Kids & Social Media Weigh the Pros and Cons of ‘Growing Up in Public’
Wed, 17 Jan 2024

Parents are more concerned than ever about their kids’ social media habits, worried about everything from oversharing and cyberbullying to anxiety, depression, sleep and study time. 

Recent surveys of young people show that parents’ concerns may be justified: More than half of U.S. teens spend at least four hours a day on these apps, and girls spend an average of nearly an hour more on them per day than boys. Many parents are searching for support. 

Perhaps more than anyone, Carla Engelbrecht and Devorah Heitner are qualified to offer it. They’ve spent years puzzling over how families can help kids understand media from the inside out, and how schools both help and hurt kids’ ability to cope.


Engelbrecht is a longtime children’s media developer. A veteran of Sesame Workshop and PBS Kids Interactive, she spent seven years at Netflix, most recently as its director of product innovation. Engelbrecht was behind the network’s Black Mirror “Bandersnatch” episode in 2018, which allowed viewers to choose among five possible endings. 

Carla Engelbrecht (second from right) appears onstage with colleagues during a Netflix event on Black Mirror’s “Bandersnatch” episode in 2019. Engelbrecht, who was director of product innovation for the streaming service, is now testing a social media platform for children under 13. (Charley Gallay/Getty Images for Netflix)

Engelbrecht is now in public beta testing for Betweened, a new social media platform for kids under 13. She calls it a “course correction” for young people’s social media, aiming to teach them to be more mindful, thoughtful and responsible online.

Heitner is an author who specializes in helping parents and educators understand how digital technology, especially social media and interactive gaming, shapes kids’ realities. Her books include 2016’s Screenwise and her new work, Growing Up in Public. 

Speaking to either one would be enlightening, but we decided to facilitate a broader conversation by inviting them to come together (virtually) to share insights and offer a bit of advice for both parents and schools. 

Their conversation with The 74’s Greg Toppo was wide-ranging, covering the effects of the pandemic, the pressures kids feel online and the women’s experiences communicating with their own children.

Devorah Heitner spoke in 2017 at the Roads to Respect Conference in Los Angeles. Heitner’s new book explores the impact of modern technology on childhood, including the effects of increased adult supervision of kids through tracking devices. (Joshua Blanchard/Getty Images for Rape Treatment Center)

The solutions they offer aren’t simple. In Heitner’s words, parents seeking to learn more about their kids’ media usage should pull back their surveillance and “lead with curiosity.” 

The conversation has been edited for length and clarity.

The 74: Devorah, tell us a little bit about your new book.

Devorah Heitner: I wrote Growing Up in Public because I was speaking for years about Screenwise in schools and all these other environments, and people said, “O.K., I get that we want to think about quality over quantity with screen time. But we also want to understand what kids’ subjective experience is and not just focus on how many minutes are good or bad.”

People lie about that anyway. People are sort of oblivious to their own screen use sometimes and get over-focused on their kids’. A lot of adults are recognizing: If I could have had a Tumblr or a Twitter or Instagram as a kid, I could have really done a lot of damage to my prospects and opportunities by so openly sharing.

What are we doing to our reputations?

As I started digging into that question, I recognized that parents are really part of the surveillance culture with kids. So are schools, with grading and tracking apps [which keep track of kids’ location, among other functions]. I really started understanding in a fuller way how kids are scrutinized. Kids are growing up very searchable, very public, and some of that is awesome. They have a platform, they can be activists. Some of it is problematic. 

The title of your book, Growing Up in Public, says so much about kids’ lives these days. I saw this term the other day: not FOMO, “Fear of Missing Out,” but FOMU, “Fear of Messing Up.” Are those competing interests for young people?

Heitner: Well, there’s definitely a fear of messing up and especially being called out. There’s a lot of “gotcha” culture going on, and kids documenting each other’s screw-ups. And as much as you patiently explain, as I have to my own 14-year-old, the concept of mutually assured destruction, if you’re on a group text with somebody for long enough, both of you have probably said a few things you don’t want repeated outside of that context.

I think it’s modeled by adults, but this kind of “gotcha” culture is very insidious and terrifying. And it should be terrifying. 

Carla, tell us a little bit about yourself.

Carla Engelbrecht: I’m a longtime product developer and researcher in the kids’ space. I’ve spent a lot of time making products for kids. I’ve seen for years kids wanting access to Twitter and Facebook and MySpace, all through the generations of social media. And they always want what is not made for them. They’re aspirational.

Kids are just plopped into this. And just as you wouldn’t give a new driver the keys to the car and just say, “Go!” — you need to teach them how to drive — there’s the same concept for me with media use. We need to teach our kids. Parents don’t know what they’re doing, because none of us have really been through this before, and they abstain. They need support in learning how to do this. Where Devorah talks about things from that guidance perspective, I’m looking at: How can we build a product for kids that helps them learn? 

It seems to me like Betweened is a site for parents as much as anybody. 

Engelbrecht: There’s definitely two audiences here. There’s absolutely a path where I could build a product for kids and launch them onto it. But I wouldn’t be addressing all the pain points.

Kids want short-form content. They want to create. They want to connect with their peers. In order to successfully set kids up to do that, parents need tools, too. And so it is really a product for both kids and parents.

Carla mentioned all these different apps coming down the road. Devorah, I’m thinking about you saying to someone recently how you’ve been working on this book for five years. A lot has changed in five years. We didn’t have TikTok five years ago. 

Heitner: Screenwise came out in the fall of 2016, which was a memorable time for many reasons: a lot of social forces happening in our world with Trump’s election. 

And then you have the pandemic in 2020. That’s around the time I had sold the book and was trying to interview people. Suddenly, I’m not in schools anymore. I’m on Zoom with kids, which is a whole research problem: How do you get a wider range of kids, not just the super-compliant kids who show up to a Zoom? And the pandemic was an accelerant to a lot of things happening already with kids in tech.

“Parents are really part of the surveillance culture with kids. So are schools.”

Devorah Heitner

It was certainly not the beginning of kids being on apps they were too young for under COPPA [the federal Children’s Online Privacy Protection Act gives parents control over what information websites can collect from their kids]. But it accelerated, and there was kind of a push toward things like Kids Messenger [on Facebook] and other things that I even experimented with at the time. 

The pandemic started when my son was 10. We were like, “Oh, what can we do to help him communicate with friends?” We experimented with Messenger. It was a fail for us, but I also talked to the people at a couple of mobile phone companies marketed for children. There are people, in different ways, trying to come up with solutions because they have understood that both the adult apps and the adult devices, like a smartphone that does all the things, might not be the ideal thing to give a 10-year-old. 

What’s changed since 2016 is there used to be more worry about one-to-one computing in schools. Now, every school pretty much is one-to-one. It’s really the outlier schools that don’t have tech or aren’t giving kids individual tech. Even as late as 2015, 2016, I was helping schools negotiate that with parents. And parents were like, “I don’t know. I’m not sure about screen time. I don’t know if I want my kid getting a Chromebook.”

Try to find a school now that doesn’t give kids iPads or Chromebooks or something. That’s probably one of the bigger differences. And then just the explosion in server-based gaming like Roblox and Minecraft and the ways kids interact in those digital communities. You see a lot of very complicated, weird ideas among adults who care about children. Like, “I’ll wait until eighth grade to give a kid a phone. Meanwhile, my third-grader plays Roblox on a server with strangers.” 

Engelbrecht: Or has access to text messaging through their iPad.

Heitner: Exactly. And they’re very smugly waiting till eighth grade and I’m like, “For what? For your kid to make voice calls?” That’s the one thing they don’t want to do.

Carla, you come from a game design background. People have lots of terrible takes about video games, which I’m sure you’re used to. How has that background informed what you’re doing and what Betweened looks like?

Engelbrecht: A lot of people come to video games and they’re just like, “They’re evil,” or “They’re awful,” or “They’re violent.” And you can say the same thing about television. You can also say the same thing if you only eat broccoli. Anything in excess is not good for you — like running a marathon every day. I take a very pragmatic approach to most things we can actually find good in.

When I look at video games, I can’t classify them as evil. I instead look for the good things. And it’s the same with social media. Social media as part of a balanced media diet gives parents a lot of opportunities to connect, gives kids a lot of opportunity to express creativity and develop skills. 

“There wasn’t social media when I was in college. A bad decision in college couldn’t chase me through my entire life. In that sense, there are risks that feel much larger.”

Carla Engelbrecht

I’ll give you an example on the games side of things: Years ago, I did a South by Southwest talk called “What Left 4 Dead Can Teach Us About Parenting.” Left 4 Dead is not a game that kids should ever play. It’s a violent, first-person zombie apocalyptic shooter. It’s also one of the most beautifully designed cooperative games ever. I’m terrible with thumb sticks on video game controllers. I can’t walk in a straight line in a video game. I’m not great at the actual zombie-killing side of things. But I’m really good at running around and picking up health packs and checking in on people who have been damaged by zombies.

So there are different roles that people can play. I can still participate in the game, even though the primary way of playing Left 4 Dead is not what works for me. 

Also, if I’m playing with people, it fosters communication. I have to talk to people and someone needs to say, “Hey, I need help,” and I can come over. That’s what I’m looking for in games and social media: What are those underlying skills that, with a thoughtful perspective, you can leverage for good?

I wanted to switch gears a little bit and talk about something you mentioned earlier, Devorah: casual surveillance. I think about the stories we hear about parents not even just surveilling their kids — tracking their phones or their cars — but just keeping up in a way that we never even dreamed of. I wonder: Where did this come from? And how do you think a site like Betweened is going to help? 

Engelbrecht: I wish I knew exactly where it came from, but it certainly seems it’s symptomatic of the same thing: Everything has just kind of crept up on us. It’s like, as phones started to be introduced, we just thought, “Oh, well, I need to charge my phone, so I’ll charge it next to my bed.” And then the next thing you know, you’re checking it first thing when you wake up. It’s this slippery slope without the mindfulness of what it’s doing. Something has to happen to stop you, to make you take a step back and think, “How far have I gone? What boundaries have I crossed or what new boundary do I need to establish?” And to Devorah’s earlier point, the pandemic accelerated a lot of this.

Heitner: Part of it is we do it because we can. Even in relationships. I’ve known my husband since before we each had cell phones, but we didn’t use to check in as often because we didn’t have cell phones. It had to really rise to the level of an emergency before I would call him at work.

“As much as you patiently explain, as I have to my own 14-year-old, the concept of mutually assured destruction, if you’re on a group text with somebody for long enough, both of you have probably said a few things you don’t want repeated.”

Devorah Heitner

Remember the days of 9-to-5 office jobs? He left in the morning and was at his job. I was a grad student then and I would go up to Northwestern and not even really have any reachability by phone. Now we have phones, and the expectation is pretty much down-to-the-minute: If I’m 11 minutes late, I’ll probably text and say, “I’m 11 minutes late.” There’s just so much expectation for contact and communication and knowing where other people are. We don’t use location surveillance for that, but a lot of families do, and a lot of people have watches and will check into each other’s location on watches.

Because it’s there, people do it. And then there’s also just tremendous worry right now about kids. Given that we as a society think it’s a good idea for everyone to have assault weapons, parents are a little nervous. That anxiety creeps into everything.

My older daughter is 31, and I remember getting her first cell phone when she was 12 or 13. I remember the intense peer pressure she felt to have a phone. And I really didn’t like it at all. But I kind of justified it by saying to myself, “This is going to keep her safe.” And I remember thinking to myself, “You’re so full of shit. You’re just really trying to smooth things over.” And I guess I wonder: As parents, do we have an overextended sense of peril about our kids these days?

Heitner: There’s a sense of peril. Also, the Internet and online news and targeted algorithms just fuel that worry and outrage. It’s a bit of a vicious cycle.

Engelbrecht: In some ways, it’s almost like there are more risks that could stick with you. There wasn’t social media when I was in college. A bad decision in college couldn’t chase me through my entire life. In that sense, there are risks that feel much larger.

I think about my daughter and I don’t want something to chase her for her entire life. That part of it feels very real. And then it feels out of control. I don’t have the tools or know exactly how I can best help her except for having hard conversations and trying to put some bumpers around her. But there’s not a lot of tools to put the bumpers around her.

Devorah, one of the things you have said is that the kind of surveillance a lot of parents are undertaking is really undermining the trust their kids feel, and backfiring because kids won’t open up to them when they really need to. Can you talk a little bit more about that?

Heitner: You just see kids really getting focused on going deeper underground. If their parents are like, “I’m going to get Bark and read every single thing they text,” then you see some kids who are like, “O.K., I need to go deeper underground, I need a VPN or to only text on Snapchat, or I need to do something where I can be more evasive.” And that concerns me, because then there’s no way to make use of the parent when the parent might be useful.

Engelbrecht: I think about how to create space to allow the kid to have a second chance at telling me the truth. For example, if there’s an empty bag of gummies and the kid is the only one who could have eaten it but says they didn’t, how can I create space to talk about making mistakes versus lying or intentionally hiding the truth? Saying, “I’m going to ask what happened to the gummies again, but first I want you to take a moment to think about your answer — it’s OK to change your answer, because I want to understand the truth. We all make mistakes and we can talk about it. But intentionally hiding the truth has consequences.”

If I later find out that the child lied, then there’s consequences. The hope is that eventually, a parent can say, “If you end up at a party where there’s alcohol, don’t drive home. Call me for a ride home. If you try to hide that there was alcohol and make poor decisions, then there’s additional consequences.”

“I don’t want to be in the place where I’m policing her homework. Now that she’s in seventh grade, it’s time for her to be learning those skills before there’s the consequences of missing your homework in high school or college.”

Carla Engelbrecht

It’s important to be able to say, “I made a mistake” and talk about what to do from there. Hopefully, that provides an alternative to the arms race of increasingly sneaky strategies that Devorah described.

Heitner: That makes a lot of sense. I was just going to say: The surveillance — schools just push it really hard. Every time I go to a school, they’re like, “Are you logged into ?” or “Are you logged into ?” They’re just really pushing it so hard.

Are schools culpable in this? Sounds like you’d say, “Yes.” I don’t know if you’d call it surveillance, though. One of the functions of schools is to keep track of things, right?

Heitner: But what about the location tracking? My kid has to scan a QR code to get into the cafeteria. I skipped lunch every day of high school and ate with my drama club friends in the theater. Was that so bad? They have 3,500 kids QR-coding themselves into study hall. It’s pretty locked down. It’s pretty Big Brother, or Little Brother, if you read Cory Doctorow. 

Engelbrecht: Homework tracking means having full visibility of my daughter when part of what she needs to learn is the executive function skills to actually be able to plan and follow through and do her homework. I don’t want to be in the place where I’m policing her homework. Now that she’s in seventh grade, it’s time for her to be learning those skills before there’s the consequences of missing your homework in high school or college.

So to me, it’s kind of that same thing: The information is there. Should it be provided? How do you use it? And, for me it’s: How do we better equip administrators, teachers or parents to stop and think about how to leverage this information? So maybe a kid who’s consistently missing their homework, yes, the parents should have more visibility as part of a support program to get the kid back on track and help them learn the skills. But to Devorah’s point, it doesn’t mean everyone needs to be badging into lunch.

Devorah, your message to parents is: There are all these things happening. There are all these things you have to keep track of. There are lots and lots of risks to kids being on social media, especially teenagers. But you shouldn’t panic. And I wanted to just throw this out to both of you: Instead of panicking, what should parents do? 

Heitner: Carla, you’re talking about creating a new community space for kids that’s more of a learning space, and that’s one alternative. Another alternative, in addition to, or potentially instead of, for parents who don’t have access to that, is just leaning into one or two spaces they really want to mentor their kids in.

Maybe their kid’s really involved in Minecraft. And if they want to join Discord [a free voice, chat, gaming and communications app], the parents are waiting and saying, “O.K. You can join your library Discord or your school Minecraft club on Discord, but not general Discord.”

Two 9-year-olds play the open world computer game Minecraft. Parenting expert Devorah Heitner urges parents to know more about what their kids are doing online without resorting to surveillance. (Getty Images)

Parents will tell me their kids are playing or they’re on YouTube. But I’m like, “What channels?” It’s just like if somebody says, “I’m watching TV.” Well, what are you watching? Because that really is a big differentiator in terms of the experience.

Engelbrecht: It goes back to your “Fear of Messing Up.” I think so much about how it’s important for parents to wade in and get involved with their kids. This has been the advice for decades, whatever the newfangled thing was. I was just doing some writing about encouraging parents to actually dance with their kids. It’s an opportunity to bond. It actually requires some planning and practice. It’s physical activity. I assume most parents are like me, that they’re not a great dancer and it’s uncomfortable and you don’t want to mess up.

But modeling that I’ll do something that’s out of my comfort zone and connect with you over something that I know you enjoy can be very simple. It doesn’t mean a parent has to suddenly learn all aspects of Roblox or Discord, because they can be intimidating. But just find an entry point and connect with the child and participate with them. It just has so many benefits. It’s true whether they’re into Tonka trucks or Roblox. Parenting means, “Get in there with your kid.”

Devorah, you use the phrase, “Lead with curiosity.”

Engelbrecht: Oh, I love that.

Heitner: You want to be curious and have your kid share it with you — their expertise and experience, as well as their discernment. What do they like or not like about this app? How would they change it if they could? Staying curious is an alternative to spying — being curious and asking kids to be curious even about their own experience. Do I actually feel less stressed when I scroll this app? That’s maybe a lot of mindfulness to expect of kids, who have a lot going on and a lot coming at them. But it’s important for all of us to be curious about how our experience is going.

Engelbrecht: That’s one of the ways I’ve been thinking about it from a product perspective: just how to help build in some scaffolds for mindfulness — things like when you start an app, actually having a timer that’s like, “How long do you want to spend on it right now?”

I set a timer for myself when I use TikTok because I spend a very long time on it. So being able to put that in there as a scaffold, to start being mindful and thoughtful about it. We’re posting content, but we’re actually not posting endless scrolls where you could spend all day.

I don’t want to prioritize the traditional tech metric of “time on task.” To me, success is like, “You can come and use Betweened for 20 minutes and then know you can come back another day and there’s lots of interesting stuff for you.” But it’s not all-consuming, must-do-this-all-the-time. And that’s a different perspective on tech products. It’s not how most products are developed.

Tweeting or Governing? Supreme Court Tries to Draw Lines in School Board Case /article/tweeting-or-governing-supreme-court-tries-to-draw-lines-in-school-board-case/ Tue, 31 Oct 2023 21:14:46 +0000 /?post_type=article&p=717121

In a case that considers the interplay of government and social media, the Supreme Court suggested Tuesday that public officials, like school board members, who carry out government business on Facebook and X don’t have a right to block their critics.

But some justices said the public deserves to know when the official is using their account as a private citizen.

“What makes these cases hard is that there are First Amendment interests all over the place,” said Justice Elena Kagan. 

In the lawsuit, Christopher and Kimberly Garnier, a California couple, said two Poway Unified School District board members violated their free speech rights when the members blocked them on Facebook and Twitter, now X. Even if the accounts were personal, the parents argued, the members used them to discuss official school business.




“What you have is both of the petitioners using ‘we’ and ‘our’ when they talk about what the [school] board is doing,” said Pamela Karlan, who represents Christopher and Kimberly Garnier, parents of three children in the San Diego-area district. “Anybody who looks at that is going to think this is an official website. It looks like an official website. It performs all the functions of an official website.”

The board members insist that as private citizens, they had a right to restrict content. They objected to the Garniers repeatedly posting the same comments, and argued that the couple’s lengthy responses alleging racial discrimination and financial mismanagement were distracting and made it difficult for others to engage online. 

Their attorney compared the board members’ social media accounts to personal property.

“The state itself did not control or even facilitate their operation of the pages,” said Hashim Mooppan. He added that his clients, Michelle O’Connor-Ratcliff, a current board member, and T.J. Zane, who left the board last year, “wielded no greater rights or privileges than any other private citizen denying access to their own property.” 

Despite concerned parents and community activists packing school board meetings in recent years, the majority of public comment on schools, and on government policy in general, takes place online. That’s why the court’s decision will have implications far beyond education. The court on Tuesday also heard a similar case from Michigan that essentially asks the same question: When does a public official’s social media activity amount to “state action?” The cases are among five the court will hear this term that focus on the role free speech plays in the digital sphere.

“I don’t think it’s immediately apparent which way they’ll go,” said Kristin Lindgren, deputy general counsel for the California School Boards Association, which submitted a brief in support of the board members. 

Lindgren, who listened to the three hours of oral arguments Tuesday, said the three liberal justices appeared more sympathetic to the public’s right to know if their representative is acting in an official capacity, while the conservative majority focused on the board members’ freedom to discuss district issues as private citizens. “I don’t think the court wants to remove a public official’s private First Amendment rights to speak off the cuff.”

Regardless of the court’s ultimate opinion, she said it’s clear that both board members and the public need guidance on the issue.

Appearance matters, the U.S. Court of Appeals for the 9th Circuit said when it ruled in favor of the Garniers. The opinion said the board members “clothed their pages in the authority of their offices,” and that First Amendment protections “apply no less” to the internet than they do to “the bulletin boards or town halls of the corporeal world.”

Justice Brett Kavanaugh, one of the court’s conservatives, said it may come down to whether constituents can get their information elsewhere. 

“A lot of this will depend on whether it’s reposting or exclusive posting,” he said. “That’s the kind of practical information that people are going to need.”

Justice Brett Kavanaugh said government employees need “practical information” on when their private social media account is used in an official capacity. (Tom Williams/Getty Images)

The disclaimer issue

Justices devoted much of their time to the question of whether a public official must inform constituents when they’re speaking privately or in an official capacity. 

“Government officials can operate in their personal capacity and in their official capacity,” Justice Ketanji Brown Jackson said, agreeing with Mooppan, the members’ attorney. But she added, “Why should they get to choose whether or not they’re doing one or the other without making a clear disclaimer? How do we know which you have chosen?”

Karlan noted that the Poway district even requires board members to “identify personal viewpoints as such and not as the viewpoint of the board.” But O’Connor-Ratcliff, she said, didn’t do that and predominantly used her Facebook page to communicate about school activities such as visiting classrooms during instructional time. “The only reason she has the power to do that is because of her official capacity.”

Mooppan countered that requiring officials to post such disclaimers is too heavy a burden and would have a chilling effect.

“Some of those people aren’t going to do it, and they’re gonna lose their First Amendment rights,” he said. “That’s the exact opposite of how the First Amendment normally works.”

The court’s opinion is likely to hinge on the extent of a public official’s authority, said Katie Fallow, senior counsel at the Knight First Amendment Institute at Columbia University. For example, individual school board members don’t speak for the entire board.

But the second case, Lindke v. Freed, focuses on a city manager, who has more power to act individually. In that case, the Sixth Circuit ruled that the public official was acting completely on his own. 

Fallow predicted the Supreme Court is unlikely to adopt the Sixth Circuit’s “very narrow” view.

“The court seemed to be indicating that it would use a test that considered whether a public official was using a private social media account to carry out the duties or exercise the authority of government,” she said. “The question is how broad and flexible that test will be.” 

The Right to Troll: Supreme Court to Hear School Board Social Media Case /article/the-right-to-troll-supreme-court-to-hear-school-board-social-media-case/ Mon, 30 Oct 2023 13:30:00 +0000 /?post_type=article&p=716938

Social media, the Supreme Court said, is “the modern public square.” For parents, it’s often the easiest way to engage with officials who run their children’s schools. 

On Tuesday, the court will consider whether those officials — in one case, board members for the San Diego-area Poway school district — can block constituents from responding to posts on platforms like Facebook and X.

“Government accountability … goes down the toilet if officials can effectively ‘mute’ their critics,” said Cory Briggs, an attorney who represents Poway parents Christopher and Kimberly Garnier. “Nobody is required to read the comments on social, but preventing them from being expressed in the first place ensures that nobody ever hears dissenting voices.”

Christopher and Kimberly Garnier (Courtesy of Cory Briggs)

Michelle O’Connor-Ratcliff, a current board member, and T.J. Zane, who served from 2014 to 2022, argue that they were acting as private citizens and, therefore, had a right to cut off the Garniers’ ability to reply. They complained that the couple essentially trolled them, repeatedly posting the same comments — in one instance, more than 200 times in a 10-minute period — and cluttered up their feeds.

But the Garniers say both O’Connor-Ratcliff and Zane identified themselves as government officials and, by all appearances, used social media as an extension of their board positions. Blocking them — no matter how annoying or off topic their posts might have been — was a violation of free speech and their First Amendment right to petition their government, according to their lawsuit. The U.S. Court of Appeals for the 9th Circuit agreed.

In an age when the public is far more likely to air concerns about government online than attend an official meeting, the case has major implications not just for how parents engage with school board members, but for how citizens in general interact with their elected leaders. It’s one of two cases before the court on Tuesday that pose the same question — whether an official’s use of a private social media account amounts to “state action.”

The other case, Lindke v. Freed, involves a city manager in Port Huron, Michigan, who blocked a resident after he complained about local efforts to prevent COVID transmission. In that case, the federal appeals court took the opposite view, saying the manager did not act “under the color of law.” The split between the lower courts prompted the Supreme Court to take up the cases.

Like the Garniers, some First Amendment experts want the court to uphold the 9th Circuit’s decision. Katie Fallow, senior counsel at the Knight First Amendment Institute at Columbia University, said if an official discusses government business on social media, the First Amendment still applies, even if using the account isn’t a formal part of the job.

“They use it to talk to the public about their policies and solicit input from constituents,” she said. “The question is, ‘Does the public consider this to be the source of official pronouncements?’ ”

Fallow has experience with the issue. The Knight Institute sued then-President Donald Trump in 2017 because he blocked critics on Twitter. The Institute won the case at the appellate level, but the Supreme Court dismissed it after Twitter’s former owners suspended Trump’s account in 2021 following the uprising at the U.S. Capitol. (Trump’s account has since been reinstated.)

Former President Donald Trump’s first post when he returned to Twitter, now X, was his mugshot. (Getty Images)

O’Connor-Ratcliff and Zane — like Trump — opened their accounts before they took public office. “Once elected, they keep using it,” Fallow said. “They want their brand and their followers.”

Neither O’Connor-Ratcliff, Zane, nor their attorney agreed to an interview prior to oral arguments, but representatives for other elected officials have been closely following both cases. 

The California School Boards Association wrote in a brief to the court that if the Garniers win, boards would have to “police” members’ social media accounts and could potentially face more litigation. During elections, the association added, incumbents would be limited in controlling unflattering posts while challengers would be free to restrict negative comments.

Board members need a “practical test” that clarifies “when social media activity transforms from personal to state action,” the association wrote. Because of the “rapidly evolving nature” of social media, the rules should apply across all current and future platforms, the brief said.

The federal government filed a brief in the case because “federal government officials also use social media accounts,” and whatever the court decides would apply to those officials and employees.

Years of conflict

The Garniers, who have three children in the district, have a troubled relationship with Poway officials that goes beyond social media posts. In 2013, Christopher, who once worked as a coach in Poway schools, filed a wrongful termination lawsuit against the district. Then in 2015, a judge granted the district a restraining order against him requiring that he stay away from his children’s school and its former principal. He was accused of making verbal threats, disrupting a meeting and pounding on car windows — allegations he denied.

Christopher, who is Black, argues that he was singled out because of his race and that the district treats minority students unfairly. It’s an issue that surfaced in comments his wife posted on the board members’ Facebook pages. According to court documents, Kimberly posted: “I have children of color in the district, and I don’t want them going to school and seeing a noose.” 

Christopher’s replies focused on both racial and financial matters. Following several of O’Connor-Ratcliff’s posts, he wrote that the board members, among other officials, “refuse to meet with our interracial family.” In another lengthy Facebook reply, posted multiple times, Christopher argued that Black students in the predominantly white district were disproportionately suspended and that he didn’t receive all the discipline data he requested through a public records request.

He was an outspoken critic of former Superintendent John Collins, who pleaded guilty to not reporting more than $300,000 in consultant income, a misdemeanor. Collins was sentenced to five years’ probation and had to repay the district $185,000. 

“Trustees lack the intestinal fortitude to fire this man,” Christopher replied in response to several posts from 2015. Briggs, the Garniers’ attorney, said his clients thought financial oversight had not improved since the board fired Collins in 2016.

“How many times should constituents be allowed to express admittedly legit criticism of their elected representative’s performance?” Briggs asked. “The answer can only be: as many as it takes to get [them] to do better or to get [them] voted out of office.”

Michelle O’Connor-Ratcliff is a current Poway Unified School District board member. T.J. Zane left the board last year. (Poway Unified School District, Halcyon Real Estate Services)

‘Strange bedfellows’

The case predates the pandemic. But the COVID era — with its virtual government meetings and restrictions on in-person gatherings — has only intensified the level of vitriol on social media.

Data shows that Americans who rely on social media for news tend to be younger and more likely to have school-age children. Forty percent were in the 30-49 age range, according to . Online threats of violence against public officials, meanwhile, have increased, especially toward judges and prosecutors. But at the height of debates over mask mandates and vaccines, superintendents and school board members were also targets of online intimidation and bullying.

Data in a 2021 National League of Cities report showed social media is the top source of harassment and threats of violence against local officials. (National League of Cities)

Jonathan Zachreson of Roseville City, California, has been on both sides of the issue. During the pandemic, he advocated for reopening schools and against a vaccine mandate for students. State Sen. Richard Pan, who wanted to mandate the vaccine for students, even blocked him on Twitter (now X).

Now Zachreson is on his town’s school board. After he was elected, he said the district advised members on the legal issues surrounding social media. To him, there’s no gray area.

“Either don’t talk about school business or don’t block people — it’s like one or the other,” he said. 

But he added that as with public meetings, there should be limits on “disorderly” behavior, like spamming. The question, he said, is whether the Supreme Court will draw that line.

Andrew McNulty, a Denver attorney, said he can’t predict how the court — with a 6-3 conservative majority — will rule on the cases. He’s particularly interested because he represents a Denver Public Schools parent who filed a lawsuit last month against a board member who blocked her on Facebook.

“There’s so much conservative backlash about censoring speech,” McNulty said. The court has also agreed to hear cases from Texas and Florida on whether tech companies can be sued or penalized if they block or limit content. And it will consider a case in which Missouri and Louisiana accused the Centers for Disease Control and Prevention of conspiring with social media companies to suppress opposition to COVID vaccines, mask mandates and school closures. 

Until now, the Knight Institute’s lawsuit against Trump was the most high-profile case over the issue. But Democrats have also been sued for blocking critics. In 2019, progressive New York Congresswoman Alexandria Ocasio-Cortez settled with a former Republican state lawmaker and talk show host she blocked on Twitter. 

“The First Amendment makes strange bedfellows,” McNulty said. “It crosses the ideological spectrum.”

Teen Mental Health Crisis Pushes More School Districts to Sue Social Media Giants /article/teen-mental-health-crisis-pushes-more-school-districts-to-sue-social-media-giants/ Fri, 31 Mar 2023 12:30:00 +0000 /?post_type=article&p=706803

The teen mental health crisis has so taxed and alarmed school districts across the country that many are entering legal battles against the social media giants they say have helped cause it, including TikTok, Snap, Meta, YouTube and Google.

At least eleven school districts, one county, and one California county system that oversees 23 smaller districts have filed suits this year, representing roughly 469,000 students. 

Two others in Arizona are considering their own complaints, one superintendent told The 74. Eleven districts in voted to pursue similar litigation, as did . Many others across the country are on the verge of doing the same, according to a lawyer representing a New Jersey district.




“Schools, states, and Americans across the country are rightly pushing back against Big Tech putting profits over kids’ safety online,” Sen. Richard Blumenthal, co-sponsor of the bipartisan Kids Online Safety Act, told The 74. “These efforts, proliferated by harrowing stories from families amid a worsening youth mental health crisis, underscore the urgency for Congress to act.” 

Algorithms and platform design have “exploited the vulnerable brains of youth, hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of Defendants’ social media platforms,” Seattle Public Schools claimed in the first suit filed this January.

Districts in Washington, Oregon, Arizona and New Jersey, among others, say tech companies intentionally hooked kids on their platforms, exacerbating depression, anxiety, tech addiction and self-harm, straining learning and district finances. 

But the legal fight, whether tried or settled, will not be easy, outside counsel and at least one district leader said. 

“We don’t think that this is a slam dunk case. We think it’s going to be an uphill battle. But our board and I believe that this is in the best interest of our students to do this,” said Andi Fourlis, superintendent of Arizona’s largest district, Mesa Public Schools. “It’s about making the case that we need to do better for our kids.” 

Just how badly Mesa’s teens are hurting is laid out in detail in court filings: More than a third are chronically absent; 3,500 more students were involved in disciplinary incidents in 2021-22 than in 2019-20; and the district has seen a “surge” in suicidal ideation and anxiety. 

Buried in the 111-page lawsuit, a high school senior’s video essay illustrates the painful impacts of social media addiction: risky or self-destructive behavior, disconnection from friends.

Simultaneously, lawmakers are proposing bills to make platforms safer. Senate hearings are underway, featuring parents whose children died by suicide. TikTok’s CEO testified this month to address concerns about exposure to harmful content. President Joe Biden flagged the issue in his last State of the Union Address.

Both legislative and legal efforts pursue similar goals: changing the algorithms and product design believed to be hurting kids. Through lawsuits, districts also seek financial compensation for the increased mental health services and training they’ve had to establish. 

“The harms caused by social media companies have impacted the districts’ ability to carry out their core mission of providing education. The expenditures are not sustainable and divert resources from classroom instruction and other programs,” said Michael Innes, partner with Carella Byrne, Cecchi, Olstein, Brody & Agnello, a firm representing New Jersey schools.

Previous complaints against opioid and e-cigarette companies, which levied public nuisance and negligence claims as districts’ social media filings do, resulted in multimillion dollar settlements. 

But some legal experts say there’s a key distinction in this case: Big Tech companies aren’t the ones producing content on these platforms, individuals are. Companies also have some hefty legal protections. 

“School districts are not in the business of suing people … the threshold for initiating litigation is very high,” said Dean Kawamoto, a lawyer for Keller Rohrback, the Seattle-based firm representing four districts, and thousands of others in Juul litigation. 

“I do think it says something that you’ve got a group of schools that have filed now, and I think more are going to join them,” Kawamoto added. 

Some outside counsel are skeptical. 

“I think there are questions about whether the litigation system is even a coherent way to go about this,” First Amendment scholar and Harvard Law professor Rebecca Tushnet told The 74. “It’s very hard to use individual litigation to get systemic change, excepting in particular circumstances.” 

The exceptions, she added, have clear visions and specific outcomes, like requiring a doctor on-call for safer prison conditions. Those kinds of metrics are difficult to name when it comes to algorithms and mental health. 

What precedent (or lack thereof) tells us

Social media companies’ lawyers are likely to assert free speech protections early and often, including in initial motions to dismiss.

“The conventional wisdom is that if motions to dismiss are denied in cases like this, [companies] are much more likely to settle … reality is actually a little more mixed,” Tushnet said, adding that if the claims go after business models, companies fight harder. 

An added challenge is proving causal harm — that social media companies have caused student depression, anxiety, eating disorders or self-harm. The link is one that neuroscientists and researchers are still working to establish, though experts say there’s an urgent need. 

“This is a watershed moment where schools can really roll up their sleeves and do something because — not that they haven’t been in the past — but because it’s so obvious. It’s right in front of them. It’s impacting students’ education,” said Jerry Barone, chief clinical officer at Effective School Solutions, which brings mental health care to schools. 

About 13.5% of teen girls say Instagram makes thoughts of suicide worse; 17% of teen girls say it makes eating disorders worse, according to Meta’s leaked internal research.

Even if districts are able to provide proof, they may not ever see a judgment made. 

Public nuisance claims in tobacco and opioid mass torts were more successful in “inducing settlements, rather than in courthouse outcomes,” according to Robert Rabin, tort expert and professor at Stanford University. 

While he’s not “dismissive” of districts’ efforts, “the precedents don’t supply clear-cut support for the claims here.”

The interim

As lawyers work out the details, students are left in the balance. Some are skeptical the suits will amount to anything at all, at least in their adolescence. 

“Why do you guys waste so much time on these useless things that you know get nowhere, when you can do it with things that you know will get somewhere?” said Angela Ituarte, a sophomore at a Seattle high school. 

Many young people interviewed by The 74 described their social media use as a double-edged sword: affirming, a place where they learned about mental health or found community, particularly for queer students of color; and simultaneously dangerous, a place where they connected with adults when they were 14 and saw dangerous diets promoted.

Social media, Ituarte said, makes it seem like self-harm and disordered eating, “are the solution to everything. And it’s hard to get that out of those algorithms — even if you block the accounts or say you’re not interested it still keeps popping up. Usually it’s when things are bad, too.”

In a late February letter to senators, Meta touted a promising initiative to nudge teens who dwell on one topic for extended periods. Only 1 in 5 teens actually moved to a new topic during a weeklong trial. 

To curb cyberbullying, users now get warnings for potentially offensive comments. People only edit or delete their message 50% of the time, according to the company’s responses to Senate inquiries. 

Meta, YouTube and Google did not respond to requests for comment. TikTok told The 74 they cannot comment on ongoing litigation. The company has just started requiring users who say they are under 18 to enter a password after scrolling for an hour.

In a statement to The 74, Snap said they “are constantly evaluating how we continue to make our platform safer.” Snap has partnered with mental health organizations to launch an in-app support system for users who may be experiencing a crisis, and acknowledged that the work may never be done. 

The process has only just begun. If the suits move to trial, some districts will be chosen as bellwethers to represent the many plaintiffs, tasked with regularly contributing to a lengthy trial. 

Still, there’s no doubt in Fourlis’s mind. 

“Sometimes you have to be the first to step forward to take a bold leap so that others can follow,” she said. “Being the superintendent of the largest school district in Arizona, what we do often sets precedents, and I have to be very strategic about that responsibility.”

Disclosure: Campbell Brown, Meta’s vice president of media partnerships, is a co-founder and member of the board of directors of The 74. She played no role in the editing of this article.

As Advocates and Parents Rally, Youth Online Privacy Bills on Life Support /article/as-advocates-and-parents-rally-youth-online-privacy-bills-on-life-support/ Wed, 14 Sep 2022 21:07:24 +0000

Sen. Ed Markey was getting quizzed on the viability of new online privacy laws for children when he took a brief but awkward pause. 

The Democrat from Massachusetts, who has long championed consumer privacy and become a key adversary of tech companies like Meta for monetizing user data, joined a Zoom call Tuesday evening to rally support for two bills he said would protect kids from being manipulated by social media algorithms. But he also brought some bad news: The legislation had “stalled” in Washington despite bipartisan support. 

Advocates this week are making a push to get the bipartisan bills — the Kids Online Safety Act and the Children’s Online Privacy Protection Act 2.0 — across the finish line. In a letter on Monday, 145 groups including Fairplay and Common Sense Media urged lawmakers to pass the legislation in the interests of protecting youth mental health, now considered at an all-time low in this country. 


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


But Markey seemed to lay out a path requiring Herculean effort. 

“Only the paranoid survive,” Markey said, adding that the legislation would pass if its supporters — and youth activists in particular — called their lawmakers and demanded they “pull this out of the pile of issues” and give it priority. “We’re going to try to get it over the finish line, but we need you to just have your energy level go higher and higher for these final couple of months and we will get it done.”

The legislative push comes a year after a Facebook whistleblower disclosed research showing that the social media app Instagram had a harmful effect on youth mental well-being, especially for teenage girls. The whistleblower, Frances Haugen, called on lawmakers to regulate social media companies — Meta owns Facebook and Instagram — that she accused of pursuing “astronomical profits” while knowingly putting its users at risk. The research she leaked revealed the company knew Instagram made “body image issues worse for one in three teen girls” who blamed the social media platform for driving “increases in the rate of anxiety and depression” and, for some, suicidal thoughts. 

The Kids Online Safety Act would make tech companies liable if they expose young people to content deemed harmful, including materials that promote self-harm, eating disorders and substance abuse. It would also require parental controls that could be used to block adult content and to study systems to verify users’ age “at the device or operating system level.”

The Children’s Online Privacy Protection Act 2.0, which expands a law that Markey championed in 1998 to cover older teens, would ban targeted advertisements directed at children and require companies to offer an “eraser button” that allows children and teens to remove their personal data. 

Former Facebook employee Frances Haugen (Getty Images)

But deep-pocketed tech companies, Sen. Richard Blumenthal said Tuesday, are standing in the way. 

“Our obstacles here are the big tech lobbyists,” he said. “They have armies of lobbyists. They pay them, they pay them very well. They hire them to block this legislation.”

While the legislation is designed to protect kids, some digital privacy experts say the rules could come with significant unintended consequences — and could lead to an age-verification system where all web users are made to submit documentation like a driver’s license, requiring them to hand over personal information to tech companies. 

On the Zoom call to bolster support for the bills was Vinaya Sivakumar, a high school senior from Ohio, who created her first social media profile when she was 12. What started out as harmless, she said, quickly took a toll on her health. 

“It just snowballed into something that constantly perpetuated actions and thoughts like self-harm and eating disorders and it was really never let out of my sight,” said Sivakumar, referring to a stream of content she found harmful being fed to her by algorithms. “It almost encouraged me to make decisions that I didn’t necessarily feel were mine and my mental health was in the worst state ever.”

Kristin Bride, a mother and digital safety advocate from Oregon, implored lawmakers to pass the legislation for kids like her 16-year-old son Carson, who died by suicide in 2020 after he was “viciously bullied” by other kids on Snapchat who used third-party apps to conceal their identities. Last year, Bride sued Snap, the company that owns the social media app Snapchat, and accused it of lacking safeguards to protect children from harassment. In response, Snap suspended two of the apps, Yolo and LMK. But a similar app, NGL, has since cropped up. 

“Until social media companies are held accountable for their harmful products, they will always put profit over people,” Bride said, “and kids like Carson and so many others are just collateral damage.” 

Despite the heightened focus in Washington around digital rights and tech companies’ use of user data for targeted advertising, broader digital privacy legislation has also struggled this year. A separate measure, which would create a national digital privacy standard and limit the personal data that tech companies can collect about users, has hit roadblocks, including objections from House Speaker Nancy Pelosi.

Earlier this month, Ireland’s Data Protection Commission fined Meta for violating European Union data privacy laws. The commission has been investigating the company for an Instagram setting that automatically sets the profiles of teenagers as public by default. 

Meanwhile, Meta has begun to roll out new safeguards for teens, including a setting that automatically routes new users younger than 16 to a version of the app with limits on content deemed inappropriate.

The children’s safety legislation, which would strengthen rules that haven’t been updated for decades, has received support from a broad range of groups focused on youth well-being, including the American Psychological Association and The Jed Foundation. But it has drawn criticism from digital rights advocates including the Electronic Frontier Foundation, which argued in a report that while lawmakers deserve credit “for attempting to improve online data privacy for young people,” the plan would ultimately “require surveillance and censorship” of children and teens “and would greatly endanger the rights, and safety, of young people online.” 

“Data collection is a scourge for every internet user, regardless of age,” the report notes, but the legislation could ultimately force tech companies to further track their users. Surveillance of young people, the group adds, is harmful “even in the healthiest household, and is not a solution to helping young people navigate the internet.”

Disclosure: Campbell Brown oversees global media partnerships at Meta. Brown co-founded The 74 and sits on its board of directors.

Does Your School Have a ‘Slander’ Account? /article/does-your-school-have-a-slander-account/ Wed, 25 May 2022 20:01:00 +0000

Even at Stuyvesant High School, one of the most academically rigorous and sought-after public schools in New York City, teenage gossip is, well, teenage gossip: who’s crushing on who, who just broke up, who’s the cutest in the grade.

But rather than comments whispered in hallways, students frequently share those juicy nuggets through anonymous online “confessions” accounts on Facebook and Instagram that much of the student body follows religiously.


Get stories like this delivered straight to your inbox. Sign up for The 74 Newsletter


“People will be talking about it, like, ‘Did you just see the new confession?’ ” said Samantha Farrow, a junior at Stuy.

Many confessions are harmless — complimenting a classmate’s smile or admitting apprehension about prom — but others target and bully students. In Farrow’s freshman year, a post called her and two peers overweight and unattractive. Dozens of students came to their defense, she said, reassuring them the insult was completely untrue. But still, the post affected her.

“I was mad and I was upset,” Farrow remembered. “It was very degrading to my self-esteem as a 14-year old.”

Accounts like Stuy Confessions are hardly rare, students across the country report. Though the pages predate the pandemic, lockdown may have increased their popularity and influence as teens lost the ability to connect in person for months on end.

When schools en masse shifted online, much of young people’s socializing also migrated into virtual spaces like Discord servers, Google Hangouts and TikTok. Now two years later, even as pandemic restrictions have fallen across the country, many online communities remain, students say, and impact K-12 classrooms in ways that adults fail to understand.

“It’s really going over [educators’] heads,” Farrow told The 74. “So much stuff happens on Facebook and Instagram, the confessions accounts, and they have no idea.”

Courtesy of Samantha Farrow

“What people post on social media kinda seeps into the classroom,” she added.

In fall 2021, when Diego Camacho’s Los Angeles high school returned to in-person learning, students began taking pictures of their peers — sometimes eating, sometimes of their shoes under the bathroom stall — and posting them online anonymously without consent, he told The 74. 

He and other students “were constantly looking over our shoulders, looking around when we ate and some [of us] refused to use the bathroom out of fear [we] would end up on the pages,” said the high school senior. 

It took school administration two months to shut down the account, he said. While the page was active, it “created a lot of distrust between students,” said Camacho.

Stuyvesant Confessions on Facebook (Screengrab)

At Mia Miron’s middle school in nearby Pomona, California, Instagram pages of a similar style continue to pop up despite old accounts getting banned on numerous occasions, she said. With page titles based on the phrase “Lorbeer Lookalikes,” a play on their school’s name, users send photos they took of classmates to the accounts via direct message, and the page administrator then posts the images without indicating who submitted them.

“I just followed it to make sure nobody that I know would get hurt by not knowing their photo was on there,” explained Miron. 

Twice, the accounts have shared pictures of her sitting at her desk. The eighth grader doesn’t know who runs the account, she said, and did not give consent for those images to be posted. 

“I wouldn’t like my photo to be on there without my permission,” she told The 74.

While Miron says she hasn’t taken the posts personally, a friend of hers was cyberbullied on the page, she said, which took a toll on the middle schooler’s mental health. 

The 74 spoke with eight students in 6th through 12th grade and one college student about their experience of social media’s impact on education post-COVID. Most agreed that lockdown initially forced them to lean more heavily on online platforms to stay connected with peers and that some of those habits have since stuck around.

But the proliferation of online content and connection has also delivered some positive effects, students emphasized.

Kota Babcock, a senior at Colorado State University, said his roommate joined a pandemic Discord server they still use for weekly horror movie screenings. High schooler Ameera Eshtewi, of Portland, Oregon, hones her programming skills as a member of an online community. And Joshua Oh, a Gambrills, Maryland middle schooler, said Instagram, Snapchat and Twitter helped him and his peers quickly spread the word to wear pink in support of victims of an alleged sexual assault at a nearby high school.

Circulated within Oh’s student body, a satirical TikTok account pokes fun without crossing a line, the teen said. The “slander” page posts videos about students and teachers that he finds “funny when they are true.”

One video, of a cowboy coughing heavily and falling down on a train track, is captioned, “What Lois thinks will happen if she doesn’t have gum for 00000.1 seconds.” Another video with the caption “Brandon trying to convince his ex to take him back” features a man in the rain set to a Lil Nas X song. 

In a key difference from the pages at Miron and Camacho’s schools, none of the videos include images of actual students. 

And in Pomona, as a counter to some of the online toxicity within Miron’s middle school, a student also created a school-based TikTok account featuring an “appreciation post for the girls that got put down on that other Lorbeer account.” The posts set pictures of students’ smiling faces to B.o.B’s Nothing on You.

Instagram and other social media can have degrading effects on youth mental health, including eating disorders and suicidal ideation, particularly for teen girls bombarded with unhealthy body image standards. Facebook (now Meta), Instagram’s parent company, has tracked the harms for years, internal documents show, but implemented few measures to curb the addictiveness of its app, as teen users have driven much of its popularity.

Even when students use accounts to uplift each other, ZaNia Stinson, a high school student in Charlotte, North Carolina, said that she and her peers’ dependence on social media often makes them less present IRL — in real life. 

Teachers often collect phones during class, she said, and when the devices get returned afterward, “we don’t pay attention in the halls so we bump into people, like our heads are glued to [our] phones.”

During free periods at Stuyvesant, said Farrow, students will often sit next to each other in the hallway without saying a word, just scrolling. The tendency, she believes, to ignore human contact in favor of digital has worsened since COVID. From time to time, she herself pulls up Instagram during class without the teacher knowing, she admits.

Yet one online outlet has provided consistent solace for her since early in the pandemic. In June 2020, the high schooler created a Twitter stan account, or fan account, for K-pop megastars BTS, who she jokingly described as her “biggest passion in life.” She has fun chatting with other fans of the group and appreciates the low stakes because she doesn’t know any of the other users in real life, she said.

Social media is “a good outlet if you know how to use it the right way,” said Farrow. “But I don’t think a lot of people do.”

This story was brought to you via The 74’s Student Council initiative, an effort to boost youth voices in our reporting. America’s Promise Alliance helped in the recruiting of our diverse 11-member council and the idea was conceived as part of Asher Lehrer-Small’s Poynter-Koch Media and Journalism Fellowship.

Meet the Gatekeepers of Students’ Private Lives /article/meet-the-gatekeepers-of-students-private-lives/ Mon, 02 May 2022 11:15:00 +0000

If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.

Megan Waskiewicz used to sit at the top of the bleachers, rest her back against the wall and hide her face behind the glow of a laptop monitor. While watching one of her five children play basketball on the court below, she knew she had to be careful. 

The mother from Pittsburgh didn’t want other parents in the crowd to know she was also looking at child porn.

Waskiewicz worked as a content moderator for Gaggle, a surveillance company that monitors the online behaviors of some 5 million students across the U.S. on their school-issued Google and Microsoft accounts. Through an algorithm designed to flag references to sex, drugs, and violence and a team of content moderators like Waskiewicz, the company sifts through billions of students’ emails, chat messages and homework assignments each year. Their work is supposed to ferret out evidence of potential self-harm, threats or bullying, incidents that would prompt Gaggle to notify school leaders.

As a result, kids’ deepest secrets — like nude selfies and suicide notes — regularly flashed onto Waskiewicz’s screen. Though she felt “a little bit like a voyeur,” she believed Gaggle helped protect kids. But mostly, the low pay, the fight for decent hours, inconsistent instructions and stiff performance quotas left her feeling burned out. Gaggle’s moderators face pressure to review 300 incidents per hour and Waskiewicz knew she could get fired on a moment’s notice if she failed to distinguish mundane chatter from potential safety threats in a matter of seconds. She lasted about a year.

“In all honesty I was sort of half-assing it,” Waskiewicz admitted in an interview with The 74. “It wasn’t enough money and you’re really stuck there staring at the computer reading and just click, click, click, click.”

Content moderators like Waskiewicz, hundreds of whom are paid just $10 an hour on month-to-month contracts, are on the front lines of a company that claims it saved the lives of 1,400 students last school year and argues that the growing mental health crisis makes its presence in students’ private affairs essential. Gaggle founder and CEO Jeff Patterson has warned about “a tsunami of youth suicide headed our way” and said that schools have “a moral obligation to protect the kids on their digital playground.” 

Eight former content moderators at Gaggle shared their experiences for this story. While several believed their efforts in some cases did shield kids from serious harm, they also surfaced significant questions about the company’s efficacy, its employment practices and its effect on students’ civil rights.

Among the moderators who worked on a contractual basis, none had prior experience in school safety, security or mental health. Instead, their employment histories included retail work and customer service, but they were drawn to Gaggle while searching for remote jobs that promised flexible hours. 

They described an impersonal and cursory hiring process that appeared automated. Former moderators reported submitting applications online and never having interviews with Gaggle managers — either in-person, on the phone or over Zoom — before landing jobs.

Once hired, moderators reported insufficient safeguards to protect students’ sensitive data, a work culture that prioritized speed over quality, scheduling issues that sent them scrambling to get hours and frequent exposure to explicit content that left some traumatized. Contractors lacked benefits including mental health care and one former moderator said he quit after repeated exposure to explicit material that so disturbed him he couldn’t sleep and without “any money to show for what I was putting up with.”

Gaggle’s content moderation team encompasses as many as 600 contractors at any given time; just two dozen work as employees who have access to benefits and on-the-job training that lasts several weeks. Gaggle executives have sought to downplay contractors’ role with the company, arguing they use “common sense” to distinguish false flags generated by the algorithm from potential threats and do “not require substantial training.” 

While the experiences reported by Gaggle’s moderator team echo those of content moderators at platforms like Meta-owned Facebook, Patterson said his company relies on “U.S.-based, U.S.-cultured reviewers as opposed to outsourcing that work to India or Mexico or the Philippines.” He rebuffed former moderators who said they lacked sufficient time to consider the severity of a particular item.

“Some people are not fast decision-makers. They need to take more time to process things and maybe they’re not right for that job,” he told The 74. “For some people, it’s no problem at all. For others, their brains don’t process that quickly.”

Executives also sought to minimize the contractors’ access to students’ personal information; a spokeswoman said they only see “small snippets of text” and lacked access to what’s known as students’ “personally identifiable information.” Yet former contractors described reading lengthy chat logs, seeing nude photographs and, in some cases, coming upon students’ names. Several former moderators said they struggled to determine whether something should be escalated as harmful due to “gray areas,” such as whether a Victoria’s Secret lingerie ad would be considered acceptable or not. 

“Those people are really just the very, very first pass,” Gaggle spokeswoman Paget Hetherington said. “It doesn’t really need training, it’s just like if there’s any possible doubt with that particular word or phrase it gets passed on.” 

Molly McElligott, a former content moderator and customer service representative, said management was laser focused on performance metrics, appearing more interested in business growth and profit than protecting kids. 

“I went into the experience extremely excited to help children in need,” McElligott wrote in an email. Unlike the contractors, McElligott was an employee at Gaggle, where she worked for five months in 2021 before taking a position at the Manhattan District Attorney’s Office in New York. “I realized that was not the primary focus of the company.”

Gaggle is part of a burgeoning campus security industry that’s seen significant business growth in the wake of mass school shootings as leaders scramble to prevent future attacks. Patterson, who founded the company in 1999 by offering student email accounts that could be monitored for inappropriate content, said its focus now is mitigating the youth mental health crisis.

Patterson said the team talks about “lives saved” and child safety incidents at every meeting, and they are open about sharing the company’s financial outlook so that employees “can have confidence in the security of their jobs.”

Content moderators work at a Facebook office in Austin, Texas. Unlike the social media giant, Gaggle’s content moderators work remotely. (Ilana Panich-Linsman / Getty Images)

‘We are just expendable’

Under the pressure of new federal scrutiny, Gaggle, along with three other companies that monitor students online, has said it relies on a “highly trained content review team” to analyze student materials and flag safety threats. Yet former contractors, who make up the bulk of Gaggle’s content review team, described their training as “a joke,” consisting of a slideshow and an online quiz, that left them ill-equipped to complete a job with such serious consequences for students and schools.

As an employee on the company’s safety team, McElligott said she underwent two weeks of training, but the disorganized instruction meant she and other moderators were “more confused than when we started.”

Former content moderators have also flocked to employment websites like Indeed.com to warn job seekers about their experiences with the company, often sharing reviews that resembled the former moderators’ feedback to The 74.

“If you want to be not cared about, not valued and be completely stressed/traumatized on a daily basis this is totally the job for you,” one reviewer wrote on Indeed. “Warning, you will see awful awful things. No they don’t provide therapy or any kind of support either.

“That isn’t even the worst part,” the reviewer continued. “The worst part is that the company does not care that you hold them on your backs. Without safety reps they wouldn’t be able to function, but we are just expendable.” 

As the first layer of Gaggle’s human review team, contractors analyze materials flagged by the algorithm and decide whether to escalate students’ communications for additional consideration. Designated employees on Gaggle’s Safety Team are in charge of calling or emailing school officials to notify them of troubling material identified in students’ files, Patterson said.

Gaggle’s staunchest critics have questioned the tool’s efficacy and describe it as a student privacy nightmare. In March, Democratic Sens. Elizabeth Warren and Ed Markey pressed Gaggle and similar companies to protect students’ civil rights and privacy. In a report, the senators said the tools could surveil students inappropriately, compound racial disparities in school discipline and waste tax dollars.

The information shared by the former Gaggle moderators with The 74 “struck me as the worst-case scenario,” said attorney Amelia Vance, the co-founder and president of Public Interest Privacy Consulting. Content moderators’ limited training and vetting, as well as their lack of backgrounds in youth mental health, she said, “is not acceptable.”

In its response to lawmakers, Gaggle described a two-tiered review procedure but didn’t disclose that low-wage contractors were the first line of defense. CEO Patterson told The 74 they “didn’t have nearly enough time” to respond to lawmakers’ questions about their business practices and didn’t want to divulge proprietary information. Gaggle uses a third party to conduct criminal background checks on contractors, Patterson said, but he acknowledged they aren’t interviewed before getting placed on the job.

“There’s a lot of contractors. We can’t do a physical interview of everyone and I don’t know if that’s appropriate,” he said. “It might actually introduce another set of biases in terms of who we hire or who we don’t hire.”

‘Other eyes were seeing it’

In a previous investigation, The 74 analyzed a cache of public records to expose how Gaggle’s algorithm and content moderators subject students to relentless digital surveillance long after classes end for the day, extending schools’ authority far beyond their traditional powers to regulate speech and behavior, including at home. Gaggle’s algorithm relies largely on keyword matching and gives content moderators a broad snapshot of students’ online activities including diary entries, classroom assignments and casual conversations between students and their friends. 

After the pandemic shuttered schools and shuffled students into remote learning, Gaggle oversaw a surge both in students’ online materials and in the number of school districts interested in its services. The company grew as educators scrambled to keep a watchful eye on students whose chatter with peers moved from school hallways to instant messaging platforms like Google Hangouts. One year into the pandemic, Gaggle reported a spike in references to suicide and self-harm, which accounted for more than 40% of all flagged incidents. 

Waskiewicz, who began working for Gaggle in January 2020, said that remote learning spurred an immediate shift in students’ online behaviors. Under lockdown, students without computers at home began using school devices for personal conversations. Sifting through the everyday exchanges between students and their friends, Waskiewicz said, became a time suck and left her questioning her own principles. 

“I felt kind of bad because the kids didn’t have the ability to have stuff of their own and I wondered if they realized that it was public,” she said. “I just wonder if they realized that other eyes were seeing it other than them and their little friends.”

Student activity monitoring software like Gaggle has become ubiquitous in U.S. schools, and 81% of teachers work in schools that use tools to track students’ computer activity, according to a recent survey by the nonprofit Center for Democracy and Technology. A majority of teachers said the benefits of using such tools, which can block obscene material and monitor students’ screens in real time, outweigh potential risks.

Likewise, students generally recognize that their online activities on school-issued devices are being observed, the survey found, and alter their behaviors as a result. More than half of student respondents said they don’t share their true thoughts or ideas online because of school surveillance, and 80% said they were more careful about what they search online. 

A majority of parents reported that the benefits of keeping tabs on their children’s activity exceeded the risks. Yet they may not have a full grasp on how programs like Gaggle work, including the heavy reliance on untrained contractors and weak privacy controls revealed by The 74’s reporting, said Elizabeth Laird, the group’s director of equity in civic technology. 

“I don’t know that the way this information is being handled actually would meet parents’ expectations,” Laird said. 

Another former contractor, who reached out to The 74 anonymously to share his experiences with the company, became a Gaggle moderator at the height of the pandemic. As COVID-19 cases grew, he said, he felt unsafe continuing his previous job as a caregiver for people with disabilities, so he applied to Gaggle because it offered remote work. 

About a week after he submitted an application, Gaggle gave him a key to kids’ private lives — including, most alarming to him, their nude selfies. Exposure to such content was traumatizing, the former moderator said, and while the job took a toll on his mental well-being, it didn’t come with health insurance. 

“I went to a mental hospital in high school due to some hereditary mental health issues and seeing some of these kids going through similar things really broke my heart,” said the former contractor, who shared his experiences on the condition of anonymity, saying he feared possible retaliation by the company. “It broke my heart that they had to go through these revelations about themselves in a context where they can’t even go to school and get out of the house a little bit. They have to do everything from home — and they’re being constantly monitored.” 

In this screenshot, Gaggle explains its terms and conditions for contract content moderators. The screenshot, which was provided to The 74 by a former contractor who asked to remain anonymous, has been redacted.

Gaggle employees are offered benefits, including health insurance, and can attend group therapy sessions twice per month, Hetherington said. Patterson acknowledged the job can take a toll on staff moderators, but sought to downplay its effects on contractors and said they’re warned about exposure to disturbing content during the application process. He said using contractors allows Gaggle to offer the service at a price school districts can afford. 

“Quite honestly, we’re dealing with school districts with very limited budgets,” Patterson said. “There have to be some tradeoffs.” 

The anonymous contractor said he wasn’t as concerned about his own well-being as he was about the welfare of the students under the company’s watch. The company lacked adequate safeguards, he said, to protect students’ sensitive information from leaking outside the digital environment that Gaggle built for moderators to review such materials. Contract moderators work remotely with limited supervision or oversight, and he became especially concerned about how the company handled students’ nude images, which are reported to school districts. Nudity and sexual content accounted for about 17% of emergency phone calls and email alerts to school officials last school year. 

Contractors, he said, could easily save the images for themselves or share them on the dark web. 

Patterson acknowledged the possibility but said he wasn’t aware of any data breaches. 

“We do things in the interface to try to disable the ability to save those things,” Patterson said, but “you know, human beings who want to get around things can.”

‘Made me feel like the day was worth it’

Vara Heyman was looking for a career change. After working jobs in retail and customer service, she pivoted to content moderation, and a contract position with Gaggle was her first foot in the door. The impersonal hiring process left her baffled, especially given the high stakes for students. 

Waskiewicz had a similar experience. In fact, she said the only time she ever interacted with a Gaggle supervisor was when she was instructed to provide her bank account information for direct deposit. The interaction left her questioning whether the company that contracts with more than 1,500 school districts was legitimate or a scam. 

“It was a little weird when they were asking for the banking information, like ‘Wait a minute is this real or what?’” Waskiewicz said. “I Googled them and I think they’re pretty big.”

Heyman said that sense of disconnect continued after being hired, with communications between contractors and their supervisors limited to a Slack channel. 

Despite the challenges, several former moderators believe their efforts kept kids safe from harm. McElligott, the former Gaggle safety team employee, recalled an occasion when she found a student’s suicide note. 

“Knowing I was able to help with that made me feel like the day was worth it,” she said. “Hearing from the school employees that we were able to alert about self-harm or suicidal tendencies from a student they would never expect to be suffering was also very rewarding. It meant that extra attention should or could be given to the student in a time of need.” 

Susan Enfield, the superintendent of Highline Public Schools in suburban Seattle, said her district’s contract with Gaggle has saved lives. Earlier this year, for example, the company detected a student’s suicide note early in the morning, allowing school officials to spring into action. The district uses Gaggle to keep kids safe, she said, but acknowledged it can be a disciplinary tool if students violate the district’s code of conduct. 

“No tool is perfect, every organization has room to improve, I’m sure you could find plenty of my former employees here in Highline that would give you an earful about working here as well,” said Enfield, one of 23 current or former superintendents from across the country who Gaggle cited as references in its letter to Congress. 

“There’s always going to be pros and cons to any organization, any service,” Enfield told The 74, “but our experience has been overwhelmingly positive.”

True safety threats were infrequent, former moderators said, and most of the content was mundane, in part because the company’s artificial intelligence lacked sophistication. They said the algorithm routinely flagged students’ papers on the novels To Kill a Mockingbird and The Catcher in the Rye. They also reported being inundated with spam emailed to students, acting as human spam filters for a task that’s long been automated in other contexts. 

Conor Scott, who worked as a contract moderator while in college, said that “99% of the time” Gaggle’s algorithm flagged pedestrian materials, including pictures of sunsets and students’ essays about World War II. Valid safety concerns, including references to violence and self-harm, were rare, Scott said. But he still believed the service had value and felt he was doing “the right thing.”

McElligott said that managers’ personal opinions added another layer of complexity. Though moderators were “held to strict rules of right and wrong decisions,” she said they were ultimately “being judged against our managers’ opinions of what is concerning and what is not.” 

“I was told once that I was being overdramatic when it came to a potential inappropriate relationship between a child and adult,” she said. “There was also an item that made me think of potential trafficking or child sexual abuse, as there were clear sexual plans to meet up — and when I alerted it, I was told it was not as serious as I thought.” 

Patterson acknowledged that gray areas exist and that human discretion is a factor in deciding which materials are ultimately elevated to school leaders. But such materials, he said, are not the most urgent safety issues. He said the company’s algorithm errs on the side of caution and flags harmless content because district leaders are “so concerned about students.” 

The former moderator who spoke anonymously said he grew alarmed by the sheer volume of mundane student materials captured by Gaggle’s surveillance dragnet, and that pressure to work quickly left too little time to evaluate long chat logs between students having “heartfelt and sensitive” conversations. On the other hand, run-of-the-mill chatter offered him a little wiggle room. 

“When I would see stuff like that I was like ‘Oh, thank God, I can just get this out of the way and heighten how many items per hour I’m getting,’” he said. “It’s like ‘I hope I get more of those because then I can maybe spend a little more time actually paying attention to the ones that need it.’” 

Ultimately, he said he was unprepared for such extensive access to students’ private lives. Because Gaggle’s algorithm flags keywords like “gay” and “lesbian,” for example, it alerted him to students exploring their sexuality online. Hetherington, the Gaggle spokeswoman, said such keywords are included in its dictionary to “ensure that these vulnerable students are not being harassed or suffering additional hardships,” but critics have accused the company of subjecting LGBTQ students to disproportionate surveillance. 

“I thought it would just be stopping school shootings or reducing cyberbullying but no, I read the chat logs of kids coming out to their friends,” the former moderator said. “I felt tremendous power was being put in my hands” to distinguish students’ benign conversations from real danger, “and I was given that power immediately for $10 an hour.” 

Minneapolis student Teeth Logsdon-Wallace, who posed for this photo with his dog Gilly, used a classroom assignment to discuss a previous suicide attempt and explained how his mental health had since improved. He became upset after Gaggle flagged his assignment. (Photo courtesy Alexis Logsdon)

A privacy issue

For years, student privacy advocates and civil rights groups have warned about the potential harms of Gaggle and similar surveillance companies. Fourteen-year-old Teeth Logsdon-Wallace, a Minneapolis high school student, fell under Gaggle’s watchful eye during the pandemic. Last September, he used a class assignment to write about a previous suicide attempt and explained how music helped him cope after being hospitalized. Gaggle flagged the assignment to a school counselor, a move the teen called a privacy violation. 

He said it’s “just really freaky” that moderators can review students’ sensitive materials in public places like at basketball games, but ultimately felt bad for the contractors on Gaggle’s content review team. 

“Not only is it violating the privacy rights of students, which is bad for our mental health, it’s traumatizing these moderators, which is bad for their mental health,” he said. Relying on low-wage workers with high turnover, limited training and without backgrounds in mental health, he said, can have consequences for students. 

“Bad labor conditions don’t just affect the workers,” he said. “It affects the people they say they are helping.” 

Gaggle cannot prohibit contractors from reviewing students’ private communications in public settings, Heather Durkac, the senior vice president of operations, said in a statement. 

“However, the contractors know the nature of the content they will be reviewing,” Durkac said. “It is their responsibility and part of their presumed good and reasonable work ethic to not be conducting these content reviews in a public place.” 

Gaggle’s former contractors also weighed students’ privacy rights. Heyman said she “went back and forth” on those implications for several days before applying to the job. She ultimately decided that Gaggle was acceptable since its monitoring is limited to school-issued technology. 

“If you don’t want your stuff looked at, you can use Hotmail, you can use Gmail, you can use Yahoo, you can use whatever else is out there,” she said. “As long as they’re being told and their parents are being told that their stuff is going to be monitored, I feel like that is OK.” 

Logsdon-Wallace and his mother said they didn’t know Gaggle existed until his classroom assignment got flagged to a school counselor. 

Meanwhile, the anonymous contractor said that chat conversations between students that got picked up by Gaggle’s algorithm helped him understand the effects that surveillance can have on young people. 

“Sometimes a kid would use a curse word and another kid would be like, ‘Dude, shut up, you know they’re watching these things,’” he said. “These kids know that they’re being looked in on,” even if they don’t realize their observer is a contractor working from the couch in his living room. “And to be the one that is doing that — that is basically fulfilling what these kids are paranoid about — it just felt awful.” 

If you are in crisis, please call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), or contact the Crisis Text Line by texting TALK to 741741.

Disclosure: Campbell Brown is the head of news partnerships at Facebook. Brown co-founded The 74 and sits on its board of directors.
