
What a social media warning label can’t do

What the surgeon general wants to do for kids’ safety leaves the rest of us behind.

SolStock via Getty
Adam Clark Estes
Adam Clark Estes is a senior technology correspondent at Vox and author of the User Friendly newsletter. He’s spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice.

The case that social media is hurting America’s children gained more momentum this week after Surgeon General Vivek Murthy called for warning labels on social media platforms. If the news gave you déjà vu, that’s understandable. Just a year ago, the same surgeon general issued a lengthy advisory about social media and youth mental health. But as much as the surgeon general’s new call to action, a guest essay published in the New York Times, catches our attention, a warning label alone won’t rescue young people from the damaging effects of social media.

Several states have enacted legislation to tackle social media’s harms to young people, and New York Gov. Kathy Hochul this month announced a ban on “addictive” social media algorithms. Perhaps the most powerful proposal is the Kids Online Safety Act (KOSA), which has been passed around Capitol Hill for a couple of years but might finally go up for a vote soon. One big thing this comprehensive children’s online safety legislation offers is a mandate to give parents more tools to manage their kids’ online privacy and their experience on certain platforms. And even those tools might not be enough.

The newfound urgency to protect kids against social media threatens to distract us from the larger question of how we can protect everyone online.

After all, the same algorithms making teenagers miserable by serving up an endless stream of content that tends to harm their self-image are bad for adults, too. And the same unchecked data collection that powers those algorithms will continue to be harmful to adult internet users, even if we find a way to make kids safer. The very fact that just a handful of massive tech companies, like Meta, have grown so powerful suggests that the real solution to the social media problem might have more to do with breaking up monopolies than applying warning labels.

“It is odd that we use children as the wedge to address the problem,” said Aaron Mackey, the Electronic Frontier Foundation’s free speech and transparency litigation director. “Any sort of effort that involves just children — say, children’s privacy or harms to children — is under-inclusive, because those harms, they don’t really have an age gap.”

And yet, we do sort of use children as the wedge to address all kinds of problems online. The tradition dates back to the early days of the internet: The Children’s Online Privacy Protection Act (COPPA) was enacted in 1998 to regulate how websites collect data on users under 13. The kids got protection, but data collection for the rest of us was left largely unregulated. The United States still doesn’t have a comprehensive data privacy law, although more legislation to protect kids continues to gain ground.

The past year has seen quite a few policymakers push their plans to protect kids online. The advisory Murthy issued last year was accompanied by an executive order from President Joe Biden that highlighted an “unprecedented youth mental health crisis” caused, in part, by the internet. It was also around that time that KOSA was gaining steam in the Senate. The bill was first introduced in 2022 and would hold platforms responsible for their effects on kids and give parents more ways to control how their kids use those tools. KOSA and its House equivalent now have enough votes to pass, although it’s not clear how popular it would be with young people.

There’s a good chance KOSA becomes the congressional action that Murthy needs for his warning label to have real teeth.

For decades, surgeons general have been issuing warnings about everything from tobacco and alcohol to violent video games and loneliness, but they amount to mere lip service without the introduction of new policies that actually force people to change their behavior. Saying people under 21 shouldn’t drink and making it illegal for people under 21 to drink are two very different things. Should it get signed into law, KOSA will introduce a slate of new regulations, including a mandate for platforms to prevent harm to users under 17. The law would also force affected platforms to limit personalized recommendations and features that encourage young people to spend more time on them. It does not address these issues for the rest of us.

Even if this new legislation fails and the surgeon general’s social media warning label never becomes a reality, there are more than a few grassroots movements aimed at limiting the negative impacts of social media on kids. One of the most prominent right now comes alongside the publication of the book The Anxious Generation by Jonathan Haidt, a social psychologist and NYU professor. Based on themes he explored in an expansive Atlantic feature published last year arguing that the introduction of smartphones and Instagram caused a youth mental health crisis, Haidt’s new book has spent 11 weeks at the top of the New York Times bestseller list and points to dozens of organizations trying to reverse the trend of phone-based childhood. If the government can’t fix the social media problem, these groups seem to argue, then maybe parents and schools will.

What we can’t know until it happens is just how much restricting access to social media sites will improve lives. Some, including the ACLU, argue that cutting off access to websites or platforms simply amounts to censorship, and that we should focus on putting checks on tech companies or providing parents with tools for responsible use rather than outright bans. Others worry that a warning label would actually have the opposite of the intended effect.

“We’ve certainly seen cases where warning labels, such as ‘disclaimers’ added to media images depicting retouched or ultra-thin models, actually worsen the issues (e.g., body image) they’re trying to address,” Jacqueline Nesi, a psychologist and professor at Brown University, wrote on Tuesday in her newsletter, Techno Sapiens. “I think we run that same risk here.”

And let’s not forget about what young people think. Social media is not necessarily a net negative for people under the age of 18, just as it’s not necessarily bad for all of society. It helps young people find friends and be creative. It can even be fun. In a 2023 Pew study, 80 percent of teens said social media helped them feel more connected with their peers, and 71 percent said it helped them through tough times.

“Kids are saying, they want the products, they want the benefits, but they don’t want the harms,” said Camille Carlton, policy director at the Center for Humane Technology. “They don’t want to feel like they can’t put it down. They want balance in their lives.”

You could probably say the same for the rest of us, too.
