
Google’s Eric Schmidt Calls for a Check Against Hate Online but Avoids Encryption Talk

In an op-ed, the Alphabet chairman dances delicately between censorship and free expression.

Win McNamee | Getty Images

Over the weekend, Democratic front-runner Hillary Clinton publicly berated tech companies for not doing enough to combat ISIS, giving Silicon Valley a taste of what’s to come in the heated election cycle.

Eric Schmidt, Google’s former CEO and executive chairman of its parent Alphabet, penned a cloaked response to Clinton and other politicians in a New York Times opinion piece on Monday. The column leads with praise for the wonders of the global Web, before admitting it can be used for egregious harm. “Ever since there’s been fire, there’s been arson,” he writes.

What he does not write are the words “encryption” or “backdoor” — a tacit signal that the Internet giant is holding firm in its position against mounting political pressure.

Tech giants like Google and Facebook have said they are willing to scrub content from social media accounts and videos if it promotes terrorism or violence. But they’ve stopped short of acquiescing to the other rising political demand: that the companies shut down encrypted messaging and open a backdoor for governments to track user information.

Schmidt, long a proponent of radical openness on the Web and an opponent of backdoor access, offers some conciliatory language. He mentions some unnamed “tools” that should be built to sift out “hate and harassment” on social media sites and videos (without mentioning Google’s YouTube). In short, if there’s bad content, we should ditch it:

We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice. Without this type of leadership from government, from citizens, from tech companies, the Internet could become a vehicle for further disaggregation of poorly built societies, and the empowerment of the wrong people, and the wrong voices.

But his argument dances around the pivotal question. Schmidt doesn’t make it clear how Internet companies or governments determine what to “spell check” — or who should spell check. Earlier, in the same paragraph, he makes a claim that seems to contradict the one above:

Authoritarian governments tell their citizens that censorship is necessary for stability. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

As an Internet middleman, YouTube has had trouble in the past navigating the choppy waters of whether to remove content. On its site, it describes its hate speech policy this way: “There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their race.” The video site relies on its users to flag offending content.

This isn’t a simple issue. With terrorism set to be a central focus of the presidential election, it’s not one that tech companies can shake off. Schmidt’s final line — that the onus is on the collective “us” to build an Internet “free from coercion and conformity” — reads like a plea from Silicon Valley to Washington, D.C.

This article originally appeared on Recode.net.
