
‘Machine Learning Is Hard’: Google Photos Has Egregious Facial Recognition Error

A programmer flagged a nasty problem with Google’s facial recognition. Thankfully, its engineers responded promptly.


In May, Google unfurled its new Photos app as a showcase of its machine learning capabilities: a service that stores and catalogues your images and can, on its own, pick out buildings, landscapes, animals, even abstract events like birthdays.

As users get their hands on the app, though, it’s evident Photos is far from perfect. Two days ago, Jacky Alciné, an African-American programmer based in New York, flagged a flagrant error on Photos: It had tagged him and his friend as “gorillas.”

To Google’s credit, its human ambassadors responded swiftly. Yonatan Zunger, an engineer and “chief architect” of Google+, the social service from which Photos was spun off, replied to Alciné on Twitter within roughly 90 minutes, noting that he had alerted the Photos team.

Zunger checked in with Twitter missives the following day; Alciné thanked him, and noted that the erroneous label was removed.

Google has pushed further into artificial intelligence than most of its rivals, infusing the technology into speech recognition, photo recognition and natural language processing. (Its trippy neural network imagery has yet to be baked into consumer products.) The company is frank that its machine learning still has a way to go. In his Twitter exchange, Zunger noted that the company was still working on “long-term fixes” for linguistics and image recognition for “dark-skinned faces.”

Google put out this conciliatory statement: “We’re appalled and genuinely sorry that this happened. We are taking immediate action to prevent this type of result from appearing. There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”
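Google’s “immediate action” was, by its own description, about preventing the result from appearing rather than retraining the underlying model. A minimal sketch of that kind of stopgap, assuming a hypothetical classifier that returns (label, confidence) pairs, is a blocklist filter applied to predictions before they reach the user:

```python
# Sketch of a label-blocklist mitigation, the kind of stopgap Google's
# statement describes: suppress specific labels in the output rather
# than fix the model itself. All names here are hypothetical.

BLOCKLIST = {"gorilla", "chimpanzee", "monkey"}  # assumed suppressed labels


def filter_labels(predictions, blocklist=BLOCKLIST):
    """Drop any (label, confidence) prediction whose label is blocklisted.

    `predictions` is a list of (label, confidence) tuples, as a
    hypothetical image classifier might return them.
    """
    return [
        (label, conf)
        for label, conf in predictions
        if label.lower() not in blocklist
    ]


# Example: raw model output containing one unacceptable label
raw = [("person", 0.91), ("gorilla", 0.64), ("outdoors", 0.55)]
safe = filter_labels(raw)
# `safe` now contains only the "person" and "outdoors" predictions
```

A filter like this trades recall for safety: the model never surfaces the label, but it also never learns to apply it correctly, which is why the statement distinguishes the immediate fix from the longer-term work on image labeling.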

Zunger, for his part, put it more bluntly.

This article originally appeared on Recode.net.
