Last March, Facebook’s highly advanced DeepFace facial recognition program was unveiled as a research-only project. But now the software has already begun to roll out for some users in the U.S., and it's expected to eventually reach all 1.1 billion Facebook users. One might think this spells the end of photographic anonymity, but the reality is less straightforward.
DeepFace learns what you look like from the site's vast quantity of uploaded and tagged photos: it was trained on a set of 4.4 million labeled faces. It can recognize you even if you're turned to the side, thanks to a clever 3D facial reconstruction step. In fact, it can recognize you from pretty much any angle; the software's 97.35% accuracy rate means it's about as good at facial recognition as you are. Only… it knows more people than you ever could in a lifetime. And the algorithm improves as the program continues to scan Facebook's vast, ever-expanding image libraries. The implications are both fascinating and frightening.
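The matching step behind all this is conceptually simple: the network turns each aligned face into a numeric "signature" vector, and two photos are judged to show the same person when their signatures are close enough. Here is a minimal sketch of that final comparison in Python; the embedding values and threshold are toy numbers for illustration, not DeepFace's actual outputs.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two face-signature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the two signatures are sufficiently similar."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy 3-number signatures standing in for the network's real (much longer) output
alice_photo_1 = [0.90, 0.10, 0.30]
alice_photo_2 = [0.85, 0.15, 0.28]
bob_photo     = [0.10, 0.90, 0.20]

print(same_person(alice_photo_1, alice_photo_2))  # True  (same face, two photos)
print(same_person(alice_photo_1, bob_photo))      # False (different people)
```

The hard part, of course, is producing signatures that stay stable across lighting, pose, and age, which is what the 4.4-million-face training set and the 3D alignment step are for.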
DeepFace is so powerful that it can automatically identify and tag you in any photo uploaded to the site, even if the image belongs to a stranger. Users will be notified whenever they're tagged in a photo and given the option of blurring out their faces, but it's still an alarming prospect. Even with these added privacy options, facial blurring brings to mind shows like Cops and the notion of "guilt by anonymity," not to mention certain social pressures. I mean, who wants to be the guy who opted to blur out his face in an otherwise good photo?
It’s important to note that Facebook isn’t the only organization developing this kind of technology, and is in fact only part of a wider movement toward improved facial identification software. The U.S. government, Google, and other players all have their own programs currently under development.
As facial recognition technology matures and becomes more widely available, there's no telling how it will be used or regulated. Erik Learned-Miller, a computer scientist working on a facial recognition program partially funded by the U.S. government, recently told Science Magazine that he worries about how governments could abuse software like this.
But reservations aside, it may be inevitable. Brian Mennecke, an information systems researcher at Iowa State University in Ames, offered similarly chilling words, bluntly telling the magazine that there's "no going back."
Maybe so, but perhaps context is key. The idea of potential abuse of photo identification programs is frightening, but for now we’re just looking at a social media site adding improvements to its photo-tagging feature. And it’s not as if Facebook hasn’t been using a less advanced facial recognition program for some time.
So should we fear Facebook's DeepFace technology, or the future that it indicates? No, seriously—we really want to know.