MIT Student Uses Math to Take Portraits Like a Master
New algorithm could let you ape Avedon and Arbus with ease.
A partnership between MIT, Adobe, and the University of Virginia has yielded a piece of software designed to take any old portrait and turn it into an eye-catching work of art. The paper, published ahead of SIGGRAPH 2014 (the annual conference for digital imaging and interactive software), details how the new algorithm applies famous photographic styles to ordinary smartphone photos.
So what makes these famous portraits stand out? Often, it's a creative choice of lighting and editing techniques. Diane Arbus, for example, seems to capture a languid, reserved expression in her subjects, often using soft lighting that lets them melt into the background.
Richard Avedon's portraits, however, often isolate a subject against a solid background, using harsh lighting to manipulate the way light and shadow fall on a person's face. His contrast-heavy portraits seem to hint at a deeper conflict within his subjects, with his lighting and editing style often calling attention to the contours, curves, wrinkles, and shape of their faces.
Led by grad student YiChang Shih, this project interprets the characteristics of each famous photographer's work, then massages your photo to look similar—no fancy lighting required. This "local transfer" method uses facial recognition to determine where light and shadow fall on the subject's face, rather than relying solely on global adjustments like contrast and exposure. The program can then remap highlights and shadows while preserving the subject's unique features.
In simpler terms, what Instagram does for landscapes, this project can do specifically for faces. Rather than a simple filter that mimics the look of film and affects the image globally, this project analyzes your subject's face and applies the same techniques as the masters to achieve similar results. The only catch appears to be that the original photo needs relatively flat, even lighting to produce the best results.
The finished project even analyzes fine details in order to fully realize a natural-looking portrait. “You want the small scale — which corresponds to face pores and hairs — to be similar, but you also want the large scale to be similar — like nose, mouth, lighting,” Shih told the MIT News Office.
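The multiscale idea Shih describes can be loosely illustrated in code: split both the example portrait and your photo into frequency bands (fine detail like pores and hair at small scales, lighting and facial structure at large scales), then rescale each band of your photo so its local strength matches the example's. The sketch below is a simplified, single-channel illustration of that concept, not the authors' actual implementation—the real system also warps the example to the input face using facial landmarks, which we assume here has already been done.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_stack(img, sigmas=(2, 8, 32)):
    """Split an image into detail bands plus a low-pass residual."""
    bands, current = [], img.astype(float)
    for s in sigmas:
        low = gaussian_filter(current, s)
        bands.append(current - low)   # detail at this scale
        current = low
    bands.append(current)             # residual: overall lighting
    return bands

def transfer_style(inp, example, sigmas=(2, 8, 32), eps=1e-4):
    """Rescale each band of `inp` so its local energy matches `example`.

    Assumes `inp` and `example` are same-size, pre-aligned grayscale arrays.
    """
    in_bands = laplacian_stack(inp, sigmas)
    ex_bands = laplacian_stack(example, sigmas)
    out = np.zeros_like(inp, dtype=float)
    for i, s in enumerate(sigmas):
        # Local energy: smoothed squared band response at each pixel.
        e_in = gaussian_filter(in_bands[i] ** 2, s) + eps
        e_ex = gaussian_filter(ex_bands[i] ** 2, s) + eps
        gain = np.sqrt(e_ex / e_in)
        out += in_bands[i] * gain     # boost or mute detail region by region
    out += ex_bands[-1]               # adopt the example's coarse lighting
    return out
```

Because the gain is computed per pixel at every scale, harsh Avedon-style contrast can be pushed into the midtones of a face while fine skin texture stays recognizably the subject's own.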
It’s no surprise that Adobe’s Disruptive Innovation Group is already looking to refine Shih’s software and turn it into a feature for consumers. Without a doubt, intelligent analysis engines like this will power the next wave of photo editing tools. While Adobe’s own impressive Content-Aware Fill was an important step for power users, a tool like this algorithm could be killer for the smartphone-toting masses.