When Reviewed.com tests digital cameras, it's straightforward enough to count frames per second, measure color accuracy, examine noise, and grade sharpness. In our controlled lab environment, direct measurements and the images on the card tell most of those stories. But for a camera's fuzzier characteristics, like image stabilization, we had to get a bit creative to put them to the test.
You can't just shake the camera randomly, so first we had to find out exactly how a person shakes. An object in space can move with six degrees of freedom (three linear and three rotational), so we figured the average camera shake is some combination of these motions.
Thanks to modern technology, it's easy to measure this complex shaking motion. Since most of today's mobile devices feature both gyroscopes and linear accelerometers, our resident Ph.D., Timur Senguen, wrote an Android app to record the data from these sensors over a ten-second interval. We passed a phone around the office and recorded the shaking habits of 27 people to find the average human shakiness.
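To give a feel for what that app computes, here's a minimal sketch of boiling a gyroscope recording down to a single shakiness number. The data format and the RMS metric are our assumptions for illustration, not the actual app's internals; the synthetic recording simply stands in for a real phone's sensor stream.

```python
import math
import random

def shakiness_rms(samples):
    """Root-mean-square angular velocity over a recording.

    `samples` is a list of (pitch_rate, yaw_rate, roll_rate) tuples in
    rad/s, the kind of values a phone gyroscope reports at a fixed rate.
    """
    total = sum(p * p + y * y + r * r for p, y, r in samples)
    return math.sqrt(total / len(samples))

# Simulate a ten-second recording at 100 Hz with small random jitters
# standing in for a real person's hand tremor.
random.seed(0)
recording = [(random.gauss(0, 0.02),
              random.gauss(0, 0.02),
              random.gauss(0, 0.005)) for _ in range(1000)]
print(round(shakiness_rms(recording), 4))
```

Averaging a score like this over 27 people is one simple way to arrive at a "typical human shakiness" figure.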
It turns out people are excellent at controlling linear motion and steering-wheel-type rotation (roll), so in our hands it's mostly the camera's pitch and yaw that an image stabilization system needs to correct. Not surprisingly, the frequency of shakes plotted against their severity forms an exponential decay curve. In other words: there are lots of little shakes but only a few big ones.
To effectively test a camera's stabilization abilities, Timur built a shaking rig that simulates the two significant rotational motions using the human shake data we collected. In the setup, a controller board drives small motors that jostle the camera, putting its image stabilization system to work.
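A rig like this essentially replays a recorded pitch/yaw trace as motor commands. Here's a hypothetical sketch of that conversion; the step calibration constant and the command format are invented for illustration, not taken from the actual controller firmware.

```python
def to_motor_steps(angle_deg, steps_per_degree=10):
    """Convert a target angle into a whole number of stepper-motor steps.

    `steps_per_degree` is a made-up calibration constant; a real rig
    would derive it from its motors and gearing.
    """
    return round(angle_deg * steps_per_degree)

def replay(pitch_yaw_trace):
    """Turn a recorded (pitch, yaw) trace, in degrees, into per-axis
    step deltas: the commands a controller board would send each tick."""
    commands = []
    prev_pitch, prev_yaw = 0.0, 0.0
    for pitch, yaw in pitch_yaw_trace:
        dp = to_motor_steps(pitch) - to_motor_steps(prev_pitch)
        dy = to_motor_steps(yaw) - to_motor_steps(prev_yaw)
        commands.append((dp, dy))
        prev_pitch, prev_yaw = pitch, yaw
    return commands

# A tiny four-sample trace: the rig nudges the camera by the
# difference between successive recorded positions.
trace = [(0.0, 0.0), (0.5, -0.2), (1.1, 0.3), (0.4, 0.1)]
print(replay(trace))  # [(0, 0), (5, -2), (6, 5), (-7, -2)]
```

Driving the motors with deltas rather than absolute positions keeps each tick's command small and lets the rig follow the recorded wobble smoothly.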
Finally, we checked the shaking rig against the same Android app. According to the data, the rig successfully mimics our unsteady hands. Modeling and testing complex aspects of product performance may not be simple, but it isn't always as hard as it looks. For now, the test is in beta and doesn't factor into our scoring. But once we've collected a wide pool of stabilization data, we'll have a scoring algorithm that's normalized against a broad range of digital cameras.