Some of our regular visitors probably noticed a shift in our point-and-shoot camera scores this week. The new ratings reflect a handful of behind-the-scenes updates we just instituted, designed to better serve our readers.
The hard data that we collect in our lab tests never changes, but the wants and needs of our readers evolve with time, as does the technology in new digital cameras. Sometimes it makes sense to adjust the importance of some test results within our score sheet. That’s why we use a dynamic scoring system that allows us to institute site-wide changes, quickly and consistently, across every camera in our database.
We made one of those site-wide changes this week when we adjusted the weightings for a handful of tests and ratings. Video sharpness and motion scores are more important than they used to be, for example, while direct-print options and video color are no longer considered at all. Still-image quality scores were not affected, and DSLR ratings haven't been adjusted yet.
We simply tweaked a few multipliers in our score sheet, and the new weightings were applied instantly and uniformly to all of the point-and-shoots we’ve tested since 2009. This caused a few cameras to move around in the rankings; the Sony HX100V, for example, leapfrogged the Canon SX40 HS, mostly on the strength of its video system.
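The mechanism behind that kind of change can be sketched in a few lines of Python. This is only an illustration of how a multiplier-based score sheet re-ranks cameras when weights change; every name, weight, and test result below is hypothetical, not taken from our actual score sheet:

```python
# Hypothetical weights on raw lab results. Editing a multiplier here
# re-scores every camera in the database uniformly; the raw test data
# itself never changes.
WEIGHTS = {
    "video_sharpness": 1.5,      # raised in an update like this week's
    "video_motion": 1.2,         # raised
    "still_image_quality": 2.0,  # unchanged
    "direct_print": 0.0,         # no longer considered
    "video_color": 0.0,          # no longer considered
}

def overall_score(test_results):
    """Weighted average of a camera's raw lab results."""
    total_weight = sum(WEIGHTS.values())
    weighted = sum(test_results.get(name, 0.0) * w for name, w in WEIGHTS.items())
    return weighted / total_weight

# Made-up raw results for two cameras, to show a re-ranking:
cameras = {
    "Sony HX100V": {"video_sharpness": 8.5, "video_motion": 8.0,
                    "still_image_quality": 7.0},
    "Canon SX40 HS": {"video_sharpness": 7.0, "video_motion": 7.5,
                      "still_image_quality": 7.5},
}
ranked = sorted(cameras, key=lambda c: overall_score(cameras[c]), reverse=True)
```

With these invented numbers, the camera with the stronger video results climbs past the one with slightly better stills once the video multipliers go up, which is the kind of shuffle described above.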
The test results are the same as they’ve always been, but standards and expectations have changed in the past three years, and we've systematically adjusted our scores to reflect that.
We’ve also adjusted the way we scale our overall scores. This change is more obvious; cameras with mediocre ratings got a bump of nearly two points out of 10. Cameras at the top of the rankings earned one or two extra tenths of a point. Our reviews and assessments are entirely unchanged, and this re-scaling had no effect on relative rankings.
Basically, we redefined the bottom end of the scale. We rate every camera that we've reviewed on a scale of 0 to 10. The camera with the best overall score always earns a 10, and until a few days ago, the camera with the lowest overall score earned a 0. But even the worst camera has one redeeming quality, so it didn’t seem fair to totally write it off.
As of this week, the bottom end of our scale is anchored by a Hypothetical Worst Camera. We found the worst real-world ratings for each performance, design, and usability metric that we’ve recorded since 2009, and combined them all into one terrible, thankfully fictional product. Imagine the worst parts of every cheap point-and-shoot rolled into one box of bolts. Mercifully, that camera doesn’t actually exist, but it is based on the worst scores we’ve seen out in the real world.
By using a Hypothetical Worst Camera for the bottom of the scale, it’s easier to show that even the really bad real-world cameras might have a few worthwhile characteristics.
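Conceptually, the re-anchoring works something like the sketch below: take the worst real-world value ever recorded for each metric, combine them into one fictional camera, and use that composite as the zero point of the 0-to-10 scale. The function names and all the numbers are hypothetical illustrations, not our real data:

```python
def hypothetical_worst(cameras):
    """Combine the worst real-world value recorded for each metric
    into one fictional 'Hypothetical Worst Camera'."""
    worst = {}
    for metrics in cameras.values():
        for name, value in metrics.items():
            worst[name] = min(value, worst.get(name, value))
    return worst

def scaled_score(raw, worst_raw, best_raw):
    """Map a raw overall score onto 0-10. The best real camera still
    earns a 10, but the zero point is now the Hypothetical Worst
    Camera rather than the worst real camera we've tested."""
    return 10 * (raw - worst_raw) / (best_raw - worst_raw)

# Illustrative raw overall scores (made-up numbers):
best_real = 90.0          # top-ranked camera
worst_real = 40.0         # worst real camera we've tested
hypothetical = 20.0       # composite of the worst per-metric values

worst_cameras_new_score = scaled_score(worst_real, hypothetical, best_real)
```

Because the fictional composite sits below any real camera, the worst real camera lands somewhere above zero instead of being pinned to the bottom of the scale, while the best still earns a 10.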
The Lytro is a perfect example. It's the worst camera we've tested, but it now earns a 3.0 instead of a 0. It contributed a few of the awful scores that went into the Hypothetical Worst Camera, but it also clocked respectable scores in a number of categories. By virtue of being the world's first light field camera, it has a good reason to exist, even if the image quality is about as good as a cell phone from 2008. The new scaling system gets that point across much more effectively.
We’re still fiddling with a few minor details in the system, so you might see a few tenths of a point come and go here and there. But the big changes are done, and we think that these new ratings do a better job communicating a camera’s value on a scale that most people can relate to, as well as how the cameras relate to each other.
Thanks for reading,