J.D. Power explains how Samsung eked out a win over the iPad in customer satisfaction survey... kind of

Last week we reported that a J.D. Power tablet satisfaction survey bizarrely gave Samsung the top prize. I say "bizarrely" because if you look at the chart below, Apple's iPad bested Samsung tablets in every single category except for cost. Keep in mind that each category is afforded the same weight.

Also strange is that Samsung managed to attain a 5/5 score in overall satisfaction while failing to receive a 5/5 in any of the metrics used to compute that final score.

Naturally, many suspicious eyebrows were raised in the wake of J.D. Power's inexplicably tabulated rankings.

9to5Mac and TechCrunch both reached out to J.D. Power asking for a bit of much-needed clarification.

J.D. Power's response reads:

It's important to note that the award is given to the brand that has the highest overall score. In this study, the score is composed of customers' ratings across five key dimensions, or factors. To convey the relative rank of brands within each of these five dimensions, we provide consumers with PowerCircle Rankings, which denote the brand that has the highest score within each factor, regardless of how much higher that score is. In the case of Apple, although it did score higher on four of the five factors measured, its scores were only marginally better than Samsung's. At the same time, however, Apple's score on cost was significantly lower than that of all other brands. As such, even though its ratings on the other factors were slightly higher than Samsung's, Apple's performance on cost resulted in an overall score lower than Samsung's.

So essentially, Apple narrowly bested Samsung in the categories it won and fell far behind in the lone category it lost: cost. While that makes sense in theory, the chart above suggests the scoring wasn't at all close on factors such as "performance" and "ease of use."
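To see how that arithmetic can play out, here's a minimal sketch in Python with invented numbers; the factor names and index scores below are stand-ins, since J.D. Power hasn't published the underlying figures. With five equally weighted factors, four narrow wins are easily erased by one big loss:

    # Hypothetical scores on a 1,000-point index; none of these figures
    # come from J.D. Power. They only illustrate the mechanism.
    FACTORS = ["performance", "ease of use", "features", "styling/design", "cost"]

    apple   = {"performance": 870, "ease of use": 865, "features": 860,
               "styling/design": 868, "cost": 790}
    samsung = {"performance": 862, "ease of use": 860, "features": 855,
               "styling/design": 860, "cost": 865}

    def overall(scores):
        # Equal weight per factor, as the survey methodology describes.
        return sum(scores[f] for f in FACTORS) / len(FACTORS)

    for f in FACTORS:
        print(f"{f:15} Apple {apple[f]}  Samsung {samsung[f]}  margin {apple[f] - samsung[f]:+d}")

    print(f"Overall: Apple {overall(apple):.1f}, Samsung {overall(samsung):.1f}")
    # Apple wins four factors by single digits but loses cost by 75 points,
    # so Samsung comes out ahead overall (860.4 vs. 850.6).

The per-factor circle icons round those margins away entirely, which is exactly how a brand can sweep four of five circles on the chart and still lose the award.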

All in all, this study is still pretty bizarre. I mean, why release a chart that doesn't accurately reflect the actual scoring? All this does is undermine the very reason for conducting such surveys in the first place, namely giving consumers data to help inform their purchasing decisions.