korbendalis

November 2nd 2012 11:50 pm

Perceived Bias: GDGT Score

Rather than just complain about the numbers, I decided to actually read the first five reviews of the Apple iPad Gen 4, iPad Gen 3, iPad mini, Samsung Nexus 10, Asus Nexus 7, and Kindle Paperwhite. Then I looked at the Reviews FAQ that Ryan Block posted to see how gdgt arrives at the gdgt score. I see that the Critic reviews do not tie into the gdgt scoring system (or if they do, I cannot see the score; I am instead directed to the Critic's own site). I also see that the score is not a direct average of the Critic and User scores.

It is apparent to me that the Critic reviews, as presented by the gdgt score, show a strong bias toward Apple products and a weaker bias against Android products. What bothers me is not that the Critics all seem to love Apple, but that the gdgt score does not always reflect the reviews of the Users and Critics. In some cases the gdgt score doesn't even seem to reflect the Critics' scores alone (where there are no User reviews).

I appreciate gdgt as a useful resource for finding current, recent, and classic gadgets. I also like the social atmosphere of gdgt as a place where I can converse and commune with like-minded people. I have been a member since its debut, I listened to the pre-gdgt podcasts, and I have read Peter and Ryan's Engadget writing from the beginnings of Engadget. I am an avid gadget geek and a user of the Android OS. I understand that gdgt has little direct influence on the reviews a Critic posts on an external site. However, if the gdgt review staff are showing a bias, it makes me question the validity of the gdgt score, and, more so, question whether the score presented to new users and visitors is valid.

Is gdgt showing an accurate depiction of the reviews of a device? Is the gdgt score truly representative of the rating of the device, leaving "nothing to chance"? If it is ultimately left up to the gdgt staff to determine the score, then why do we still have Critic and User reviews/scores?


5 replies
slewis69au

Unfortunately I agree. It seems that, according to the vast majority of US reviewers, Apple can do no wrong. This device should be a flop: it's inferior to its competition in almost all meaningful ways and costs up to twice as much.

And in the review they snarkily point out that the Nexus 7 is "unprofitable." What has that got to do with a gadget review? Who cares about the gross margin of the company you're buying the product from? If that matters, why not point out that, as acknowledged on their last earnings call, Apple's gross margin is as low as it has ever been?

And the iPad mini announcement? Actually showing a picture of the Nexus 7 and doing a lame, ludicrously unfair comparison. Since when does Apple give its competitors oxygen at a launch? I know it sounds trite, but Steve Jobs must be rolling in his grave.

Only an iSheep would give the latest-generation iPad the marks that gdgt have.
2 likes
SentientD

The bias is no longer perceived. This is gross.
2 likes
gehill2000

Unfortunately I agree with korbendalis. An iPhone user since the first gen, I came to the site hoping to find objective reviews for devices from other manufacturers. Instead, the scoring I observed seemed so incredibly tilted in favor of Apple that I wonder if gdgt is nothing more than a veiled, yet novel and cleverly social approach to Apple marketing and business development.
8 likes
MtnSloth

I'm not an employee or an editor... so this is just my take on gdgt scores. In case others need a link to see what korbendalis is referring to, see gdgt.com/help/, specifically the "gdgt recommendations & score" section. To summarize:
  • "This data [specs, critic and user reviews, in-house analysis] is fed into a proprietary algorithm that helps guide us towards a final score, which is then approved by our top editors." (ref: "What factors do you look at when deciding a gdgt score or recommendation?")
  • " . . . When it's all done, we study the results and yell at each other for a few hours until we can agree on a gdgt score." (ref: "How is the gdgt score formulated?")
Thus the score is informed by an algorithm, but in the end the awarded score is an editorial decision; at least, that is my interpretation of the posted information.

Even if the final score were entirely determined by an algorithm (again, it is not), the algorithm is probably more complex than we imagine. For example, one would presumably weight certain aspects of a product review differently depending on their perceived relevance, so we already have one example where editorial control is going to exist within the algorithm itself (as well it should!). Thus, a quick-and-dirty analysis of the scores from external sites is probably a waste of time.
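To make that concrete, here is a purely hypothetical sketch (in Python) of what per-aspect weighting might look like. Every aspect name and weight below is my own invention, not anything gdgt has published:

```python
# Purely hypothetical sketch of per-aspect weighting. The aspect names
# and weights are invented for illustration only.

ASPECT_WEIGHTS = {
    "display": 0.30,      # an editor decided screens matter most on tablets
    "battery": 0.25,
    "performance": 0.25,
    "speakers": 0.10,
    "accessories": 0.10,
}

def weighted_review_score(aspect_scores):
    """Collapse per-aspect ratings (0-10) into a single number."""
    total = sum(ASPECT_WEIGHTS[a] * s for a, s in aspect_scores.items())
    weight = sum(ASPECT_WEIGHTS[a] for a in aspect_scores)
    return round(total / weight, 1)

# A review that loves the display but pans the speakers:
print(weighted_review_score({"display": 9.0, "battery": 8.0,
                             "performance": 8.5, "speakers": 5.0,
                             "accessories": 7.0}))
```

Even in this toy version, choosing the weights is an editorial act; the math just carries the decision through.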

Furthermore, what we don't see are the results of the in-house analysis and review, and we can only speculate whether the output from that work is reduced to a series of scores that are then fed into the algorithm along with the others. We also don't know the relative weight given to the in-house review versus the various external reviews versus user reviews. To further complicate matters, they might attempt to normalize the scores from sources with a well-known bias, like deducting points from Walt Mossberg's reviews of Apple products. They also might give greater weight to some critics than to others. Thus there is no way for those of us on the outside to know exactly how mathy the final score actually is.
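If they do normalize, it might look something like the following; again, this is pure guesswork on my part, with invented critic names, trust weights, and bias corrections:

```python
# Also hypothetical: correcting critic scores for an estimated bias and
# weighting critics unequally. Every name and number here is made up.

CRITIC_WEIGHT = {"BigReviewSite": 1.0, "SmallBlog": 0.4}   # trust levels
CRITIC_BIAS = {"BigReviewSite": 0.5, "SmallBlog": 0.0}     # estimated inflation

def aggregate_critics(critic_scores):
    """Weighted mean of bias-corrected critic scores on a 0-10 scale."""
    corrected = sum(CRITIC_WEIGHT[c] * (s - CRITIC_BIAS[c])
                    for c, s in critic_scores.items())
    total_weight = sum(CRITIC_WEIGHT[c] for c in critic_scores)
    return corrected / total_weight

# The big site loves the product; the small blog is lukewarm:
print(round(aggregate_critics({"BigReviewSite": 9.0, "SmallBlog": 7.5}), 1))
```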

As is all too often the case, the use of numerical scores creates an illusion of accuracy and precision that may not (and probably doesn't) exist. For example, it is very unlikely that one will hit on a perfect model from day one. One learns new things all the time and adjusts the model on the basis of new information. Hence, the algorithm is probably not static; it probably needs regular tweaking (i.e., editorial control). In addition, the subjective nature of what is "good" to consumers changes over time. Dealing with that must alter either the model or the latitude in how the model is used. Once again, editorial control is required.
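One more toy example of why that matters: the same inputs produce a different precise-looking score after a single tweak to one knob. The blend function and all the numbers are, once again, entirely made up:

```python
# Hypothetical: one tunable knob, two confident-looking scores.

def blended_score(user_avg, critic_avg, critic_weight):
    """Blend user and critic averages (0-10) with one tunable weight."""
    return critic_weight * critic_avg + (1 - critic_weight) * user_avg

print(round(blended_score(7.0, 9.0, critic_weight=0.7), 1))  # -> 8.4
print(round(blended_score(7.0, 9.0, critic_weight=0.5), 1))  # after a "tweak": 8.0
```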

It is possible that the gdgt staff are pulling numbers out of their collective backsides in support of a conspiracy to misinform users and push them toward buying Apple (and, to a lesser degree, Android) products, as you suggest. However, it is also possible that the scores are more deterministic than you think, and you simply lack the means to verify the outcome.

In the end, one either trusts the editors, or one doesn't.
5 likes
korbendalis

I'm at a crossroads on this. I see from what you have written that the number probably should not be taken as definitive. But I believe it is also reasonable to think that the prominent number posted alongside every product could be seen as a validation or rejection of its value.

As with most sites that want more traffic, it is important to present the content in an appealing manner. I think gdgt has very good visual appeal (it is a smartly designed and functional website), but bolstering the content with numbers that may not be directly related to the reviews hides the truth. A newcomer could figure this out by digging into the reviews, but on the surface they would never know.

I continue to return to gdgt to find specs, read and post discussions, and participate in the community. But when I see Apple products consistently taking center stage over the competition, knowing that the reviews are not reflected in the gdgt score, it perturbs me like a thorn I can't remove.
1 like