Perceived Bias: GDGT Score
It is apparent to me that there is a strong bias toward Apple products and a bias against Android products in Critic reviews as presented by the gdgt score. What bothers me is not that the Critics all seem to love Apple, but that the gdgt score does not always reflect the reviews of the Users and Critics. In some cases the gdgt score doesn't even reflect the Critic scores alone (where there are no user reviews).
I appreciate gdgt as a useful resource for finding current, recent, and classic gadgets. I also like the social atmosphere of gdgt as a place where I can converse and commune with like-minded people. I have been a member since its debut, I've listened to the pre-gdgt podcasts, and I have read Peter and Ryan's Engadget writings from the beginnings of Engadget. I am an avid gadget geek and a user of the Android OS. I understand that gdgt has little direct influence on the reviews a critic publishes on an external site. However, if the gdgt review staff are showing a bias, then it makes me question the validity of the gdgt score, and even more so to question whether that score, as presented to new users and visitors, is valid.
Is gdgt showing an accurate depiction of the reviews of a device? Is the gdgt score truly representative of the rating of the device, leaving "nothing to chance"? If it is merely left up to the gdgt staff to determine the score, then why do we still have Critic and User reviews/scores?
- "This data [specs, critic and user reviews, in-house analysis] is fed into a proprietary algorithm that helps guide us towards a final score, which is then approved by our top editors." (ref: "What factors do you look at when deciding a gdgt score or recommendation?")
- " . . . When it's all done, we study the results and yell at each other for a few hours until we can agree on a gdgt score." (ref: "How is the gdgt score formulated?")
Even if the final score were entirely determined by an algorithm (again, which it is not), the algorithm is probably more complex than we imagine. For example, one would presumably weight certain aspects of a product review differently depending on their perceived relevance - so we already have one example where editorial control is going to exist within the algorithm itself - as well it should! Thus, a quick-and-dirty analysis of the scores from external sites is probably a waste of time.
Furthermore, what we don't see are the results of the in-house analysis and review; we can only speculate whether the output from that work is reduced to a series of scores that are then fed into their algorithm along with the other scores. We also don't know the relative weight given to the in-house review versus the various external reviews versus user reviews. To further complicate matters, they might attempt to normalize the scores from sources with a well-known bias - like deducting points from Walt Mossberg's reviews of Apple products. They also might give greater weight to some critics than to others. Thus there is no way for those of us on the outside to know exactly how mathy the final score actually is.
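To make the speculation above concrete, here is a minimal sketch of what such an aggregation algorithm *might* look like. Every weight, critic name, and bias offset below is an invented illustration - this is not gdgt's actual method, just one plausible shape for "normalize known biases, weight critics unequally, then blend in user and in-house scores":

```python
# Hypothetical sketch of a score-aggregation algorithm. All weights,
# critic names, and bias offsets are illustrative assumptions only.

def aggregate_score(critic_scores, user_score, in_house_score,
                    critic_weights, bias_offsets,
                    w_critics=0.5, w_users=0.2, w_in_house=0.3):
    """Combine several score sources into one 0-100 score."""
    # Normalize each critic's score for any known bias...
    adjusted = {name: score - bias_offsets.get(name, 0.0)
                for name, score in critic_scores.items()}
    # ...then average the critics, weighting trusted critics more heavily.
    total_weight = sum(critic_weights[name] for name in adjusted)
    critic_avg = sum(adjusted[name] * critic_weights[name]
                     for name in adjusted) / total_weight
    # Finally, blend the three sources with fixed relative weights.
    return (w_critics * critic_avg
            + w_users * user_score
            + w_in_house * in_house_score)

critics = {"CriticA": 95.0, "CriticB": 80.0}
weights = {"CriticA": 2.0, "CriticB": 1.0}   # CriticA trusted 2x as much
offsets = {"CriticA": 5.0}                    # deduct for a known lean
print(round(aggregate_score(critics, 85.0, 88.0, weights, offsets), 1))
# prints 86.7
```

Note how the final number depends entirely on the hidden weights and offsets: change any of them and the "objective" score moves, which is exactly why an outside observer can't reverse-engineer the formula from the published scores alone.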
As is all too often the case, the use of numerical scores creates an illusion of accuracy and precision that may not (and probably does not) exist. For example, it is very unlikely that one is going to hit on a perfect model from day one. One learns new things all the time and adjusts the model on the basis of new information. Hence, the algorithm is probably not static; it probably needs regular tweaking (i.e. editorial control). In addition, the subjective nature of what is "good" to consumers changes over time. Dealing with that means altering either the model or the latitude allowed in its use. Once again, editorial control is required.
It is possible that the gdgt staff are pulling numbers out of their collective backsides in support of a conspiracy to misinform users and push them toward buying Apple (and, to a lesser degree, Android) products - as you suggest. However, it is also possible that the scores are more deterministic than you think, and you simply lack the means to verify the outcome.
In the end, one either trusts the editors, or one doesn't.
As with most sites that want more traffic, it is important to present the content in an appealing manner. I think gdgt has very good visual appeal (it is a smartly designed and functional website), but bolstering the content with numbers that may not be directly related to the reviews hides the truth. A newcomer can figure this out by looking at the reviews more deeply, but on the surface they would never know.
I continue to return to gdgt to find specs, read and post discussions, and participate in the community. But when I see Apple products consistently taking center stage over any competition, knowing that the reviews are not reflected in the gdgt score, it perturbs me like a thorn I can't remove.
And in the review they snarkily point out that the Nexus 7 is "unprofitable." What has that got to do with a gadget review? Who cares what the gross margin of the company you are buying the product from is? If that matters, why not point out that, as acknowledged in its last earnings call, Apple's gross margin is as low as it has ever been?
And the iPad mini announcement? Actually showing a picture of the Nexus 7 and making a ludicrously unfair comparison. Since when does Apple give its competitors oxygen at a launch? I know it sounds trite, but Steve Jobs must be rolling in his grave.
Only an iSheep would give the latest generation iPad the marks that GDGT have.