Google sponsors research that outs faux product review groups, calculates 'spamicity' and more

Ever consulted a crowdsourced review for a product or service before committing your hard-earned funds to the cause? Have you wondered how legit the opinions you read really are? Well, it seems that help is on the way to uncover paid opinion spamming and KIRF reviews. Researchers at the University of Illinois at Chicago have released detailed calculations in the report Spotting Fake Reviewer Groups in Consumer Reviews -- an effort aided by a Google Faculty Research Award. Exactly how does this work, you ask? The GSRank (Group Spam Rank) algorithm analyzes the behaviors of both individual reviewers and the group as a whole to gather data on suspected spammers.

Factors such as content similarity, reviewing products early (when reviews are most effective), the ratio of the group's size to the total number of reviewers and the number of products the group has been in cahoots on are a few of the bits of data that go into the analysis. The report states, "Experimental results showed that GSRank significantly outperformed the state-of-the-art supervised classification, regression, and learning to rank algorithms." Here's hoping this research gets wrapped into a nice software application, but for now, review mods may want to brush up on their advanced math skills. If you're curious about the full explanation, hit the source link for the full-text PDF.
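To give a flavor of the factors listed above, here's a minimal sketch of how a few of those spam indicators could be computed. This is purely illustrative -- the function names, data shapes, and the 180-day "early review" window are our own assumptions, not taken from the paper:

```python
from itertools import combinations


def content_similarity(reviews):
    """Average pairwise Jaccard similarity of a group's review texts.
    High similarity hints at copied or templated spam reviews.
    (Illustrative metric; not the paper's exact formulation.)"""
    token_sets = [set(r.lower().split()) for r in reviews]
    pairs = list(combinations(token_sets, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)


def group_size_ratio(group_size, total_reviewers):
    """Fraction of a product's reviewers who belong to the suspected group.
    A large ratio means the group can dominate the product's rating."""
    return group_size / total_reviewers if total_reviewers else 0.0


def early_review_score(review_day, launch_day, window=180):
    """Score how soon after launch a review appeared: 1.0 on launch day,
    decaying linearly to 0.0 once `window` days have passed (assumed window)."""
    age = review_day - launch_day
    return max(0.0, 1.0 - age / window)
```

For example, two word-for-word identical reviews score a content similarity of 1.0, and a five-member group among fifty total reviewers yields a group size ratio of 0.1. The real GSRank model combines many more such behavioral signals over both individuals and groups.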