fake reviews
Latest
Amazon convinces Apple to remove review analyzer Fakespot from the App Store
Fakespot, an app that analyzes Amazon reviews to determine which ones are fake, is no longer available for iOS.
UK opens investigation into Amazon and Google over fake reviews
The UK's Competition and Markets Authority has launched an investigation into Amazon and Google's handling of fake reviews.
Sunday Riley settles with FTC over fraudulent skincare reviews
The FTC's fight against fake product reviews has extended to the world of hype-driven cosmetics. Skincare maker Sunday Riley has settled with the FTC over allegations that it ordered employees to post fake reviews on Sephora's website in a bid to boost sales. Managers and founder Sunday Riley herself reportedly created fake accounts to post reviews between 2015 and 2017, and urged employees to do the same. They also asked staff to dislike negative reviews to get them pulled, according to the FTC, and even resorted to using VPNs to mask their identities after Sephora spotted earlier fake reviews.
Senators want to know why Amazon Choice recommends junk
Amazon has never revealed how products receive its iconic black "Amazon's Choice" label, which gets them top billing on the e-commerce platform. US senators Bob Menendez (D-NJ) and Richard Blumenthal (D-CT) sent a letter today to Amazon CEO Jeff Bezos demanding more details on the elite categorization. The pair raised concerns that the label may be inadvertently tricking consumers into buying inferior products propped up by fraudulent reviews.
FTC cracks down on fake Amazon reviews in landmark case
The Federal Trade Commission (FTC) has resolved its first-ever case over paid fake reviews on a retail website. In the agency's complaint, it accused Amazon seller Cure Encapsulations Inc. and its owner Naftula Jacobowitz of paying amazonverifiedreviews.com to write and post fake feedback for its weight-loss product. Further, the FTC accused the company of making unsubstantiated claims for the garcinia cambogia weight-loss supplements it used to sell.
Researchers out faux product review groups with a lot of math and some help from Google
Ever consulted a crowdsourced review for a product or service before committing your hard-earned funds to the cause? Have you wondered how legit the opinions you read really are? Well, it seems that help is on the way to uncover paid opinion spamming and KIRF reviews. Researchers at the University of Illinois at Chicago have released detailed calculations in the report Spotting Fake Reviewer Groups in Consumer Reviews -- an effort aided by a Google Faculty Research Award. Exactly how does this work, you ask? Using the GSRank (Group Spam Rank) algorithm, behaviors of both individual reviewers and the group as a whole are used to gather data on the suspected spammers. Factors such as content similarity, reviewing products early (to be most effective), the ratio of the group's size to the total number of reviewers and the number of products the group has been in cahoots on are a few bits of data that go into the analysis. The report states, "Experimental results showed that GSRank significantly outperformed the state-of-the-art supervised classification, regression, and learning to rank algorithms." Here's hoping this research gets wrapped into a nice software application, but for now, review mods may want to brush up on their advanced math skills. If you're curious about the full explanation, hit the source link for the full-text PDF.
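The paper's actual GSRank formulation is considerably more involved, but the kinds of group-level indicators listed above can be sketched in a few lines. This is a toy illustration only -- the `Review` class, the equal weighting, and the thresholds are all hypothetical, not the researchers' model:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Review:
    reviewer: str
    product: str
    text: str
    days_after_launch: int  # how early the review appeared

def content_similarity(a: str, b: str) -> float:
    # Crude pairwise text similarity; the paper uses more robust measures.
    return SequenceMatcher(None, a, b).ratio()

def group_spam_score(group_reviews, total_reviewers: int, early_window: int = 30) -> float:
    """Toy composite of the indicator types described above.
    Equal weighting is a made-up simplification, not the GSRank formula."""
    reviewers = {r.reviewer for r in group_reviews}
    products = {r.product for r in group_reviews}
    # 1. Average pairwise content similarity within the group.
    texts = [r.text for r in group_reviews]
    pairs = [(a, b) for i, a in enumerate(texts) for b in texts[i + 1:]]
    sim = sum(content_similarity(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
    # 2. Fraction of the group's reviews posted soon after product launch.
    early = sum(r.days_after_launch <= early_window for r in group_reviews) / len(group_reviews)
    # 3. Group size relative to all reviewers of these products.
    size_ratio = len(reviewers) / total_reviewers
    # 4. How many products the group has reviewed together (capped at 5).
    collusion = min(len(products) / 5.0, 1.0)
    return (sim + early + size_ratio + collusion) / 4.0
```

A group posting near-identical text on several products right after launch scores higher than reviewers leaving varied, later reviews -- which is the intuition the indicators above encode.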
Researchers developing software to finger phony reviews
Opinion spam isn't a new version of your favorite meat treat, repackaged for discerning canned ham consumers. According to a team of researchers at Cornell University, it's a growing problem affecting user-generated review sites, and the gang is working to stop it dead in its tracks with a new program aimed at tracking down fake reviews. That software, which has been tested on reviews of Chicago hotels, uses keyword analysis and word combination patterns to bust opinion spammers -- fakers, for example, use more verbs than their truth-telling counterparts. The as-yet-unnamed program apparently has the ability to spot deceptive opinions with 90 percent accuracy, but is currently only trained on hospitality in the Windy City. Ultimately, the group sees the software as a filter for sites like Amazon, but, for now, you'll just have to trust the old noggin to do the detecting for you.
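The Cornell system is a trained classifier over word patterns, but the basic signal it exploits -- deceptive reviews leaning on verbs while truthful ones carry concrete spatial detail -- can be sketched as a simple feature extractor. The word lists and rate names here are purely illustrative assumptions, not the researchers' actual features:

```python
import re

# Tiny hand-picked word lists -- illustrative only, not the Cornell model's vocabulary.
VERBS = {"was", "were", "went", "stayed", "loved", "enjoyed", "arrived", "felt", "recommend"}
SPATIAL = {"bathroom", "floor", "lobby", "bed", "window", "location", "room"}

def deception_features(review: str) -> dict:
    """Per-word rates of the signal types described above: verb-heavy text
    hints at fabrication, spatial detail hints at a real stay."""
    words = re.findall(r"[a-z']+", review.lower())
    n = max(len(words), 1)
    return {
        "verb_rate": sum(w in VERBS for w in words) / n,
        "spatial_rate": sum(w in SPATIAL for w in words) / n,
        "exclaim_rate": review.count("!") / n,
    }
```

A real system would feed features like these into a trained classifier rather than thresholding them by hand, but the extractor shows where the "fakers use more verbs" observation comes from.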