CNET had to correct most of its AI-written articles
Some of the 77 articles were also edited for plagiarism.
CNET has issued corrections for over half of the AI-written articles the outlet recently attributed to its CNET Money team. Following an internal audit prompted by the first report of an AI-written article with substantial errors, CNET Editor-in-Chief Connie Guglielmo says the publication identified additional stories that required correction. She claims a “small number” needed “substantial correction,” while others had “minor issues,” leading CNET to fix things like incomplete company names and language the outlet deemed vague. In all, of the 77 articles the publication now says were written as part of a trial of an “internally designed AI engine,” 41 feature corrections.
As The Verge points out, some articles feature corrections that note CNET “replaced phrases that were not entirely original.” In those instances, the outlet says its plagiarism checker either “wasn’t used properly” by the editor assigned to the story or failed to identify writing the AI tool had lifted from another source. Earlier this week, Futurism, the publication that first broke the news that CNET was quietly using AI to write financial literacy articles, said it found extensive evidence that the website’s AI-generated content showed “deep structural and phrasing similarities to articles previously published elsewhere.” Pointing to one piece on overdraft fees, Futurism noted how CNET’s version featured nearly identical phrasing to an earlier article from Forbes Advisor. It’s worth noting that AI, as it exists today, can’t be guilty of plagiarism. The software doesn’t know it’s copying something in violation of an ethical rule that humans apply to themselves. If anything, the failure falls on the CNET editors who were supposed to verify that the outlet’s AI tool was creating original content.
Despite the public setback, CNET appears set on continuing to use AI tools to write published content. “We've paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors,” Guglielmo said. “In the meantime, expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we're known for.”