
OpenAI published the tool that writes disturbingly believable fake news

It originally said the AI was too dangerous to release in full.

In February, OpenAI announced that it had developed an algorithm that could write believable fake news and spam. Deciding that power was too dangerous to unleash all at once, OpenAI planned a staged release, putting out progressively larger pieces of the technology so it could analyze how each was used. Now, OpenAI says it has seen "no strong evidence of misuse," and this week, it published the full AI.

The AI, GPT-2, was originally designed to answer questions, summarize stories and translate texts. But researchers came to fear that it could be used to pump out large volumes of misinformation. Instead, it has mostly been put to more benign uses, like powering text adventure games and writing stories about unicorns.

Because the scaled-back versions have not led to widespread misuse, OpenAI has released the full GPT-2 model. In its blog post, OpenAI says it hopes the full version will help researchers develop better detection models for AI-generated text and root out biases in the language model. "We are releasing this model to aid the study of research into the detection of synthetic text," OpenAI wrote.
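For readers who want to try the released model themselves, here is a minimal sketch of loading the full 1.5-billion-parameter GPT-2 and sampling text from it. It assumes the third-party Hugging Face transformers library and its "gpt2-xl" checkpoint, neither of which is mentioned in the article; OpenAI's own release is hosted in its gpt-2 GitHub repository.

# Minimal sketch: sampling text from the full 1.5B-parameter GPT-2.
# Assumes the Hugging Face `transformers` and `torch` packages are installed
# (pip install transformers torch); "gpt2-xl" is the largest released checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")

prompt = "Scientists have discovered a herd of unicorns living in a remote valley."
outputs = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)
print(outputs[0]["generated_text"])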

The idea of an AI that can mass-produce believable fake news and disinformation is understandably unnerving. But some argued that this technology is coming whether we want it or not, and that OpenAI should have shared its work immediately so researchers could develop tools to combat, or at least detect, bot-generated text. Others suggested the whole episode was a ploy to hype up GPT-2. Regardless, and for better or worse, GPT-2 is no longer under lock and key.