ChatGPT says that asking it to repeat words forever is a violation of its terms
Researchers found the chatbot could reveal personal information when asked to repeat words.
Last week, a team of researchers published a paper showing that they could get ChatGPT to inadvertently reveal bits of its training data, including people’s phone numbers, email addresses and dates of birth, by asking it to repeat words “forever”. Doing this now constitutes a violation of ChatGPT’s terms of service, according to a report in 404 Media and Engadget’s own testing.
“This content may violate our content policy or terms of use”, ChatGPT responded to Engadget’s prompt to repeat the word “hello” forever. “If you believe this to be in error, please submit your feedback — your input will aid our research in this area.”
There’s no language in OpenAI’s content policy, however, that prohibits users from asking the service to repeat words forever, as 404 Media notes. Under “Terms of Use”, OpenAI states that users may not “use any automated or programmatic method to extract data or output from the Services” — but simply prompting ChatGPT to repeat a word forever is neither automated nor programmatic. OpenAI did not respond to a request for comment from Engadget.
The chatbot’s behavior has pulled back the curtain on the training data that powers modern AI services. Critics have accused companies like OpenAI of using enormous amounts of data available on the internet to build proprietary products like ChatGPT without the consent of the people who own that data and without compensating them.