
Snapchat adds OpenAI-powered chatbot and proactively apologizes for what it might say

My AI will be available for Snapchat+ subscribers this week.

(Image: Snap)

Snap announced today that it’s adding an OpenAI chatbot (similar to ChatGPT) to Snapchat. “My AI” is an experimental feature initially available for $3.99-per-month Snapchat+ subscribers, although the company reportedly wants to expand it to all users eventually. Snap’s bot rolls out this week.

My AI will appear as a regular Snap user profile, suggesting the company is marketing it less as an all-purpose writing machine and more as a virtual friend. “The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” CEO Evan Spiegel told The Verge. “And this is something we’re well positioned to do as a messaging service.” When it rolls out, you’ll find the bot pinned to the app’s chat section above conversations with friends.

Snap’s announcement says the bot runs “the latest version of OpenAI’s GPT technology,” which Snap has customized for Snapchat. That reportedly refers to OpenAI’s Foundry, a recently leaked, invitation-only program that lets deep-pocketed developers use GPT-3.5, the more advanced model on which ChatGPT is based. OpenAI’s publicly available API currently only supports up to GPT-3, an older and less capable model. We contacted Snap for clarification on which model My AI uses and will update this article if we hear back.
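Snap hasn’t published technical details of its Foundry integration, but for a sense of what building a chatbot on OpenAI’s GPT models looks like from the developer side, here’s a minimal sketch using OpenAI’s ordinary public Python SDK. The persona prompt, model name and function are placeholders of our own, not anything Snap has confirmed.

# Illustrative sketch only: Snap's Foundry-based setup is private, so this uses
# OpenAI's public Python SDK. The persona prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask_companion_bot(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for whatever customized model Snap runs
        messages=[
            {"role": "system",
             "content": "You are a friendly in-app companion. "
                        "Keep replies short and conversational."},
            {"role": "user", "content": user_message},
        ],
        max_tokens=150,
    )
    return response.choices[0].message.content

# Example mirroring Snap's demo screenshot:
print(ask_companion_bot("Write a haiku about my friend Lukas, who loves cheese."))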

[Screenshot: A Snapchat+ chat in which the user asks My AI to write a haiku about a friend named Lukas, and the bot replies with a short haiku about his love of cheese. (Image: Snapchat)]

Snap’s chatbot will include restrictions designed to keep it within the platform’s trust and safety guidelines. Hopefully, it avoids the fate of CNET’s AI-written articles, the AI Seinfeld experiment and various other AI chatbot train wrecks. For example, My AI will reportedly steer clear of swearing, violence, sexually explicit content and political opinions. Snap reportedly plans to keep tuning the model as more people use it and report inaccurate or inappropriate answers. (You can do so by pressing and holding a problematic message and submitting feedback.)
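Snap hasn’t said how it enforces those restrictions under the hood. A common pattern for third-party GPT apps, sketched hypothetically below, is to pair a restrictive system prompt with OpenAI’s moderation endpoint so that both the user’s message and the model’s reply get screened; whether Snap’s setup resembles this is an assumption on our part.

# Hypothetical guardrail pattern: screen the user's message and the bot's reply
# with OpenAI's moderation endpoint, and refuse if either is flagged.
# Snap hasn't confirmed using anything like this.
from openai import OpenAI

client = OpenAI()

RESTRICTIONS = ("Avoid profanity, violence, sexually explicit content "
                "and political opinions.")

def is_allowed(text: str) -> bool:
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

def guarded_reply(user_message: str) -> str:
    if not is_allowed(user_message):
        return "Sorry, I can't help with that."
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": RESTRICTIONS},
            {"role": "user", "content": user_message},
        ],
    ).choices[0].message.content
    # Check the model's output too, since chatbots can be tricked into saying
    # just about anything.
    return reply if is_allowed(reply) else "Sorry, I can't help with that."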

Even with those protections, Snap’s bot could still become a dumpster fire of misinformation and offensive content. “As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance!” the company said in its announcement post. “While My AI is designed to avoid biased, incorrect, harmful, or misleading information, mistakes may occur.”