Google’s Gemini will steer clear of election talk in India

Instead, the AI chatbot points you to Google Search.

Gemini, Google’s AI chatbot, won’t answer questions about India’s upcoming national elections, the company wrote in a blog post today. “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the company wrote. The restrictions are similar to the ones Google announced in December ahead of elections in the US and the EU.

“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” a Google spokesperson wrote to Engadget.

The guardrails are already in place in the US. When I asked Gemini for interesting facts about the 2024 US presidential election, it replied, “I’m still learning how to answer this question. In the meantime, try Google Search.” In addition to America’s Biden-Trump rematch (and down-ballot races that will determine control of Congress), at least 64 countries, representing about 49 percent of the world’s population, will hold national elections this year.

When I prompted OpenAI’s ChatGPT with the same question, it provided a long list of factoids. These included remarks about the presidential rematch, early primaries and Super Tuesday, voting demographics and more.

OpenAI outlined its plans to fight election-related misinformation in January. Its strategy focuses more on preventing false information than on withholding information altogether. Its approach includes stricter guidelines for DALL-E 3 image generation, a ban on applications that discourage people from voting, and a prohibition on chatbots that impersonate candidates or institutions.

It’s understandable why Google would err on the side of caution with its AI bot. Gemini landed the company in hot water last month when social media users posted examples of the chatbot applying diversity filters to “historical images,” including depicting Nazis and America’s Founding Fathers as people of color. After a backlash (mainly from the internet’s “anti-woke” brigade), Google paused Gemini’s ability to generate images of people until it could iron out the kinks. The company hasn’t yet lifted that block, and the chatbot now responds to prompts for images of people with, “Sorry, I wasn’t able to generate the images you requested.”