The message encouraged party supporters to vote against "a dangerous left-wing government" that would rely on Arab politicians "who want to destroy us all -- women, children and men -- and enable a nuclear Iran that would wipe us out."
Facebook subsequently implemented a 24-hour ban after a "careful review" found a violation of its hate speech policy. "Should there be any additional violations we will continue to take appropriate action," the company said in a statement.
The suspension affected only the bot, not Mr Netanyahu's official Facebook page. In a radio interview following the incident, he blamed a campaign worker for the message and said he had it removed as soon as he saw it. "The mistake was immediately fixed -- I didn't write it," he said. "Do you think I really would write such a thing and then deny it? I'm a serious person. Not everything on my campaign page is edited by me."
This is not the first time chatbots have come under fire for problematic messaging. Microsoft's "Tay" bot and its successor "Zo" have displayed controversial behavior, as has the virtual assistant MyKai. In this case, however, Mr Netanyahu's chatbot appears to have posted its offensive message not as a result of AI or machine learning but because of direct human input, a distinction that could carry significant weight in what is already a precarious political situation in the Middle East.