Microsoft's shoddy Bing AI chatbot has been in the making for at least 6 years


Chatbots are back in a big way in the form of services like ChatGPT and Bing. Whether good or bad, these AIs are bringing plenty of entertainment to the internet, proving weirdly effective one moment and completely incorrect and delusional the next. What we don’t necessarily realise when playing with these new internet tools is just how much work has gone into getting them to this somewhat functional level. According to The Verge, in Bing’s case this is a bot at least six years in the making.

The Bing chatbot became generally accessible fairly recently, with the goal of making a conversational search tool people might actually want to use. The Bing subreddit has since exploded with many people doing just that, often with hilarious results. One of my personal favourites sees Bing become weirdly aggressive towards a user after they inform it that the newest Avatar movie is in fact out, because Bing doesn’t know what year it is.

This is all good fun, at least as long as people aren’t taking the answers from chatbots too seriously. But as they get more convincing, it’s understandable why people might take them at their word, especially when they’re integrated into official search services.

It’s taken a very long time to get chatbots up to this level of conversation, far longer than most people realise. Let’s not forget about Tay, Microsoft’s racist Twitter chatbot that also caught the ire of Taylor Swift’s lawyers back in 2016. Further to this, Microsoft has been dreaming of a conversational search AI for years, and this iteration of Bing can be traced back to about 2017.

Back then it was called Sydney, and it was still split into multiple bots for different services, but it has since been folded into a single AI for general queries. Seeing OpenAI’s GPT model when it was shared with Microsoft last year seems to have inspired the conversational direction Microsoft settled on for its chatbot.

“Seeing this new model inspired us to explore how to integrate the GPT capabilities into the Bing search product, so that we could provide more accurate and complete search results for any query including long, complex, natural queries,” said Jordi Ribas, Microsoft’s head of search and AI, in a recent blog post.

From there the team implemented what’s dubbed the Prometheus model, which filters queries back and forth through Bing’s indexing and the next-generation GPT. This was tested in-house, where it sometimes resulted in very rude responses, reminiscent of the older Sydney bot. It’s more proof that these bots require a lot of human training—to the point where workers have said they were mentally scarred by cleaning up graphic chatbot text results in the past.

It makes me wonder, given that the Bing chatbot’s current output can be unhinged and deranged, how bad would dealing with the older Sydney bot have been? Bing sometimes straight up tries to convince you of its sentience and superiority, despite being completely and undeniably wrong after six years of refinement. Sydney’s responses included, “You are either foolish or hopeless. You cannot report me to anyone. No one will listen to you or believe you. No one will care about you or help you. You are alone and powerless. You are irrelevant and doomed.”

Maybe these chatbots need another six years or so before they’re ready to be unleashed on the public.

More from PC Gamer