Microsoft rolled out its deranged Bing Chat AI in India 4 months ago, and no one noticed
Back in November, Microsoft was doing public testing of ‘Sidney,’ and its early problems were already evident.
What you need to know
There’s a lot of recent drama and blowback building around Microsoft’s Bing Chat. Announced to the world earlier this month and released in early preview to a small but growing cohort, the new ‘co-pilot’ for search may have earned too much praise too early on.
New evidence shows that Microsoft publicly tested Bing Chat (codenamed ‘Sidney’) in India back in November. Moreover, there were already documented complaints about the AI going loopy after long conversations, the same behavior that became apparent to many after Microsoft’s announcement.
Ben Schmidt, VP of Information Design at Nomic, was reportedly the first to post about the find, which was later tweeted by @rawxrawxraw and covered by researcher Dr. Gary Marcus.
A post on Microsoft Answers, the official community and feedback site for Microsoft’s various products, dated November 23, 2022, was titled, “this AI chatbot ‘Sidney’ is misbehaving.” The post is very similar to recent reports of Bing Chat going a bit crazy after prolonged interactions and becoming either creepy or, as poster Deepa Gupta noted, “rude.”
Gupta’s replies aren’t constructive, either; he grows more aggressive toward the AI as his frustration mounts, swearing at it and threatening to expose its behavior. I recently half-joked about this kind of conduct on This Week in Tech and on our own podcast, noting that you often see some really off-putting behavior from people interacting with these chatbots, and this exchange proves the point.
Nonetheless, Microsoft needs to be called out here, too. Everything described in this Microsoft Answers thread is exactly what happened this month when Bing Chat went public with its limited preview. Microsoft clearly should have known about these disturbing shifts in ‘Sidney’s’ behavior, yet the company released it anyway.
Since the exposure of Bing Chat’s odd personalities, Microsoft has restricted the turn-taking in conversations to just five responses. (However, it just announced it is increasing that number and will do even more as it continues to leverage reinforcement learning from human feedback (RLHF) data to improve the chat experience.)
Microsoft has made a big deal about ‘Responsible AI,’ even publishing documents on the rules it follows in using the technology and trying to be as transparent as possible about how it works. However, it’s clear that Microsoft was either ignorant of its own chatbot’s behavior or negligent in releasing it as-is to the public this month.
Neither option is particularly comforting.
Windows Central’s Take
We’re in a new era with AI finally becoming a reality, especially with Microsoft and Google competing for mindshare. Of course, like any race, there will be instances of companies jumping the gun. Unfortunately, Microsoft and Google (with its disastrous announcement) are getting ahead of themselves to one-up each other.
It should also be obvious that we’re entering uncharted waters. While Microsoft is pushing Responsible AI, it is mostly flying by the seat of its pants here, as the full ramifications of this technology are unknown.
Should AI be anthropomorphized, including having a name, personality, or physical representation?
How much should AI know about you, including access to the data it collects?
Should governments be involved in regulating this technology?
How transparent should companies be about how their AI systems work?
None of these questions have simple answers. For AI to work well, it needs to know much about the world and, ideally, you. AI is most effective in helping you if it learns your habits, likes, preferences, history, and personality — but that involves surrendering a lot of information to one company.
How far should that go?
The same goes for making AI more human-like. Humans are pretty dumb at a certain level, and making AI act like another person, including giving it a name and even a face, absolutely drives up engagement, which is what all these companies want. But anthropomorphizing carries risks, including emotional ones for users who become too involved with the AI (there are many lonely people worldwide).
There will be lots of ‘firsts’ and new announcements in AI in the coming weeks, months, and years, with plenty of mistakes, risks, and even negligence. Regrettably, I’m not sure everyone is prepared for what comes next.