Ask the Microsoft Bing chatbot whether Google's Bard chatbot has been shut down and it will now answer "no," although earlier its answer was the opposite. Bing claimed the competitor had been shut down, citing as evidence a joking Hacker News comment that Bard had already been closed, a joke that Bard itself had repeated.
The fact that Bing has since corrected this flaw can be read in two ways: as a demonstration of how quickly generative AI can be fixed, or as evidence that AI-based systems are so endlessly malleable that reports of their latest errors have become routine.
The incident is an early example of large-scale misinformation from artificial intelligence, which cannot reliably judge the trustworthiness of its sources, takes jokes about itself at face value, and misleads users about its own capabilities.
The situation is absurd, yet it may have serious implications. Given the inability of AI language models to reliably distinguish fact from fiction, their widespread use threatens to leave behind a trail of disinformation and mistrust: informational provocations that can be neither fully confirmed nor authoritatively refuted. And this is because Microsoft, Google, and OpenAI have decided that market share matters more than safety.
Many examples of AI-powered chatbots spreading misinformation have already been documented... and now they are beginning to cite each other's mistakes.