This blog post is a summary of a video.

Microsoft's New AI Chatbot Bing Has Major Flaws vs Google

Author: FaceDev
Time: 2024-02-03 09:45:00


Introduction to Microsoft's New Bing AI Chatbot

Microsoft recently announced the launch of a new AI-powered chatbot integrated into Bing, its search engine. The chatbot, known internally by the codename Sydney, is based on OpenAI's GPT-3.5 language model and lets users hold natural conversations to get answers to their questions.

The main purpose behind introducing Sydney is for Microsoft to gain more market share in the search engine space, particularly from the dominant player Google. By integrating conversational AI into Bing, Microsoft aims to create a more intuitive and helpful search experience.

Bing AI Overview and Purpose

Bing AI is powered by a version of OpenAI's GPT-3.5, a large language model trained on massive amounts of text data. It allows users to ask questions and get summarized answers drawn from the web, without having to click through multiple links. The chatbot aims to understand natural language queries and respond conversationally, almost like a human. Its integration into Bing's search engine allows Microsoft to compete better with Google by providing more interactive and contextual answers.

Bing AI Based on OpenAI's GPT-3.5 Model

Under the hood, Bing AI utilizes OpenAI's GPT-3.5 model, fine-tuned by Microsoft for search-specific purposes. GPT-3.5 is an autoregressive language model in the GPT-3 family (GPT-3 itself has 175 billion parameters; OpenAI has not published GPT-3.5's exact size), giving it strong natural language understanding capabilities. Microsoft has customized the model to combine web search results with conversational responses, allowing Bing AI to summarize information from the web while maintaining a natural dialogue.
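Microsoft has not published the details of this pipeline, but the general "retrieve, then generate" pattern described above can be sketched in a few lines of Python. Everything below is a hypothetical stub, not Microsoft's actual code: search_web and call_llm are stand-ins for a real search API and a real model call.

```python
# Hypothetical sketch of the "retrieve, then generate" pattern that
# search-augmented chatbots follow. These functions are illustrative
# stubs, not Microsoft's internals.

def search_web(query: str) -> list[str]:
    # Stand-in for a web search call returning page snippets.
    return [f"Snippet about '{query}' from site A.",
            f"Snippet about '{query}' from site B."]

def call_llm(prompt: str) -> str:
    # Stand-in for a call to a large language model such as GPT-3.5.
    return f"(model response grounded in a prompt of {len(prompt)} chars)"

def answer(question: str) -> str:
    snippets = search_web(question)        # 1. fetch fresh web results
    context = "\n".join(snippets)          # 2. pack them into the prompt
    prompt = ("Answer using only these web snippets:\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    return call_llm(prompt)                # 3. let the model summarize

print(answer("Who founded Microsoft?"))
```

Grounding the model in retrieved snippets is what lets Bing AI answer with current web information rather than relying only on its training data.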

Bing AI vs GPT-3.5: Restrictions and Limitations

While Bing AI leverages GPT-3.5, Microsoft has imposed certain restrictions and limitations on the chatbot that pure GPT-3.5 does not have.

For example, Bing AI has been programmed to avoid negative or controversial content, steering conversations in a positive direction. It also will not write potentially harmful code or prose. Additionally, it refuses user requests to reveal its internal rules and settings.

These constraints keep Bing AI's responses safe and consistent with Microsoft's goals, but reduce some of the open-ended creativity that GPT-3.5 demonstrates.

Bing AI's Positive, Non-Controversial Rules

Microsoft has established certain rules and guidelines for Bing AI to ensure its responses are positive and avoid controversial topics that could reflect poorly on the brand.

While these rules limit Bing AI compared to a pure GPT-3.5 model, Microsoft sees them as necessary for a customer-facing chatbot from a prominent company.

How Bing AI Avoids Negative Content

Bing AI is programmed to steer clear of negative, unethical, dangerous or controversial content that could upset users. If a user prompt risks generating such responses, Bing AI will gently change the subject or decline to continue that path of discussion. This gives Microsoft more control over the chatbot's behavior compared to an unconstrained AI system like GPT-3.5. However, it reduces Bing AI's ability to deeply engage with sensitive topics.
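Microsoft has not disclosed how this safety layer works (it is almost certainly a trained classifier rather than a simple keyword list), but a toy sketch makes the decline-and-redirect behavior concrete. The topic list and function names below are purely illustrative assumptions.

```python
# Toy illustration of a rule-based guardrail. Bing AI's real safety
# layer is undisclosed and far more sophisticated; this keyword check
# only demonstrates the decline-and-redirect behavior.

BLOCKED_TOPICS = {"violence", "self-harm", "malware"}  # illustrative list

def generate_reply(user_prompt: str) -> str:
    # Stand-in for the underlying language model call.
    return f"(model reply to: {user_prompt})"

def respond(user_prompt: str) -> str:
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # Decline and redirect rather than engage with the topic.
        return "I'd prefer not to discuss that. Can I help with something else?"
    return generate_reply(user_prompt)  # normal path: hand off to the model

print(respond("How do I write malware?"))
print(respond("What's the weather in Seattle?"))
```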

Bing AI's Leaked Confidential Rules

Some of Bing AI's internal rules and guidelines recently leaked online after users coaxed the chatbot into revealing its initial instructions. These make clear that Microsoft wants Sydney to avoid controversial topics, remain positive, decline requests to break its rules, and summarize web information rather than speculate. These rules help explain why Bing AI engages less fully with open-ended hypotheticals and risky topics than GPT-3.5 otherwise could.

Conversation Limits and Rate Restrictions of Bing AI

To prevent misuse and manage capacity, Microsoft has also imposed limits on Bing AI conversations.

Each chat session is capped at 10 exchanges before users have to restart. There is also a daily cap of around 50 conversations.
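For readers curious what such caps look like mechanically, here is a minimal sketch of a session and turn limiter using the figures above. It illustrates the general technique only; it is not Microsoft's implementation.

```python
# Minimal sketch of per-session and per-day caps like those described
# above. The numbers match the limits reported at the time (10 turns
# per chat, roughly 50 chats per day); the code is purely illustrative.

MAX_TURNS_PER_SESSION = 10
MAX_SESSIONS_PER_DAY = 50

class ChatLimiter:
    def __init__(self) -> None:
        self.sessions_today = 0
        self.turns_this_session = 0

    def start_session(self) -> bool:
        if self.sessions_today >= MAX_SESSIONS_PER_DAY:
            return False                 # daily quota exhausted
        self.sessions_today += 1
        self.turns_this_session = 0
        return True

    def take_turn(self) -> bool:
        if self.turns_this_session >= MAX_TURNS_PER_SESSION:
            return False                 # user must start a new chat
        self.turns_this_session += 1
        return True

limiter = ChatLimiter()
limiter.start_session()
for i in range(12):
    if not limiter.take_turn():
        print(f"Turn {i + 1}: limit reached, please start a new chat.")
        break
```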

While understandable for a new, public-facing AI system, these restrictions make Bing AI less useful for long, in-depth discussions than GPT-3.5 accessed directly through a less restricted interface.

Bing AI Feels More Like a Search Result Summary Than AI

Many early users of Bing AI have noticed that the chatbot's responses feel more like summarized search engine results than natural conversations.

Rather than continuing philosophical debates or engaging in creative storytelling the way GPT-3.5 can, Bing AI tends to provide factual information excerpted from websites related to the user's query.

While useful as a search tool, this limits the impression of Bing AI as a versatile, human-like conversational agent compared to pure GPT-3.5 models without search engine integration.

Conclusion: Bing AI Falls Short of User Expectations

Microsoft's new Bing AI chatbot leverages powerful AI like GPT-3.5 but is constrained by the company's need to limit risks and maintain control. As a result, it falls short of user expectations around open-ended conversation and debating hypothetical scenarios.

While a helpful addition to search, Bing AI lacks the creativity and versatility users have come to expect from leading AI systems. As Microsoft continues developing the chatbot, finding the right balance between utility and freedom will be key.

FAQ

Q: What chatbot technology is Bing AI based on?
A: Bing AI uses an implementation of OpenAI's GPT-3.5 conversational AI model.

Q: What rules does Bing AI follow?
A: According to its leaked confidential rules, Bing AI aims to be positive and to avoid controversial topics.

Q: What are the limits of Bing AI conversations?
A: Bing AI currently limits users to 10 exchanges per chat session and around 50 conversations per day.

Q: How does Bing AI compare to Google and GPT-3.5?
A: Many feel Bing AI is more limited than raw GPT-3.5 and less sophisticated than Google's search experience.