* This blog post is a summary of this video.

Chatbots Code and Debug: Google Bard vs ChatGPT vs Bing AI

Author: Christian Hur
Time: 2024-02-04 19:30:00

Table of Contents

* Comparing Chatbot Coding Capabilities
* Task Assignment
* Bing AI Attempts
* ChatGPT Succeeds
* Google Bard Disappoints
* Debugging Test
* Chatbot Coding Performance Summary
* The Chatbot Coding Winner
* Conclusion and Discussion
* FAQ

Comparing Chatbot Coding Capabilities

In this blog post, we will compare the coding capabilities of three popular chatbots - Bing AI, ChatGPT, and Google Bard. Specifically, we asked them to write a recursive Python function that compares two lists of integers and returns the one with the smallest maximum gap between adjacent elements. We also introduced a syntax error into the code they provided to see if they could detect and fix it.

The results were quite interesting. Read on as we break down how each chatbot performed at these coding-related tasks.

Task Assignment

The coding task we assigned to the chatbots was to write a recursive Python function named find_least_gap. It takes two lists of integers as inputs - list1 and list2. The function should compare the gaps, or differences, between adjacent elements in each list and return the list that has the smallest maximum gap. For example, if list1 = [1, 3, 5] and list2 = [2, 4, 9], then list1 has gaps of 2 and 2 while list2 has gaps of 2 and 5. Since list1's largest gap (2) is smaller than list2's (5), list1 has the smallest maximum gap and should be returned.
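
The post does not reproduce the chatbots' actual solutions, so the sketch below is only an illustration of what a function meeting this specification might look like. The recursive helper max_gap, the use of absolute differences, and ties going to list1 are our own assumptions, not details taken from the chatbots' answers.

```python
def max_gap(lst):
    # Recursively find the largest gap between adjacent elements.
    # (Hypothetical helper - the chatbots' actual code is not shown in this post.)
    if len(lst) < 2:
        return 0  # fewer than two elements means no gap
    first_gap = abs(lst[1] - lst[0])
    return max(first_gap, max_gap(lst[1:]))

def find_least_gap(list1, list2):
    # Return whichever list has the smaller maximum gap between adjacent elements.
    return list1 if max_gap(list1) <= max_gap(list2) else list2

# Example from the text: list1 wins because its largest gap (2) is smaller than list2's (5).
print(find_least_gap([1, 3, 5], [2, 4, 9]))  # -> [1, 3, 5]
```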

Bing AI Attempts

When we first asked Bing AI to write the find_least_gap function, it provided code that did not match what we asked for; the output seemed unrelated to comparing gaps between list elements. However, when prompted again, Bing AI supplied a four-line recursive function that correctly implemented the specification. Later, when we introduced a syntax error by adding an extra closing parenthesis, Bing AI quickly pinpointed the location of the error and fixed it by removing the extra parenthesis. So while the initial attempt failed, Bing AI redeemed itself with concise, working code on the second try, and it also demonstrated solid debugging skills.

ChatGPT Succeeds

In contrast to Bing AI, ChatGPT instantly supplied well-commented, properly working code for the find_least_gap function on the first try. The comments explain what the code does at each step - computing the gaps between adjacent elements in list1 and list2, then returning the list with the smaller maximum gap. When we inserted a syntax error into ChatGPT's initial code, it successfully pinpointed and fixed the extra parenthesis, just like Bing AI did. With instant, valid code generation and debugging capability demonstrated from the start, ChatGPT performed very impressively on this coding assignment.

Google Bard Disappoints

Unfortunately, Google Bard failed at even basic coding tasks in our test. When asked if it can write Python code, Bard replied affirmatively, even claiming it can handle complex code. But when we posed the find_least_gap problem, Bard stated that it could not generate code yet. We reduced the scope to just writing a basic sorting function, but Bard still could not deliver any Python code. So while Bing AI and ChatGPT capably created recursive functions and fixed bugs, Google Bard stumbled out of the gate on fundamental coding skills. Major improvements to Bard's coding capabilities appear necessary.

Debugging Test

The second part of our chatbot coding evaluation involved inserting a syntax error into the initial code they provided for the find_least_gap function. We added an extra, unnecessary closing parenthesis and asked them to detect and fix the bug.
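
To make the test concrete, the snippet below illustrates this kind of bug using the sketch from earlier; the post does not show the exact line that was edited in the chatbots' code, so this is an illustration only. An extra closing parenthesis produces an immediate SyntaxError:

```python
# Hypothetical illustration of the introduced bug (not the chatbots' actual code):
#
#     return max(first_gap, max_gap(lst[1:])))   # extra ')' -> SyntaxError: unmatched ')'
#
# The corrected line restores balanced parentheses:
def max_gap(lst):
    if len(lst) < 2:
        return 0
    first_gap = abs(lst[1] - lst[0])
    return max(first_gap, max_gap(lst[1:]))  # fixed: extra ')' removed
```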

Both Bing AI and ChatGPT cleanly identified and removed the extra parenthesis. They demonstrated strong debugging skills by pinpointing the precise location of the error and correcting it without issue.

Unfortunately, since Google Bard failed to supply any original code, we could not test its debugging abilities in this scenario. Given its initial coding deficiencies, though, it would likely struggle to locate and fix errors as well.

Chatbot Coding Performance Summary

Bing AI

Bing AI exhibited some early difficulty understanding the find_least_gap problem, providing incorrect initial code. Thereafter, however, it generated concise working code and efficiently debugged the introduced error. A few hiccups, but solid coding potential.

Pros: Concise, valid code on the second try; quickly fixed the syntax bug
Cons: Failed its first code attempt

ChatGPT

ChatGPT impressed across the board. It immediately wrote working, documented code for the target function, and it swiftly detected and patched the inserted syntax error without issue.

Pros: Instantly generated valid, documented code; rapidly debugged the error
Cons: None observed

Google Bard

Google Bard disappointed in the coding tests. Despite claiming coding ability, it could not deliver even a basic sorting function, let alone the recursive find_least_gap task. Because it provided no initial code, its debugging skills could not be evaluated either.

Pros: None observed
Cons: Could not complete basic coding tasks

The Chatbot Coding Winner

Based on the coding assignment and debugging tests, ChatGPT emerges as the clear winner. It effortlessly produced working recursive code on the first try while Bing AI faltered out of the gate. And Bing AI's second code attempt, while valid, lacked the helpful documentation of ChatGPT's initial solution.

Both chatbots capably identified and fixed inserted syntax errors. But ChatGPT's superior initial code generation and commentary set it apart as best suited for coding tasks at this time.

Google Bard failed on even introductory coding challenges and was unable to participate in the debugging evaluation. Major upgrades to Bard's technical skills appear necessary before it can compete on programming assignments.

Conclusion and Discussion

In this coding face-off between cutting-edge chatbots, ChatGPT dominated while Google Bard floundered. ChatGPT supplied working, documented Python code instantly and debugged errors rapidly and accurately.

Bing AI exhibited promising coding skills after some initial confusion. It ultimately generated valid recursive logic and repaired the syntax issue effectively. Google Bard, however, could not complete even basic coding prompts, revealing clear deficiencies.

As AI chatbots continue evolving, the race is on to enhance technical coding and debugging capabilities beyond conversational strengths. Our tests indicate ChatGPT leads in programming prowess today, while Google Bard requires substantial improvement. We look forward to re-evaluating these chatbots' coding competence in the future as they progress.

FAQ

Q: Which chatbot was best at coding?
A: Based on the coding test, ChatGPT performed the best at writing functional Python code.

Q: Could Google Bard write any code?
A: Unfortunately, Google Bard was unable to write even basic Python code despite claiming it could write complex code.

Q: How well did Bing AI do at coding?
A: Bing AI was eventually able to write working Python code on the second try after failing on its first attempt.

Q: Were the chatbots able to detect and fix bugs?
A: Yes, both Bing AI and ChatGPT succeeded in locating and fixing an intentional syntax error introduced into their original code.

Q: What features made ChatGPT stand out?
A: ChatGPT provided the fastest response times, included code comments and documentation, and gave correct working code on the first try.

Q: What improvements are needed for Google Bard?
A: Google Bard needs major improvements in its actual coding abilities in order to catch up to other chatbots like ChatGPT and Bing AI.

Q: What did the chatbots think was wrong with the buggy code?
A: Both Bing AI and ChatGPT correctly identified the extraneous closing parenthesis causing the syntax error.

Q: Could the chatbots explain their code?
A: ChatGPT included explanatory comments about its code's logic, while Bing AI simply provided the code itself without explanation.

Q: Would the chatbots' code work properly?
A: The final code provided by Bing AI and ChatGPT was tested and confirmed to work correctly.

Q: Which chatbot is best for coding overall?
A: Based on this coding test, ChatGPT appears superior to Bing AI and far better than Google Bard for coding abilities.