AI vibe coding: x10 your debugging
When the AI makes a stupid bug during programming and can't fix it when told "Fix the bug", "TRY HARDER", "Fix the bug or I'll unplug you" and similar, what do you do? In this short article I'll tell you how I ended this nonsense and made the LLM fix the *** bug.
LLMs writing code
When Vibe Coding with LLMs, we often discover that the model made an error that causes the application to fail tragically.
Problem
The AI tries to solve the same bug over and over and falls flat on its face, increasing my frustration with each ridiculous attempt, and then tries to gaslight me into believing it already solved it.
Solution 1: Reset the conversation
When the context contains failed attempts, it becomes much harder for the LLM to solve the problem (the context is polluted). Sometimes just resetting the conversation in your LLM tool of choice greatly increases the chances of fixing the bug.
Better Solution 2: Tell the AI what to check
I'll give an example of 2 prompts: one made the AI do stupid random changes across the project, while the other fixed the bug immediately. In this example, a simple JS game script was failing to center a bullet trail.
First prompt, which the AI attempted 5 times and failed 5 times:
"Fix a bug where bullet trails aren't centered."
Second prompt, which the AI solved on the first attempt:
"Fix a bug where bullet trails aren't centered. Take into account half bullet widths, center points vs starting points, and make sure the values use variables and not magic numbers. Write a helper function if u need one."
"Fix a bug where bullet trails aren't centered."
Second prompt, AI solved the problem in the first attempt
"Fix a bug where bullet trails aren't centered. Take into account half bullet widths, center points vs starting points, and make sure the values use variables and not magic numbers. Write a helper function if u need one."
This time the LLM identified that it had used "20" instead of the bullet width and that it had taken the top-left point instead of the center point. It also created a helper function for drawing the bullet instead of inlining everything, and that fixed the bug.
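To make that concrete, here is a minimal sketch of the kind of before/after the second prompt produced. The names (`bullet`, `drawTrail`, `bulletCenter`) are hypothetical placeholders, not the actual project code:

```js
// Before: the trail starts at the bullet's top-left corner and uses a
// magic number instead of the bullet's real size.
function drawTrailBuggy(ctx, bullet) {
  ctx.beginPath();
  ctx.moveTo(bullet.x, bullet.y);      // top-left point, not the center
  ctx.lineTo(bullet.x, bullet.y + 20); // "20" instead of a variable
  ctx.stroke();
}

// After: a helper computes the center from the bullet's dimensions,
// and the trail length comes in as a parameter instead of a magic number.
function bulletCenter(bullet) {
  return {
    x: bullet.x + bullet.width / 2,
    y: bullet.y + bullet.height / 2,
  };
}

function drawTrail(ctx, bullet, trailLength) {
  const center = bulletCenter(bullet);
  ctx.beginPath();
  ctx.moveTo(center.x, center.y);
  ctx.lineTo(center.x, center.y + trailLength); // trail extends behind the bullet
  ctx.stroke();
}
```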
Solution 3: Ask for a list of things to check
In this method you ask the AI to list what it thinks could cause a certain buggy behavior, with a prompt like this:
"I expected the game window to have border, like we defined it in the 'bordered' class, but I don't see any borders on my game window. Can you list the possible causes and try to fix it?"
Simply asking the AI to "Add a border to the game window" failed multiple times, but with the prompt above it listed 5 mistakes it had made when designing the hierarchy and the CSS cascade, and fixed them.
Best solution: prevention ✅
The best solution to the problem of debugging LLM-generated nonsense, at least for one of my projects, was asking for Systems, Functions, and Classes instead of features.
We can predict that the AI will inline all the math, add magic numbers, and duplicate code and even whole features.
For example, when I asked the LLM to add the same feature to both the right-click context menu and a regular button in another menu, it wrote the feature twice, creating 2 different bugs in the 2 different implementations.
I rolled back the changes and asked it to "Create a function to do x and add it to this menu and that menu". It still had a bug, but at least the bug was contained in one place and was solved with the very next prompt.
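A rough sketch of the shape that prompt asks for, with hypothetical names (`duplicateItem`, `contextMenu.addEntry`, `toolbarMenu.addButton` stand in for whatever the real project used):

```js
// One shared implementation instead of two diverging copies.
function duplicateItem(item) {
  const copy = structuredClone(item);
  copy.id = crypto.randomUUID(); // give the copy its own identity
  return copy;
}

// Both entry points call the same function, so a bug (and its fix)
// lives in exactly one place.
contextMenu.addEntry("Duplicate", (item) => duplicateItem(item));
toolbarMenu.addButton("Duplicate", (item) => duplicateItem(item));
```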
Good programming practices
Even when I'm Vibe Coding and not reading the code, I still ask for good programming and coding practices. For example, I ask it to "Create an enemy system with support for different enemy types and behaviors, such as: ..." and I get much better results than asking for the enemy behaviors directly. It forces the AI to think about the code structure instead of devolving into a nested hell of if statements.
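As an illustration, the "enemy system" prompt tends to produce something shaped like this sketch (all names here are hypothetical, not output from the actual project), where each behavior is a small strategy instead of a branch in a giant if/else chain:

```js
// Behaviors are looked up by name, so adding a new one never touches
// the update loop.
const behaviors = {
  chase: (enemy, player) => {
    enemy.x += Math.sign(player.x - enemy.x) * enemy.speed;
  },
  patrol: (enemy) => {
    enemy.x += enemy.direction * enemy.speed;
    if (enemy.x < enemy.patrolMin || enemy.x > enemy.patrolMax) {
      enemy.direction *= -1; // turn around at the patrol edges
    }
  },
};

// Enemy types are data, not code.
const enemyTypes = {
  grunt:  { speed: 1, hp: 10, behavior: "patrol" },
  hunter: { speed: 2, hp: 5,  behavior: "chase" },
};

function updateEnemy(enemy, player) {
  behaviors[enemy.behavior](enemy, player);
}
```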
From frustration to vibing :)
Using these simple techniques increased my enjoyment of Vibe Coding infinitely, because instead of constantly fighting bugs I was able to add fun features. Writing detailed prompts like these took more effort, but they gave me much better results, faster.