
Vibe coding, code assistance, no-code, call it what you want: “AI” coding tools are likely producing a good majority of software written today, and for good reason, as they have come a long way in a short time. A popular article published this week, titled “Claude Code is a Slot Machine,” comically describes the mechanics.
This is my experience using Anthropic’s Claude Code tool, having recently paid for a month of the Pro plan.
I have previously used GitHub Copilot, chatted with OpenAI’s ChatGPT, used Google AI Studio with Gemini, and many others. In fact, in many cases I use multiple AI models or companies to compare the output of the same request or “prompt”.
Peer code reviews if you will.
What is Claude Code, and how is it different from ChatGPT or Copilot?
It essentially takes action from the terminal, creating, editing, and deleting whole projects from the command line interface. It can add proper classes and functions, fetch APIs, adjust interfaces, import libraries, and even perform Git actions, pushing code to production on a mere suggestion. All the while, it enthusiastically replies to your prompts: “Perfect!”, “That’s a great idea!”, “You are the best!”
Creating at either the microscopic or the macroscopic level offers similarly amazing orchestration.
At the 10,000-foot level it’s pretty cool to see a coding agent throw together an entire working framework, such as a Django project complete with Python, HTML, CSS, and JavaScript. It’s also quite useful for refining algorithms, shuffling pixels on a display interface, or tweaking slight webform behaviour.
Having domain knowledge helps immensely. Prompting an LLM or AI agent to do your bidding can be really successful if you know what you want. Feeding it source information such as examples, schematics, charts, PDFs, etc. will get you the most accurate return.
You get immediate satisfaction when these coding tools save you time creating the boilerplate or framework, but notice that it takes more time to refine or steer the results. The recently announced subagents make maintaining or pinning down specifics a lot easier.
If you are a stickler for anything, having a subagent focus on that detail can be a convenient way to maintain quality assurance. A brand-standards design subagent, for example, could refer to a design document and catch any infractions as Claude makes updates. I giggle to think that in the future, having a “seat at the table” might just mean being a subagent. “Marketing-subagent had a great new campaign idea, but Legal-subagent won’t allow it.”
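As a sketch of what such a subagent could look like: Claude Code lets you define subagents as Markdown files with YAML frontmatter under a `.claude/agents/` directory. The subagent name, tool list, and instructions below are hypothetical, invented for this brand-standards example rather than taken from any official docs.

```markdown
---
name: brand-standards-reviewer
description: Reviews UI changes against the brand design document and flags infractions.
tools: Read, Grep, Glob
---

You are a brand-standards reviewer. Compare any proposed UI change
against the design document at docs/brand-guidelines.md (path is an
assumption for this example). Flag colour, font, spacing, or
logo-usage infractions before the change is accepted. Do not edit
files yourself; report findings back to the main agent.
```

Saved as something like `.claude/agents/brand-standards-reviewer.md`, the main Claude session can then delegate review work to it as changes are made.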
As for the slot machine reference: there is the repeated action-then-wait command prompting, tokens ticking by, and the clever synonyms Claude displays for “running”. Followed by win after win.
I really do appreciate the to-do lists Claude churns out detailing what it will perform next. Showing the work may be where some trust is formed: letting you know it’s only going to do these four things, checking the boxes as it goes.
Error recovery and creating self-tests are quite impressive. It hardly ever seemed to trip up. However, at some stage I imagine vibe-coded output may be difficult to debug if you are unfamiliar with it.
It’s impressive to see progress specifically in code-generation quality. With MCP and the ability to control other tools, I can see Claude performing more and more custom actions in the future.
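For a sense of how those custom actions get wired up: MCP servers can be registered in a project-level `.mcp.json` file, which Claude Code reads to know which external tools it may control. A minimal sketch, where the server choice is illustrative (this one assumes the `@modelcontextprotocol/server-github` package, one of the published reference servers):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

With something like this in place, a prompt such as “open a PR for this change” becomes a tool call rather than a copy-paste exercise.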