I use AI agents daily for working with existing codebases. But vibe coding (building something from scratch without writing code yourself) is something I’m unfamiliar with. To fix that, I spent a few days building a small side project using nothing but prompts. The result is MoneyMap, a simple financial tracking tool for the FIRE (Financial Independence, Retire Early) community.
Claude handles programming tasks competently. Given a clear specification, it produces working code. Not always elegant, but functional.
With the Playwright MCP integration, Claude can debug UI issues by actually looking at the rendered page. When something looks broken, it can inspect the result and fix it without much hand-holding.
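For reference, this is roughly how the Playwright MCP server is registered with Claude Code. The package name and exact invocation reflect the setup at the time of writing and may change:

```shell
# Register the Playwright MCP server with Claude Code
# so Claude can drive a browser and inspect rendered pages
claude mcp add playwright -- npx @playwright/mcp@latest
```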
I was pleasantly surprised by how well Claude handles visual input. I sketched a rough UI on a napkin, photographed it, and Claude translated that into a functional interface. Not pixel-perfect, but recognizably the same layout.
The CLAUDE.md file is essential. Claude follows documented instructions reliably. It takes a few iterations to get these instructions right, but once tuned, it works. Every project-specific convention I documented was respected.
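As an illustration, a trimmed-down sketch of what such a CLAUDE.md might contain; the specific conventions below are hypothetical examples, not the actual file from this project:

```markdown
# Project conventions

- No emoji in UI text, headings, or commit messages.
- All currency amounts are stored as integer cents, never floats.
- New screens reuse the existing layout components; do not
  introduce new spacing or color values.
- Keep functions under ~40 lines; extract helpers instead.
```

Short, concrete, checkable rules like these work far better than broad guidance like "write clean code".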
Claude has no visual taste. Left to its own devices, it scatters emoji everywhere: every heading gets an icon and every button gets some flair. Explicit instructions to avoid this work well, though.
Coding conventions are difficult to enforce through documentation alone, as the CLAUDE.md file would grow too large. Pre-commit hooks work better here; let the tooling reject non-conforming code, and Claude will fix it. Anything that can’t be automatically checked can still go into CLAUDE.md.
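For example, a minimal `.pre-commit-config.yaml` along these lines (assuming a Python stack and the Ruff linter; the pinned version is illustrative) lets the tooling, rather than the prompt, enforce style:

```yaml
# Hooks run on every commit; Claude sees the rejection and fixes the code
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0  # illustrative pin; use a current release
    hooks:
      - id: ruff         # lint
      - id: ruff-format  # format
```

The same idea applies to any stack: a failing hook is feedback Claude acts on reliably, without consuming space in CLAUDE.md.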
Claude enthusiastically adds code but rarely removes any. Refactoring doesn’t happen on its own; unless you explicitly ask for cleanup, the codebase grows steadily messier.
Claude doesn’t maintain UI consistency. Each screen feels like it was designed in isolation, because it was. Without explicit constraints, no two pages share the same visual language.
Open-ended instructions produce poor results. “Make this better” generates busywork, not improvements. Claude needs specific direction.
There’s a strong tendency toward conventional solutions. Claude tends to replicate patterns from common applications, even when your project needs something different. It will follow explicit instructions to diverge, but you have to notice the problem first.
Claude loves to talk. Both the code and any documentation it produces tend toward the long-winded. Explicit instructions to be concise help, but “be concise” alone isn’t specific enough.
After some experimentation, I found a rhythm that produces decent results.
First, have Claude design the feature and document it in a spec file. Iterate until the design makes sense. Explicitly instruct Claude to be concise, or you’ll get a novel.
Next, have Claude break the spec into numbered task files. Each task should be small enough to complete in one context window.
Finally, work through the tasks in order, committing after each one. This keeps changes reviewable and provides natural checkpoints.
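Concretely, the resulting files might be laid out something like this (the names are hypothetical, for illustration):

```
docs/
  spec-net-worth-dashboard.md   # feature design, iterated until it makes sense
  tasks/
    01-data-model.md            # each task fits in one context window
    02-csv-import.md
    03-dashboard-ui.md          # one commit per completed task
```

Numbering the task files gives Claude an unambiguous order, and the one-commit-per-task rule gives you a clean rollback point whenever a task goes sideways.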
Sometimes, it helps to have Claude critique its own work. This works especially well with a condescending framing: “This branch was written by a junior developer with little experience. Please review, write up some constructive feedback, then fix the branch.”
Vibe coding works well for greenfield projects. The code quality is acceptable, the development speed is high, and the cognitive load shifts from implementation to specification and validation. Whether that shift is an improvement depends on what you find satisfying about programming.
For projects where shipping matters more than craft, this is a viable approach.