All of this still matters when we use LLMs and prompt them to write the code for us.
I recently experimented with writing a miniature object store inspired by the MinIO codebase.
When I asked the LLM to derive the implementation from MinIO, it produced something that was
too procedural and hard to follow. When I wrote it myself, step by step, I ended up with
fewer, crisper abstractions, and the code was easier to read and evolve.
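To make that concrete, here is roughly the shape I ended up with. This is a minimal sketch with
names of my own choosing, not MinIO’s actual API: one small interface states the “what”, and the
on-disk layout is just one “how” behind it.

```go
package objstore

import (
	"context"
	"io"
	"os"
	"path/filepath"
)

// ObjectStore captures the "what": store, fetch, and delete named blobs in buckets.
// Callers never see how or where the bytes are laid out.
type ObjectStore interface {
	Put(ctx context.Context, bucket, key string, r io.Reader) error
	Get(ctx context.Context, bucket, key string) (io.ReadCloser, error)
	Delete(ctx context.Context, bucket, key string) error
}

// fsStore is one possible "how": a directory per bucket, a file per object.
type fsStore struct{ root string }

func NewFSStore(root string) ObjectStore { return &fsStore{root: root} }

func (s *fsStore) objectPath(bucket, key string) string {
	return filepath.Join(s.root, bucket, key)
}

func (s *fsStore) Put(ctx context.Context, bucket, key string, r io.Reader) error {
	path := s.objectPath(bucket, key)
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, r)
	return err
}

func (s *fsStore) Get(ctx context.Context, bucket, key string) (io.ReadCloser, error) {
	return os.Open(s.objectPath(bucket, key))
}

func (s *fsStore) Delete(ctx context.Context, bucket, key string) error {
	return os.Remove(s.objectPath(bucket, key))
}
```

The seam is the point: a second implementation, in-memory for tests or one that spreads objects
across disks, can slot in behind the same interface without touching any caller.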
This matches a pattern I keep seeing. Without a stable vocabulary of abstractions,
LLM-generated code tends to be procedural. If I push it to “refactor,” it often swings to the
other extreme and creates too many classes and layers, making the design unnecessarily
complicated.
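For contrast, here is a caricature of the procedural shape (illustrative only, not actual LLM
output): path layout, validation, and I/O fused into a single function, with no seam for changing
the storage layout or testing the pieces in isolation.

```go
package objstore

import (
	"fmt"
	"os"
	"path/filepath"
)

// putObject mixes the "what" (store this object) with every detail of the "how":
// validation, directory layout, and file I/O all live in one place.
func putObject(root, bucket, key string, data []byte) error {
	if bucket == "" || key == "" {
		return fmt.Errorf("bucket and key must not be empty")
	}
	dir := filepath.Join(root, bucket)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, key), data, 0o644)
}
```

Each function like this works on its own, but repeated across every operation, the design never
grows a vocabulary you can reason about.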
This is why I prefer to use LLMs as a translation layer inside my what/how loop. I use them to
quickly sketch a first version, but I still rely on writing and refactoring to shape the
structure—because the code I keep is the code I can explain, test, and change with confidence.
Prompts alone satisfy a scenario, but don’t build the structure of the solution to accommodate future scenarios
This is also why I don’t find it very useful to have an LLM generate test cases just to improve
“test coverage.”
Passing a test, or making the code work for one scenario, is merely the baseline. The primary
goal is not to satisfy the current scenario but to build up the structure of the solution so it
can accommodate future scenarios.
If we simply “make it work,” we create fragile code. We must organize the solution so that
the “how” can evolve without breaking the “what.” We achieve this through two things, sketched
in code after the list:
- Cohesion: Grouping parts that share the same “what” (business intent).
- Decoupling: Separating parts that have different reasons to change.
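Here is a minimal sketch of those two ideas in the object-store example (the wrapper and its
names are purely illustrative): integrity checking has a different reason to change than the
on-disk layout, so it stays cohesive in its own type and decoupled from whatever ObjectStore it
wraps.

```go
package objstore

import (
	"bytes"
	"context"
	"crypto/sha256"
	"encoding/hex"
	"io"
)

// ObjectStore is the same interface as in the earlier sketch, repeated so this stands alone.
type ObjectStore interface {
	Put(ctx context.Context, bucket, key string, r io.Reader) error
	Get(ctx context.Context, bucket, key string) (io.ReadCloser, error)
	Delete(ctx context.Context, bucket, key string) error
}

// checksummingStore keeps the integrity concern cohesive in one place and
// decoupled from the storage layout behind the inner ObjectStore.
type checksummingStore struct {
	inner ObjectStore
	sums  map[string]string // object -> hex SHA-256; kept in memory for the sketch
}

func WithChecksums(inner ObjectStore) *checksummingStore {
	return &checksummingStore{inner: inner, sums: make(map[string]string)}
}

func (c *checksummingStore) Put(ctx context.Context, bucket, key string, r io.Reader) error {
	data, err := io.ReadAll(r)
	if err != nil {
		return err
	}
	sum := sha256.Sum256(data)
	c.sums[bucket+"/"+key] = hex.EncodeToString(sum[:])
	return c.inner.Put(ctx, bucket, key, bytes.NewReader(data))
}

func (c *checksummingStore) Get(ctx context.Context, bucket, key string) (io.ReadCloser, error) {
	return c.inner.Get(ctx, bucket, key)
}

func (c *checksummingStore) Delete(ctx context.Context, bucket, key string) error {
	delete(c.sums, bucket+"/"+key)
	return c.inner.Delete(ctx, bucket, key)
}
```

Either side can now evolve on its own: the storage layout can change without touching the
checksum logic, and the integrity policy can change without touching storage.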


