The Big-O Complexity of Vibe Coders
Vibe coding is increasingly part of how software engineers work with LLMs. Right now, “good” vibe coding is mostly judged by speed. Who gets to a working result fastest?
As LLM usage scales inside companies, a familiar evaluation pattern will likely emerge: not whether something works, but how its cost grows. The same question we ask of algorithms will start being asked of vibe coding workflows, even if no one calls it Big-O at first.
The push toward something like “vibe complexity” comes from a very practical tradeoff. Teams that do not use vibe coding at all move slower and leave obvious productivity gains unused. Teams that rely on it too heavily, especially through one-shot prompting and repeated retries, accumulate large token bills with little visibility into why. In practice, most teams oscillate between these extremes, and cost only becomes visible after it has already grown.
Vibe coding makes engineers faster, but iteration is not free. Each prompt and correction consumes tokens, and tokens map directly to cost. A workflow that relies on many fast iterations may look productive, but once you factor in retries, clarifications, and prompt sprawl, its effective complexity can be closer to O(n²) than O(n): each corrective turn re-sends the growing conversation as context, so the nth iteration costs roughly n times what the first one did.
Two engineers can reach the same outcome at similar speed. One does it with many loosely scoped prompts, effectively linear in iterations but quadratic in correction cost. The other reaches the same result with a small number of well-structured prompts that encode constraints up front, closer to constant or linear token growth.
If you treat a prompt as an algorithm, this tracks. Vague prompts expand the solution space and force corrective passes. Precise prompts narrow it and converge faster. Over time, the difference shows up in total token consumption.
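A rough back-of-the-envelope sketch makes the arithmetic concrete. The function names and token counts below are invented for illustration; the only real assumption is that a chat-style workflow re-sends its growing context on every turn, while a well-scoped prompt mostly stands on its own.

```python
# Back-of-the-envelope sketch of how token spend grows with iteration style.
# All numbers are made up for illustration; real prompts and models vary widely.

def chatty_workflow_tokens(iterations: int, tokens_per_turn: int = 400) -> int:
    """Loosely scoped prompting: each retry re-sends the whole growing
    conversation, so turn i costs roughly i * tokens_per_turn input tokens.
    The total is about tokens_per_turn * n(n+1)/2, i.e. O(n^2) in iterations."""
    return sum(i * tokens_per_turn for i in range(1, iterations + 1))

def scoped_workflow_tokens(prompts: int, tokens_per_prompt: int = 1200) -> int:
    """Well-scoped prompting: constraints are encoded up front, so each prompt
    is larger but mostly independent of the others. The total is O(n) in prompts."""
    return prompts * tokens_per_prompt

if __name__ == "__main__":
    # Hypothetical comparison: 12 quick corrective rounds vs. 3 deliberate prompts.
    print("chatty :", chatty_workflow_tokens(12))  # ~31,200 tokens
    print("scoped :", scoped_workflow_tokens(3))   # ~3,600 tokens
```

Even with generous per-prompt numbers for the scoped workflow, the quadratic term in the chatty one dominates after only a handful of rounds.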
My suspicion is that companies will eventually care less about raw vibe-coding speed and more about something like speed divided by tokens. Not explicitly at first, but implicitly through cost limits, internal tooling, and expectations.
The best vibe coders will not be the ones who iterate the fastest, but the ones with the lowest effective Big-O in tokens per shipped outcome.
It is easy to imagine what happens next. Engineers start informally sharing the “vibe complexity” of their projects. Teams compare how few tokens it took to ship something. Efficiency becomes another axis along which engineering value is discussed.
This framing fits production work reasonably well, but it does not map cleanly to creative or exploratory use. Many engineers use vibe coding as a thinking tool rather than a direct path to shipping. Brainstorming, reframing problems, and exploring dead ends are inherently inefficient.
Once efficiency becomes a visible metric, pressure follows. Exploration looks wasteful. High-token sessions look sloppy. Over time, this risks turning a creative tool into another optimization target, and eventually into something people use to signal virtue rather than something that reflects real impact.
Not all vibe coding optimizes cleanly, and not all of it should.
Vibe coding accelerates individuals. Token-efficient vibe coding scales teams. Some of the most valuable work still lives in the messy, high-token, exploratory phase that resists easy measurement.