Everyone talks about how much time AI saves. Nobody talks about what it costs. Not just money — attention, quality, technical debt. After nine articles exploring how to integrate AI into my workflow, it's time to be honest with the numbers. Because if you don't know what you're paying, you can't know whether the investment is worth it.
The money: tokens cost real money
Every line your agent generates consumes tokens. Every file it reads to understand your codebase, every iteration to adjust a component, every long conversation that gets compacted and restarted. All of that has a price.
A heavy coding session — refactoring a complete module, building a new feature, or debugging a complex problem — can cost between $5 and $15. Seems small, but if you use AI daily as your primary tool, the monthly total for a solo developer lands between $100 and $200. Not pocket change.
Is it expensive? Depends on what you compare it to. An IDE with Copilot is $19/month. Claude Pro is $20. But if you use the API directly, the most capable models with long contexts, and you have multi-hour sessions, the meter runs fast. The question isn't whether it costs money, but whether what you get back justifies the spend.
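To see how the per-session and monthly figures connect, here's a minimal sketch. The cadence and working-day count are my illustrative assumptions, not measurements from this article:

```python
def monthly_cost(cost_per_session: float, sessions_per_day: float,
                 working_days: int = 21) -> float:
    """Rough monthly spend from per-session cost and session cadence.

    cost_per_session: $5-15 for a heavy session (the range above).
    working_days: assumed working days per month (hypothetical).
    """
    return cost_per_session * sessions_per_day * working_days

# Low end: $5 sessions, one per working day
print(monthly_cost(5, 1))   # 105.0 -> roughly the $100 floor

# $15 sessions every working day would already blow past $200,
# which suggests heavy sessions aren't an everyday occurrence.
print(monthly_cost(15, 1))  # 315.0
```

The point of the exercise: the $100-200 range only holds if most days are light. Track your own cadence before trusting anyone's average.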
The time equation
AI doesn't eliminate work. It shifts it. You spend less time writing code and more time reviewing it. Less time searching for solutions and more time deciding between the ones it proposes. Less time on mechanical tasks and more on judgment calls.
The net is positive, but not as dramatic as the marketing suggests. My real experience: on mechanical tasks — boilerplate, CRUD, configuration, standard components — AI saves me 40-60% of the time. On tasks requiring judgment — architecture, complex debugging, design decisions — the savings drop to 10-20%, and sometimes go negative because you have to fix what the agent got wrong.
The realistic average, counting everything, sits around 25-35% time savings. Not the 80% you see in Twitter demos. Not the 10x the ads promise. Roughly a third more productivity: significant, but only if you account for it honestly.
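Those per-category numbers imply the blended average. A quick sketch of the weighted math, where the 50/50 task mix is my illustrative assumption (your own split between mechanical and judgment work will move the result):

```python
def blended_savings(mix: dict[str, tuple[float, float]]) -> float:
    """Weighted average time savings across task categories.

    mix maps category name -> (share_of_work, savings_rate).
    """
    return sum(share * rate for share, rate in mix.values())

# Midpoints of the ranges above: mechanical ~50%, judgment ~15%.
# The 50/50 split is a hypothetical mix for illustration.
mix = {
    "mechanical": (0.5, 0.50),
    "judgment":   (0.5, 0.15),
}
print(blended_savings(mix))  # ~0.33, inside the 25-35% band
```

Notice how sensitive the blend is: shift the mix toward judgment-heavy work and the average drops fast, which is exactly why senior-heavy teams report smaller gains.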
The quality tradeoff
AI-generated code works. It passes the type checker, it compiles, it does what you asked. But it's not always the code you would have written. Over time, you accumulate patterns in your codebase that aren't yours.
The cost: a project that feels foreign. You open a file you "wrote" two months ago and don't recognize the decisions. Not because they're wrong — but because they don't reflect your judgment. They're the decisions a model trained on millions of repos would make, not yours.
The fix I've found: strong conventions. A detailed CLAUDE.md, a well-defined design system, and clear specs before coding. The more context you give the agent about how you think, the more its output resembles what you would have done yourself. The article on spec-driven development covers this in depth.
The attention cost
This is the least visible and most real cost. Every time you direct an agent, you switch modes. You stop thinking about the problem and start thinking about how to communicate the problem. Prompt engineering is a cognitive tax.
Crafting a good prompt, reviewing the output, iterating when it's not what you wanted, deciding whether it's worth asking for another round or just doing it yourself. That cycle has an attention cost that doesn't appear on any invoice.
It gets better with practice. Skills, conventions, templates — everything we explored in this series — exists to minimize this overhead. But it never reaches zero. There's always a gap between "think and write" and "think, translate to prompt, review, iterate."
The hidden benefit: what you learn
And here's what balances the books in a way few people mention. AI exposes you to patterns you wouldn't have tried. Libraries you didn't know. Approaches from different domains.
Over these months, Claude Code has shown me ways to structure components I wouldn't have discovered on my own. TypeScript patterns that don't appear in the usual tutorials. Accessibility solutions that weren't on my radar. The educational value of working with an agent that has processed millions of repos is hard to quantify, but it's real.
It doesn't replace studying or direct experience. But it's a learning accelerator that's underpriced in the cost conversation.
My real numbers
After months of using AI as my primary tool, here are the numbers without polish:
- Monthly cost: ~$150 between API, subscriptions, and complementary tools
- Time saved: ~15-20 hours per month (measured with real tracking, not optimistic estimates)
- Output quality: ~85% of what I'd write manually. Improves with better specs and more mature conventions
- Attention: the first weeks cost more than expected. Now the prompt engineering overhead is integrated into my natural flow
- Learning: unquantifiable, but significant. Patterns, libraries, and approaches I wouldn't have explored on my own
If I bill at $50/h, those 15-20 saved hours represent $750-1,000 in value. Against $150 in costs, the ROI is clear. But it's not magic — it's an investment with variable returns that depends on how much you invest in the system surrounding the agent.
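That back-of-the-envelope math is worth making explicit, because the ROI formula is the one number you should recompute with your own rate and your own tracked hours. A minimal sketch using the figures above (the function name is mine):

```python
def roi(hourly_rate: float, hours_saved: float, monthly_cost: float) -> float:
    """Net return per dollar spent: (value recovered - cost) / cost."""
    value = hourly_rate * hours_saved
    return (value - monthly_cost) / monthly_cost

# $50/h, 15-20 hours saved, ~$150/month in costs:
print(roi(50, 15, 150))  # 4.0 -> every dollar spent returns $4 net
print(roi(50, 20, 150))  # ~5.67 at the high end
```

The formula also shows where the ROI breaks: halve the hours saved (bad specs, no conventions) or double the spend (long unfocused sessions) and the multiple shrinks quickly. The system around the agent is what keeps both variables honest.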
The investment, not the shortcut
AI for development is an investment. Like any investment, it requires initial capital (money, time learning, configuration), maintenance (updating conventions, improving specs, adapting workflows), and carries risk (dependency, technical debt, rising costs).
The return is real. But only if you do honest accounting. Not the demo accounting where everything works perfectly. The day-to-day accounting where the agent loses track, where you rewrite the output, where the session drags on longer than necessary.
This is the tenth and final article in the series. The complete journey:
- My AI Stack — the tools
- Why Your Agent Loses Track — tokens and memory
- From Tools to System — reusable skills
- From Prompt to Component — Figma to code
- MCP Servers — connecting tools
- Designing with AI Without Losing Judgment — design decisions
- Design Systems for Agents — systems AI understands
- Spec-Driven Development — plan before coding
- Testing and Code Review with AI — automated quality
- The Real Cost — this article
If you're starting with AI for development, don't start with the tools. Start with conventions. And do the math from day one.