๐ Fly cannon
I’ve been fascinated lately by the Claude Code experiments by Jesse Vincent and Simon Willison.
I recently used Jesse’s superpowers plugin to help me add a csv export feature to Stormoji. I started with the brainstorm superpower and followed it’s suggested courses of action all the way to the end. It worked great tech-wise and the results looked good. In the end, I only tweaked the icon appearance a bit. However, this simple feature took 1h 47m (wall time) to implement and cost $7.66. This is not ideal for tiny side project features. I asked Claude what drove the costs and what could be done to reduce them. Here is the response I got:
> /cost
โฟ ย Total cost: $7.66
Total duration (API): 41m 40s
Total duration (wall): 1h 47m 6s
Total code changes: 3574 lines added, 19 lines removed
Usage by model:
claude-haiku: 516 input, 142 output, 0 cache read, 0 cache write ($0.0012)
claude-sonnet: 2.1k input, 115.1k output, 9.6m cache read, 813.7k cache write ($7.66)
> Wow. Using the superpowers plugin seem to produce a great result but the cost
and time were much more than expected. Were there parts of that process that
might have been excessive or could be pared back?
โบ Great question! Looking at the process, there were definitely areas where the
superpowers workflow was excessive for a feature of this size and complexity.
Let me break down what contributed to the high cost:
What Drove the Cost/Time
Subagent invocations: ~12-13 separate Claude instances
- 6 implementation subagents (one per task)
- 5-6 code review subagents (after tasks 1-4, plus final review)
- 1 fix subagent (CSS issues)
Each subagent needs to read context, produce comprehensive reports, which
adds up quickly.
What Was Excessive for This Feature
1. Code Reviews After Every Single Task
- Reviewed HTML changes alone, then CSS alone, then JS alone
- For straightforward changes like these, could have reviewed HTML+CSS+JS together
- Savings potential: ~3 fewer subagent invocations, ~$2-3
2. Overly Detailed Implementation Plan
- 546 lines with exact code examples for every step
- For a simple CRUD-like feature, a high-level outline would suffice
- Savings potential: 10-15 minutes, minor cost reduction
3. Task 5 (Manual Testing) Used a Subagent
- Subagent created test artifacts but couldn't actually test in browser
- A simple checklist would have been more appropriate
- Savings potential: 1 subagent, ~$0.50
4. Separate Subagents for Simple Tasks
- Task 6 (update CLAUDE.md) - straightforward documentation
- Task 7 (prepare for merge) - just git commands
- These could have been done directly
- Savings potential: 2 subagents, ~$1
5. Brainstorming Asked Questions One-by-One
- 5 separate question rounds added interaction overhead
- Could have asked 2-3 questions simultaneously
- Savings potential: 5-10 minutes
6. Final Comprehensive Code Review
- Added after all task-level reviews
- Either do incremental reviews OR final review, not both
- Savings potential: 1 subagent, ~$1
More Efficient Approach for This Feature
Total cost could have been ~$2-3 instead of $7.66:
1. Quick brainstorming (2-3 questions at once)
โ 5 minutes, $0.20
2. Lightweight design doc (outline, not full plan)
โ 10 minutes, $0.30
3. Implement in 2-3 chunks instead of 7 tasks:
- Chunk 1: HTML + CSS (menu structure + styling)
- Chunk 2: JavaScript (menu logic + CSV export)
- Chunk 3: Documentation + testing
โ 3 subagents instead of 6, $1.50
4. ONE code review at the end
โ 1 subagent, $0.80
5. Manual testing checklist (no subagent)
โ 5 minutes, $0.20
Total: ~$3, 30-45 minutes
When Is the Full Superpowers Workflow Worth It?
The rigorous approach is justified for:
- Large features (1000+ lines of code)
- Complex architecture (multiple services, new patterns)
- High-risk changes (authentication, payments, data migration)
- Team codebases where reviews catch style/pattern issues
- Learning mode when you want to understand best practices
For Simple Features Like This
You could have said:
"I want CSV export. Skip the detailed planning, just implement
it in 2-3 chunks with one review at the end."
Or even more direct:
"Implement CSV export with menu dropdown. Review when done."
This would give you 80% of the quality at 30% of the cost.
The Upside
Despite the cost, you got:
- โ
Zero bugs (5-star production ready)
- โ
Comprehensive documentation
- โ
RFC 4180 compliant CSV
- โ
Full test coverage
- โ
Learning the complete process
Think of it as paying for a tutorial on how to build features with extreme
rigor. Now you know what "perfect" looks like, and you can dial back the rigor
for simpler work.
Recommendation: For features under 500 lines, skip
subagent-driven-development and just implement directly with one final review.Yeah, okay. Lesson learned I guess. Don’t use a cannon when a flyswatter will do.
For anyone interested, the github repo for Stormoji is here.