My AI philosophy

AI as leverage, not autopilot
As AI programming tools become increasingly integrated into our development workflows, I believe it’s important to establish a thoughtful philosophy for their use. This post is my attempt to explain how I think about AI-assisted programming, and how I actually use these tools day to day. First, let’s establish some context.
Does an AI tool have a bigger impact on productivity than solving the right customer problem? No. Is it more important than human interactions for producing the best outcomes? Definitely not. It’s a tool, not a panacea.
I’m mostly optimistic about AI tools. I use them all the time. I also think they’re easy to misuse, and that misuse shows up very quickly on real teams.
What follows isn’t a list of tools or prompts. It’s a philosophy.
The hard part of programming hasn’t changed
There’s a persistent idea that AI is “doing the programming for us now.” That was never really true.
The hard part of programming has never been typing code. The hard part is turning vague, sometimes contradictory human intent into something precise enough that a computer can execute it reliably.
That observation isn’t new. Dijkstra observed it nearly 50 years ago, long before LLMs entered the picture. The tools change; the core difficulty doesn’t.
AI removes a lot of labor. It removes friction. It removes the parts of the job that many of us never particularly enjoyed: boilerplate, busywork, hunting through documentation, and debugging trivial mistakes.
What it does not remove is the need to know exactly what you’re asking for.
AI amplifies expertise; it doesn’t replace it
A good mental model for an LLM is that of a very smart junior developer who has read every textbook in the world and none of your internal documentation.
It knows patterns. It knows syntax. It knows how similar problems are usually solved.
What it doesn’t know is:
- why your system is shaped the way it is
- which tradeoffs you already rejected
- what constraints actually matter in your environment
That context has to come from somewhere. In practice, it comes from experienced engineers.
This lines up with what people like Will Larson and Simon Willison have been saying from different angles: real AI leverage happens when domain knowledge, tool fluency, and production experience intersect.
AI is an amplifier. If those things are present, it multiplies their impact. If they’re missing, it mostly amplifies confusion.
My job is managing context, not writing lines of code
When I work with AI tools, I spend far less time typing code and far more time doing things like:
- writing clear specifications
- deciding what information belongs in context
- removing information that doesn’t belong in context
- reviewing output carefully
With this shift, programming becomes a bit less about expression and a bit more about curation.
If you just keep piling context into a conversation, performance degrades. The model gets slower, more expensive, and—counterintuitively—less useful. So I reset often. I summarize. I prune aggressively.
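To make that concrete, here is a rough sketch of what "summarize and prune" can look like in code. Everything in it is a stand-in: the token budget, the message format, and the summarize() helper are placeholders for whatever your actual tooling provides, not a real API.

```python
# A minimal sketch of the "summarize and prune" discipline described above.
# All numbers and helpers here are hypothetical placeholders.

MAX_CONTEXT_TOKENS = 8_000   # assumed budget; real limits vary by model
KEEP_RECENT = 6              # always keep the last few exchanges verbatim


def estimate_tokens(messages: list[dict]) -> int:
    # Crude stand-in: ~4 characters per token is a common rule of thumb.
    return sum(len(m["content"]) for m in messages) // 4


def summarize(messages: list[dict]) -> str:
    # Placeholder: in practice this would be another model call or a
    # hand-written summary of the decisions made so far.
    return "Summary of earlier discussion: " + "; ".join(
        m["content"][:80] for m in messages if m["role"] == "user"
    )


def prune_context(messages: list[dict]) -> list[dict]:
    """Reset the conversation to a summary plus the most recent exchanges."""
    if estimate_tokens(messages) <= MAX_CONTEXT_TOKENS:
        return messages
    older, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    return [{"role": "system", "content": summarize(older)}] + recent
```

The details don't matter much. What matters is that pruning is an explicit, recurring step rather than something I hope the tool handles for me.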
That work feels very familiar to me as a senior engineer. It’s the same skill set as writing a good design doc or onboarding a new teammate: decide what matters, explain it clearly, and remove everything else.
If that doesn’t sound enjoyable, AI-assisted programming probably won’t feel very fun.
Outcome-driven engineering matters more than ever
One shift I’ve noticed is a growing split between outcome-driven and process-driven engineering.
Several writers have pointed out this tension recently, including Ben Werdmuller. AI makes it much cheaper to get something testable in front of users. That's exciting if you care about learning quickly. It's threatening if you derive your sense of meaning primarily from the act of engineering itself.
Speed matters here, not because fast is virtuous, but because slow work locks in bad decisions. As Daniel Lemire has argued in a different context, expensive change encourages people to cling to somewhat obsolete solutions simply because replacing them would hurt.
Lowering the cost of building lowers the cost of being wrong.
That’s a feature.
Responsibility does not get delegated
One thing I feel very strongly about: “the AI wrote it” is not an excuse.
If I submit code, I am responsible for that code. Full stop.
This concern has been showing up more often in industry writing, especially from people who spend a lot of time reviewing code. When AI output is accepted uncritically, the burden shifts to reviewers, who become the first line of quality control instead of one of the last.
In practice, AI tools increase the importance of careful review. They can generate large amounts of plausible-looking output very quickly, including:
- dead code
- hallucinated APIs
- subtle logic errors
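To make "subtle logic errors" concrete, here is a hypothetical illustration; the function and the bug are invented for this post, not taken from real generated output.

```python
# Hypothetical example of plausible-looking output: it reads cleanly and
# passes a happy-path test, but silently drops the trailing partial page
# whenever the item count isn't an exact multiple of the page size.

def paginate(items: list, page_size: int) -> list[list]:
    pages = []
    for i in range(len(items) // page_size):  # bug: should round up, not down
        pages.append(items[i * page_size:(i + 1) * page_size])
    return pages
```

Nothing about it looks wrong at a glance, which is exactly the problem.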
A developer who blindly accepts that output is pushing quality control onto their teammates. That creates cognitive debt and burns trust.
Used well, AI clears space for better judgment. Used poorly, it moves even more work onto others.
What AI actually frees up: judgment and tradeoffs
At my experience level, typing code is not where I add the most value.
The value is in decisions:
- how a system should be structured
- which constraints matter
- which tradeoffs are acceptable
- what not to build
AI tools are excellent at handling mechanical expression. That’s a good thing. It leaves more room for the parts of the job that actually require experience.
When I spend long stretches manually typing code these days, it often feels like I’m not getting the most value out of my time.
Juniors, seniors, and AI
I’ve watched junior developers use AI well, and it’s impressive.
The biggest benefit isn’t that the AI “does the work.” It’s that it collapses the search space. Instead of spending hours figuring out which API to use, juniors can spend that time evaluating options the model surfaces.
When that saved time is invested in learning instead of feature churn, ramp-up can be dramatically faster.
The flip side is also true: AI can hide shallow understanding. That makes mentorship, review, and clear standards more important, not less.
More work will exist, not less
One thing I don’t buy (long term) is the idea that AI’s primary impact is replacing knowledge workers.
Making work cheaper doesn’t usually reduce how much work gets done. It increases it.
AI enables projects that wouldn’t have been started at all before. Experiments that would’ve been too expensive. Ideas that would’ve died in a backlog.
The real risk isn’t scarcity of work. It’s lack of focus.
When everything feels like it’s “just a prompt away,” discipline becomes a core skill.
How this shows up in my day-to-day work
Some simple rules I tend to follow:
- optimize for fast feedback loops
- write specs before prompts
- reset context aggressively
- review everything I submit
- treat AI output as a draft, not a decision
- use AI to buy time for thinking, not to avoid it
What this means for teams I join
Practically, this philosophy shows up in how I work with others.
I’m comfortable leading and reviewing AI-assisted work, but I’m not interested in teams where responsibility is fuzzy or quality is optional. I expect engineers to understand the code they ship, even when an AI helped produce it.
I’m outcome-oriented, but not reckless. I like fast feedback loops, clear ownership, and early validation. I’m also happy doing the unglamorous work of review, pruning, and course correction that keeps teams healthy as velocity increases.
If a team wants to use AI tools seriously, I bring experience operating at that layer: writing specs, managing context, designing guardrails, and keeping humans firmly in the loop.
Closing
The field of AI is rapidly evolving, and I’m sure my approach to using AI tools will evolve as well.
Today, I don’t think being “AI-native” means handing the wheel to a model.
It means knowing what to delegate, what to keep, and where human judgment is irreplaceable.
Used well, AI makes strong engineers more effective. Used poorly, it exposes weak habits very quickly.
I aim for the former.