Steps to AI Engineer
Engineering using AI is a different discipline. Some old skills matter less. Some matter far more. Most of us haven't figured out which yet.
I've felt a lot of cognitive dissonance in this area recently, so I'm jotting down some quick thoughts. These are pretty raw, and will definitely change / grow / adapt over time. See also LLM thoughts and Software Convictions, Mostly Earned.
- Don't outbuild your own experience.
Building things you don't understand leaves you unable to validate what you've helped build.
- Building the thing helps you understand the thing.
When you build something yourself, you build a mental model of it. Skip that and you're left maintaining something you never actually understood.
Build in pieces. Each step is a checkpoint where you can still reason about the whole.
- Don't argue with the AI.
If the AI keeps veering in directions you don't want, change your harness and start over. Arguing an AI back onto the right track is far more work than fixing the harness the agent runs in.
- Stop blaming the agent.
"Claude did" is the wrong framing. "I let Claude do" is more accurate. You are ultimately in charge of, and responsible for, the actions of your AI agents.
- More agents, more context switches.
Every agent you spin up is another thread you're managing in your head. You can keep notes, write docs, build systems around it. But your own working memory is the bottleneck, not the agent's. Know when you're saturated.
- Speed without understanding is unsustainable.
Going fast feels productive. But velocity without comprehension is just building debt you can't see yet. Slow down when you stop being able to explain what your code does.
- Coding etiquette still matters, more so with agentic development.
A 400-line PR with no structure, no commit narrative, and AI-generated variable names is not a gift to your team. If you wouldn't want to review it, don't send it.
- Sending raw AI output is rude.
Don't be that guy. If you don't have the time or the wherewithal to write something yourself, pasting the output of an AI is not going to help. If I wanted ChatGPT's opinion, I'd ask it for it.
- Testing is more important, not less.
When you're not sure what the AI wrote, tests are the only thing standing between you and production bugs.
In brownfield situations, if you have built up the context to rewrite that painful method, you also have the context to test the before and after.
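One way to "test the before and after" is a characterization test: pin down the current behavior of the painful method on a set of representative inputs, then hold the rewrite to the same answers. A minimal sketch in Python; `legacy_parse` and `new_parse` are hypothetical stand-ins for your old and rewritten code.

```python
def legacy_parse(line: str) -> dict:
    # The painful method you want to rewrite (stand-in implementation).
    key, _, value = line.partition("=")
    return {"key": key.strip(), "value": value.strip()}

def new_parse(line: str) -> dict:
    # The AI-assisted rewrite. It must match the legacy behavior,
    # including the weird edge cases you'd forgotten about.
    parts = line.split("=", 1)
    key = parts[0].strip()
    value = parts[1].strip() if len(parts) > 1 else ""
    return {"key": key, "value": value}

# Cases drawn from real inputs, including the ugly ones.
CASES = ["a=1", "  spaced = value ", "novalue=", "noequals", "a=b=c"]

def test_rewrite_matches_legacy():
    for case in CASES:
        assert new_parse(case) == legacy_parse(case), case
```

The point isn't the parser; it's that the legacy implementation acts as the oracle, so you don't need to understand every quirk before the rewrite, only to prove the quirks survived it.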
- Read before you accept.
The muscle to read code critically is more valuable now than the muscle to write it. A lot of people accept diffs they don't fully understand. It ran. The tests passed. Ship it. That's how bugs hide.
- Context is the skill.
Knowing what to tell the AI, what to leave out, and how to frame the problem. The prompt isn't the skill; the context curation is.
- AI amplifies your taste, not your talent.
If you know what good code looks like, AI gets you there faster. If you don't, it gets you to bad code faster.
- Throwaway code is fine. Throwaway understanding isn't.
Spike with AI all day. Delete it, learn from it, move on. The problem is when the spike becomes the product and nobody notices. Ship something bad, then make it better.