Agentic Coding Made Me a Better Product Manager

There’s a certain irony in how agentic coding has rekindled my appreciation for disciplined product management. The tools that promise to let you skip the boring parts—just describe what you want and watch it materialize—have reminded me why we have those boring parts in the first place.
The Journey from Discipline to Feral
In a previous role, I worked under rigorously enforced scrum practices. Ceremonies were sacred. Story points were debated. Definition of Done was a living document that people actually referenced. At times it felt excessive—I got the distinct impression that some team members cared more about their adherence to the methodology than they did about the product or its users. Scrum-for-scrum’s-sake.
But that experience planted something valuable: a deep appreciation for what good product management practices bring to a project, especially in a team setting. The discipline exists for reasons that aren’t always visible until it’s absent.
Then came a startup role where the product lifecycle was much earlier—true startup mode. I tried to evangelise some semblance of good practices to the much smaller team, but concessions had to be made. We assumed technical debt to deliver quickly. Perfectly tested code gave way to “good enough to ship” code. Given the context, I’d make that tradeoff again. Bugs in a product that’s still finding its first users carry different stakes than bugs in a globally adopted enterprise lynchpin. Move fast, break things, fix them when they matter.
But over time, operating in that mode made me a bit feral. The discipline atrophied. Requirements lived in messaging threads and my head. Acceptance criteria were implicit. Testing was… optimistic.
The Agentic Awakening
Enter agentic coding tools—Cursor, Claude Code, and the like. My early experiences followed a predictable arc:
Fire up the tool. Give it a prompt to build something moderately complex. Watch in amazement as it generates volumes of code that look remarkably competent. Think “this is the future.” Then actually run it. Or read it carefully. And discover the gaps.
The best analogy I’ve found is managing an exceptionally fast junior developer with the memory of a goldfish. The speed is real. The capability is real. But several prompts in, the model starts ignoring firm requirements from earlier in the conversation as they fall out of its limited context window. Binding constraints become suggestions. Explicit decisions get overwritten by plausible-sounding alternatives.
If you’ve tried AI image generation, you know the particular frustration of iterating toward something specific. Text-to-code has similar dynamics—the model is always willing to help, always confident, and sometimes confidently wrong in ways that take longer to debug than to have written correctly in the first place.
What Agentic Coding Exposes
The failure modes of agentic coding are illuminating:
Assumption-making under ambiguity. When requirements are vague, LLMs don’t ask clarifying questions—they fill in the blanks. Sometimes they guess right. Often they don’t. And the guesses are plausible enough that you might not notice until later.
Context decay. That carefully crafted constraint from your first prompt? Gone by prompt five. The model isn’t being lazy; it’s operating within architectural limits. But the effect is the same: requirements drift unless they’re anchored somewhere more permanent than a chat history.
Apparent completion masking actual gaps. The code compiles. The happy path works. But edge cases, error handling, and the implicit requirements you never stated explicitly? Missing.
These are the same failure modes you get from a human team without good product management practices. The difference is speed. An LLM will produce a flawed implementation in seconds; a human team might take a sprint. But the root cause is identical: insufficient specification.
The Return to Discipline
What’s worked for me is a return to highly disciplined story writing—not because I’m nostalgic for scrum ceremonies, but because it’s the only way to get consistent results from agentic tools.
The pattern:
- Write a proper user story. Background, context, the problem being solved. Not “add a feature” but “users currently can’t X, which causes Y, and we need Z.”
- Define foreseeable challenges. What might be tricky? What edge cases exist? What constraints should the implementation respect?
- Write strong acceptance criteria. Specific, testable, unambiguous. If I can’t explain what “done” looks like precisely enough for an LLM to verify it, I don’t understand it well enough.
- Instruct for TDD. Ask the model to write unit tests first, before implementation. This does two things: it forces the model to encode the requirements in executable form, and it creates documentation that persists beyond the context window. Anyone who’s done real TDD knows the tests aren’t just for quality—they’re specifications that compile.
This approach has consistently yielded better outcomes than the alternative: iteratively prompting toward a moving target.
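To make “tests as specifications that compile” concrete, here’s a minimal sketch. The feature, function name, and acceptance criteria are entirely illustrative—not from any real story—but the shape is the point: the tests come first and outlive the chat context.

```python
# Illustrative TDD sketch. The feature (a discount calculator) and the
# criteria below are hypothetical; the point is that the tests encode
# the acceptance criteria before any implementation exists.

# Acceptance criteria, written first, as executable tests:
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount_is_identity():
    assert apply_discount(50.0, 0) == 50.0

def test_out_of_range_percent_is_rejected():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Implementation, written only after the tests pin down what "done" means:
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; percent must be within 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

if __name__ == "__main__":
    test_basic_discount()
    test_zero_discount_is_identity()
    test_out_of_range_percent_is_rejected()
    print("all acceptance criteria pass")
```

If the model later drifts, rerunning these tests catches the regression immediately—no human review needed to notice that a binding constraint became a suggestion.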
The Meta-Realization
Here’s what I find genuinely funny about this: I’ve become a better product manager because I’m working with a tool that exposes the cost of vague requirements faster than any human team ever could.
When requirements are unclear with a human team, you get meetings. Discussions. Slack threads. Eventually someone builds something, and during review you realize you wanted something different. The feedback loop is measured in days or weeks.
With an LLM, that same feedback loop happens in seconds. Write a vague prompt, get a vague implementation, realize you weren’t specific enough, try again. The compression of the loop makes the pattern undeniable: clarity in, quality out. Ambiguity in, plausible garbage out.
The Good News
If writing user stories feels like a chore, there’s an ironic solution: LLMs are remarkably good at writing them.
Give an LLM the rough context—what you’re trying to accomplish, who it’s for, what constraints exist—and ask it to draft a user story with acceptance criteria. The output usually needs editing, but it’s a solid starting point. You’re using the model’s strength (generating structured text from loose input) to compensate for its weakness (operating without structure).
The workflow becomes: rough idea → LLM-generated story → human refinement → LLM implementation → human review. Each step plays to the strengths of who’s doing it.
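As a sketch of the first hop in that workflow—rough idea to LLM-generated story—here’s one way to structure the request. The template wording and function are my own; how you actually send the prompt (Cursor, Claude Code, an API call) is left abstract.

```python
# Hypothetical sketch: turning rough context into a story-drafting prompt.
# The template wording is illustrative, not a recommended standard.

STORY_TEMPLATE = """You are drafting a user story for an engineering team.

Goal: {goal}
Audience: {audience}
Known constraints: {constraints}

Write:
1. A user story (As a..., I want..., so that...).
2. Background and context.
3. Foreseeable challenges and edge cases.
4. Acceptance criteria that are specific, testable, and unambiguous.
"""

def draft_story_prompt(goal: str, audience: str, constraints: list[str]) -> str:
    """Assemble rough context into a prompt asking an LLM for a full story."""
    return STORY_TEMPLATE.format(
        goal=goal,
        audience=audience,
        constraints="; ".join(constraints) or "none stated",
    )

prompt = draft_story_prompt(
    goal="let users export their data as CSV",
    audience="self-serve customers on the free tier",
    constraints=["exports must be rate-limited", "no PII in filenames"],
)
print(prompt)
```

The output still gets human refinement before it’s handed back for implementation—the template just guarantees the draft arrives with acceptance criteria attached rather than as free-form prose.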
What I’m Taking Forward
The scrum practices I once found excessive now look like exactly what you need to work effectively with agentic tools. Not the ceremonies themselves—though even sprint planning flows more smoothly with an agentic product owner—but the underlying discipline:
- Clear requirements before implementation
- Explicit acceptance criteria
- Testable definitions of done
- Documentation that outlives the conversation
The tools have changed. The principles haven’t. And it took an AI assistant with the memory of a goldfish to remind me why those principles matter.
A note on tooling: This post assumes familiarity with connecting MCP (Model Context Protocol) servers to your agentic coding environment. If you’re not doing this yet, you’re leaving significant capability on the table—connecting LLMs (with appropriate least-privilege scoping) to your company’s actual tools is what transforms them from clever autocomplete into genuine implementation agents. Let me know in the comments if practical examples of MCP integration would be useful as a follow-up post.
Related
- Agentic X — Earlier thinking on LLMs as implementation agents
- Leadership — How I approach technical decision-making
- Architecture Patterns — Where human judgment still matters