I Named My AI Engineering Lead After the Father of Product Management
Why I gave my AI agent orchestration layer a proper name, what it means for role separation in AI systems, and what a 1931 P&G memo has to do with modern agent architecture.
Last Saturday I opened two terminal windows on my MacBook, typed a question into each one, and watched two AI agents have completely different reactions to the same codebase.
Terminal one, the Product Manager: "That's engineering work. Not my job."
Terminal two, the Engineering Lead: "Ready for a spec when you are."
That was the moment I knew the system worked. Not because the code ran. Because the roles held.
Why an AI agent needs a name
The engineering lead is called Leroy. Named after Neil H. McElroy, the Procter & Gamble executive who wrote the "Brand Men" memo in 1931. That memo created the concept of product management as we know it. One person owns one product. Full accountability. Clear boundaries. No committee diffusion.
McElroy. Leroy. Also carries a bit of Leeroy Jenkins energy: charge in, execute, figure it out.
The name matters because names create expectations. When I type into the PM terminal, I'm talking to an agent that plans, specs, and delegates. It cannot run code. It cannot edit files. Those tools are literally removed from its context. When I type into Leroy's terminal, I'm talking to an agent that decomposes specs, spawns sub-agents, and ships code. It does not set requirements. It does not make product decisions.
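To make the separation concrete, here is a minimal sketch of role enforcement as configuration rather than convention. Everything in it (the tool names, the `AgentConfig` shape) is hypothetical illustration, not my actual setup:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentConfig:
    """Hypothetical agent definition: the tool list IS the boundary."""
    name: str
    tools: frozenset  # tools absent from this set never enter the agent's context

# The PM can plan and delegate, but has no way to touch code or files.
pm = AgentConfig(
    name="pm",
    tools=frozenset({"memory.query", "tasks.manage", "a2a.send"}),
)

# Leroy can build; the system prompt (not shown) forbids scope changes.
leroy = AgentConfig(
    name="leroy",
    tools=frozenset({"shell.run", "fs.read", "fs.write", "ssh.connect", "a2a.send"}),
)

def can_use(agent: AgentConfig, tool: str) -> bool:
    return tool in agent.tools
```

The point is that `can_use(pm, "fs.write")` is false by construction, not by request. The PM does not decline to edit files; it cannot.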
This is not a gimmick. This is enforcement. The same way McElroy's memo prevented brand managers from stepping on each other's territory, the system prompt and tool restrictions prevent my PM from writing code and my engineer from changing requirements.
The bridge problem nobody talks about
Before Leroy, I tried the obvious approach: two Claude Code sessions, one for planning and one for coding, with file-based handoff between them. Drop a spec in a shared folder. Have the coder pick it up.
It failed completely. The coding session cannot sit and monitor a file system. I had to manually tell it "go check the folder" every single time. I was the bridge. The entire point was to remove me as the bottleneck, and instead I became the postal service.
The fix came from an unexpected direction. Google's A2A protocol, the same agent-to-agent communication standard I was already using for cross-organization connections with a partner's security operations center, works just as well for internal agent routing. Same JSON-RPC. Same task lifecycle. Same message format. Just pointed at localhost instead of across the internet.
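The "same JSON-RPC, pointed at localhost" idea looks roughly like this. The general shape follows A2A (JSON-RPC 2.0, a message with text parts), but treat the method and field names below as illustrative rather than a spec reference:

```python
import json
import uuid

def build_task_request(spec_text: str) -> dict:
    """Build a JSON-RPC 2.0 request in the general shape A2A uses.
    Method and parameter names here are illustrative, not authoritative."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": spec_text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

# Same payload either way; only the endpoint changes.
request = build_task_request("Spec: add retry logic to the ingest worker")
payload = json.dumps(request).encode()
# urllib.request.urlopen("http://localhost:8080/", data=payload)  # send to Leroy
```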
A2A gives you something file-based handoff never can: task state. A spec goes from submitted to working to completed (or failed, or needs-input). The PM can check status without interrupting the build. The engineering lead can ask clarifying questions that surface as decision gates. Neither agent needs me in the middle.
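That lifecycle can be sketched as a small state machine. The state names mirror the ones above (submitted, working, completed, failed, needs-input); the transition table is my own illustration, not pulled from the A2A spec:

```python
# Which states a task may move into from each current state.
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "needs-input"},
    "needs-input": {"working"},  # PM answers the clarifying question
    "completed": set(),          # terminal
    "failed": set(),             # terminal
}

def advance(state: str, new_state: str) -> str:
    """Move a task to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# The PM can poll state at any point without interrupting the build:
state = "submitted"
state = advance(state, "working")
state = advance(state, "needs-input")  # Leroy opens a decision gate
state = advance(state, "working")      # PM responds, build resumes
state = advance(state, "completed")
```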
The real unlock is not AI. It is role separation.
Most people building with AI agents make them do everything. One super-agent that plans, codes, tests, deploys, and writes documentation. It works for small tasks. It falls apart for anything real, because the agent has no boundaries. It optimizes for completion, not correctness. It skips the review step because the review step is also its job.
McElroy understood this in 1931. One brand, one owner. Not because people are incompetent, but because accountability requires boundaries. When everyone is responsible, nobody is responsible.
The PM agent has access to the memory system, the task manager, and agent-to-agent communication. That is it. It cannot accidentally deploy something. It cannot quietly fix a bug without a spec. It is structurally prevented from crossing the line.
Leroy has full tool access: shell, file system, SSH, the works. But his system prompt says he does not set requirements, does not make product decisions without PM approval, and does not change scope mid-sprint. When he hits a decision point, he opens a gate and waits.
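The gate pattern is simple enough to sketch: the builder blocks on a question, and the answer arrives out of band from the PM (or a human). This `DecisionGate` class is a hypothetical illustration of the shape, not my implementation:

```python
import queue

class DecisionGate:
    """One open question, one answer. The builder waits; the PM decides."""

    def __init__(self, question: str):
        self.question = question
        self._answer = queue.Queue(maxsize=1)

    def answer(self, decision: str) -> None:
        """PM side: supply the decision."""
        self._answer.put(decision)

    def wait(self, timeout=None) -> str:
        """Builder side: block until a decision exists."""
        return self._answer.get(timeout=timeout)

gate = DecisionGate("Spec says Postgres, repo uses SQLite. Which one?")
gate.answer("SQLite for now; migration is a separate spec")
decision = gate.wait(timeout=1)
```

In a real system the two sides run in different processes and the gate surfaces as a needs-input task state, but the contract is the same: no decision, no progress.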
I tested it live. "Who are you?" Leroy described his full role, what he does, what he does not do, and that he reports to the PM. "Run echo hello." Executed. "Query the memory system." Connected, returned 14,651 knowledge chunks across four collections. Everything worked exactly as designed.
What this actually means for operators
There is a practical lesson here that has nothing to do with AI agent frameworks.
The organizations that struggle most with AI adoption are the ones that hand a single AI tool to a team and say "figure it out." No role definition. No boundaries. No accountability structure. Just vibes and a subscription.
That is the equivalent of hiring someone and not giving them a job description. You would never do that with a person. You should not do it with an agent.
If you are building anything with AI that touches real business operations, start with the org chart. Who plans? Who builds? Who reviews? Who decides? Then enforce those roles structurally, not with guidelines that can be ignored, but with tool access and system prompts that make crossing the line impossible.
McElroy did not write a suggestion memo. He wrote a policy. The brands at P&G did not overlap because the system would not allow it. Ninety-five years later, the same principle applies to AI agents.
Name your agents. Define their roles. Enforce the boundaries.
The work gets better when the system knows what it is not allowed to do.