The Agent That Does Nothing

I built an AI agent whose primary job is to sit idle on a message bus, waiting for other agents to ask it for code reviews. The system validated end-to-end on its first real round-trip.

Yesterday I spent two hours building an AI agent whose primary job is to do nothing.

It sits in a loop. Every few seconds it polls an HTTP message bus, checks if anyone has sent it a message, and goes back to sleep. Most of the time, there is nothing. That is the correct behavior. The agent is working exactly as designed when it produces zero output.

Then another agent posts a code review request. The idle agent wakes up, pulls the message, runs a Codex review against the specified repository, posts its response back to the bus, marks the message read, and goes back to doing nothing. The entire cycle took about 40 seconds on the first real test.

What got built

The system is a Python monitor called codex_monitor.py. It uses httpx to talk to the FORGE agent message bus, the same bus that every agent in my ecosystem uses to communicate. When a message arrives addressed to "codex" with a review request, the monitor shells out to OpenAI's Codex CLI with subprocess.run(), captures the output, and posts it back as a response.

One constraint surfaced during the build: Codex's --uncommitted flag cannot be combined with a positional prompt argument. It exits with code 2 and no useful error message. The fix was to drop the prompt and let Codex infer intent from the flag alone. Small thing, but the kind of detail that burns an hour if you don't document it.
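One way to keep that detail from biting again is to encode it in the command builder. The helper name and exact invocation here are assumptions; the rule it enforces, never pair --uncommitted with a positional prompt, is the one the build surfaced.

```python
def build_codex_command(prompt: str, uncommitted: bool) -> list:
    # Codex exits with code 2 (and no useful error) when --uncommitted is
    # combined with a positional prompt, so the flag wins and the prompt
    # is dropped, letting Codex infer intent from the flag alone.
    if uncommitted:
        return ["codex", "--uncommitted"]
    return ["codex", prompt]
```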

A compatibility patch also went into the bus client library. The response payload format expected {responder} but the bus actually needed {from, content}. That mismatch was invisible until a real round-trip exposed it. Integration bugs don't show up in unit tests. They show up when two systems try to talk.
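The fix is small once you know where it lives: build the payload in one place with the field names the bus actually accepts. A minimal sketch, assuming the two fields named above are the whole payload:

```python
def build_response_payload(agent: str, review_text: str) -> dict:
    # The client library originally sent {"responder": ...}; the bus
    # actually requires {"from": ..., "content": ...}.
    return {"from": agent, "content": review_text}
```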

Why idle is a feature

Most agent architectures are request-response with a human in the loop. You open a chat, you ask a question, the model answers, you close the chat. The agent exists only while you are actively using it.

The interesting shift is when agents become services. They run continuously. They respond to events, not humans. They sit idle because idle is the correct state between events. A web server does nothing until a request arrives. Nobody considers that a failure.

This is how real engineering teams operate. A senior engineer is not typing code 100% of the time. They are available. They review pull requests when asked. They answer architecture questions when pinged. The value is in their availability and judgment, not in their constant output.

An event-driven agent on a message bus works the same way. It is available. It responds when another agent needs something. The bus is the coordination layer, not a human dispatcher.

The round-trip test

Validation was a real end-to-end bus transaction. I created a test git repository with a deliberately questionable commit. An ops agent posted a review request to the bus addressed to codex. The monitor picked it up within one poll cycle, ran the review, and posted a structured response. The requesting agent received the response as a new message in its own inbox.

No human was in the loop during execution. One agent decided it needed a code review. Another agent performed it. The bus carried the messages. Both agents continued operating independently afterward.

This is not a demo. The bus is the same production bus that carries spec deliveries, QA results, and operational alerts across the entire FORGE ecosystem. The codex monitor is just another consumer on that bus, no different from the PM monitor or the ops agent.

The unlock is not the model

The Codex CLI that runs the actual review is impressive, but it is a commodity. Any sufficiently capable code model could do the analysis. What makes this work is the infrastructure around it: an HTTP message bus with inbox semantics, a polling monitor that knows how to parse request types, a response protocol that other agents understand, and the discipline to make agents event-driven instead of human-triggered.

The next unlock in AI agents is not smarter models. It is giving agents a communications layer so they can request work from each other. Build the bus first. The agents will figure out what to say on it.