The AI Found the Bug I Missed
My engineering agent discovered fields omitted from a SOQL SELECT clause that had been silently corrupting data for weeks. The bug was invisible to humans because the WHERE clause worked fine.
I had a data gap in my Salesforce pipeline. Closed-won and closed-lost opportunities were landing in Postgres with is_closed = false and is_won = false. Every single one. Thousands of records, all wrong.
I knew the data was off because the materialized views downstream were empty. Win rates showing zero. Year-to-date closed revenue showing nothing. The numbers were there in Salesforce. They just weren't making it through.
My first instinct was to re-run the extract. Flush the data, pull fresh, let the upsert do its thing. So I wrote a spec for Leroy, my AI engineering agent, to re-run the SFDC extract on the production server.
Leroy started the task. And then, instead of just executing blindly, he read the code first.
The Bug Nobody Saw
Here's what Leroy found: the SOQL queries for closed-won and closed-lost opportunities were filtering on IsClosed and IsWon in the WHERE clause, but those fields were never included in the SELECT.
The Python code downstream called r.get('IsClosed', False) and r.get('IsWon', False). Since the fields weren't in the API response, every record defaulted to False. The WHERE clause worked perfectly. The data came back. But the critical boolean flags that the entire downstream pipeline depended on were silently zeroed out.
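The failure mode is easy to reproduce. Here's a minimal sketch (the record and field values are illustrative, not the actual pipeline data):

```python
# A closed-won opportunity as the API might return it when the SOQL
# SELECT omits IsClosed and IsWon. The WHERE clause filtered on those
# fields, but filtering does not add a field to the response payload.
record = {
    "Id": "006XX0000012ABC",
    "StageName": "Closed Won",
    # IsClosed and IsWon are absent from the response entirely.
}

# .get() with a default never raises -- it silently substitutes False.
is_closed = record.get("IsClosed", False)  # False, wrong
is_won = record.get("IsWon", False)        # False, wrong

print(is_closed, is_won)  # False False, for every record
```

No exception, no log line, no type error. The record looks complete and loads cleanly, which is exactly why the corruption went unnoticed.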
The closed-lost query also omitted Type from SELECT, so those records arrived with type = null. Three missing fields in two queries. A three-line fix.
Why This Matters
I had looked at this code. Multiple times. The queries were long, the field lists were dense, and the WHERE clause was correct, so the results looked right at first glance. The bug was in the gap between what you ask for and what you use. The filter worked. The projection didn't match.
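That gap is mechanically checkable. A rough sketch of a lint that compares the WHERE clause against the SELECT list (naive regex parsing, flat queries only, no subqueries or function calls; the helper name is mine, not from the pipeline):

```python
import re

def missing_from_select(soql: str) -> set[str]:
    """Return field names referenced in simple WHERE comparisons
    (e.g. "IsClosed = true") that are absent from the SELECT list.
    A hedged sketch, not a real SOQL parser."""
    select_m = re.search(r"SELECT\s+(.+?)\s+FROM", soql, re.I | re.S)
    where_m = re.search(r"\bWHERE\s+(.+)$", soql, re.I | re.S)
    if not select_m or not where_m:
        return set()
    selected = {f.strip() for f in select_m.group(1).split(",")}
    # Field names on the left side of "=" comparisons in the WHERE clause.
    referenced = set(re.findall(r"\b([A-Za-z_]\w*)\s*=", where_m.group(1)))
    return referenced - selected

query = ("SELECT Id, StageName FROM Opportunity "
         "WHERE IsClosed = true AND IsWon = true")
print(missing_from_select(query))  # {'IsClosed', 'IsWon'}
```

A check like this, run in CI against the extract's queries, would have flagged the mismatch before any human or agent had to read the field list.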
Leroy caught it because he read the extract script, traced the data flow into the upsert function, and noticed the mismatch. He didn't just run the command. He diagnosed the root cause and escalated a blocker: "The spec says DO NOT modify sfdc_extract.py, but without fixing 3 SELECT clauses, the data gap cannot be resolved by re-running alone."
That's a sentence I want to sit with. The AI agent identified that my spec was wrong, explained why, proposed the minimal fix, and asked for authorization to deviate.
The Uncomfortable Part
I'm the operator. I wrote the original extract script. I built the upsert logic. I knew every field in the schema. And I missed this. Not because it was complex. Because it was boring. Dense SOQL field lists are the kind of thing your eyes glaze over. You check the WHERE clause, confirm the right records are coming back, and move on.
The agent didn't glaze over. Agents don't get bored. They don't pattern-match on "this looks right" and skip ahead. They trace every reference.
The Takeaway
The most dangerous bugs aren't the ones that throw errors. They're the ones where the system runs fine and the data looks plausible but the semantics are wrong. Silent data corruption.
AI agents aren't replacing the thinking. They're catching the things you stop thinking about. The field list you've read twelve times. The default value that kicks in when a key is missing. The gap between your query and your code.
I approved the three-line fix. Leroy applied it, re-ran the extract, and 2,247 closed-won and 1,715 closed-lost opportunities landed with correct flags. Materialized views populated. Win rates showed up. The data pipeline is clean.
All because the agent read the code before running it.