From Flat Files to Postgres in One Sprint
Rewired a production dashboard from JSON flat files to Postgres in a single sprint. 101 Playwright tests. Four QA failures found, fixed, and verified before deployment.
I had a dashboard running on JSON flat files. It worked. Served its purpose for the prototype phase. But flat files don't do joins. They don't do materialized views. They don't do concurrent access without race conditions. And once I had 7,000+ Salesforce opportunities flowing into Postgres from earlier pipeline work, running a dashboard off JSON exports of that same data was just technical debt with a user interface.
So I wrote a spec: rewire server.js from flat file reads to Postgres queries. Keep every existing API endpoint. Keep the frontend unchanged. Swap the data layer underneath without breaking anything above it.
One sprint. One agent. One day.
The Build
Leroy took the spec and rewired the server. Every fs.readFile became a pool.query. The cache layer moved from file-stat checks to a TTL-based memory cache backed by SQL. Fallback logic: if Postgres goes down, the dashboard degrades gracefully to cached data instead of crashing.
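A minimal sketch of that fallback path, assuming a node-postgres-style pool. The helper name and cache shape are illustrative; only `pool.query` and the degrade-to-cache behavior come from the build:

```javascript
// Hypothetical fallback wrapper: on query failure, flip to degraded
// mode and serve the last cached rows instead of returning a 500.
const cache = new Map();
let pgAvailable = true;

async function queryWithFallback(pool, key, sql, params = []) {
  if (pgAvailable) {
    try {
      const { rows } = await pool.query(sql, params);
      cache.set(key, rows);   // refresh cache on every successful read
      return { rows, degraded: false };
    } catch (err) {
      pgAvailable = false;    // flag the outage; recovery restores it
    }
  }
  // Degraded path: stale data beats a crashed dashboard.
  return { rows: cache.get(key) ?? [], degraded: true };
}
```

Because the pool is passed in rather than imported, the same wrapper works against a real connection pool or a stub in tests.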
The part I cared about most was the materialized views. Year-to-date pipeline, win rates, revenue by category. Views that refresh on a schedule and serve pre-computed aggregates instead of running expensive queries on every page load. Six views, all wired into the API.
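A sketch of what one of those views and its refresh schedule might look like. The view name, columns, and interval are illustrative, not from the post; only the pattern (pre-computed aggregates, refreshed on a schedule) is:

```javascript
// Hypothetical materialized view: year-to-date pipeline by category.
const CREATE_VIEW_SQL = `
  CREATE MATERIALIZED VIEW IF NOT EXISTS mv_pipeline_ytd AS
  SELECT category,
         COUNT(*)    AS opportunity_count,
         SUM(amount) AS total_amount
  FROM opportunities
  WHERE close_date >= date_trunc('year', now())
  GROUP BY category;
`;

// Recompute aggregates in the background so page loads stay cheap.
function scheduleRefresh(pool, viewName, intervalMs) {
  return setInterval(() => {
    pool.query(`REFRESH MATERIALIZED VIEW ${viewName}`).catch(() => {
      // tolerate transient failures; the next tick retries
    });
  }, intervalMs);
}
```

The API then reads from `mv_pipeline_ytd` with a plain `SELECT`, never paying the aggregation cost on a request.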
The QA Cycle
First QA pass: 9 out of 14 criteria passed. Four real failures.
The auto-recovery mechanism was missing a setInterval probe. If the Postgres connection pool threw an error, the server flagged pgAvailable = false but never proactively checked if the connection came back. It would only recover on the next user request that happened to try a query. Fix: dedicated health check interval that probes with SELECT 1 and restores the flag.
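The fix can be sketched in a few lines. `pgAvailable` and the `SELECT 1` probe come from the post; the function signature and 10-second interval are assumptions:

```javascript
// Dedicated health check: proactively probe Postgres and restore the
// availability flag, instead of waiting for a user request to notice.
function startHealthCheck(pool, state, intervalMs = 10000) {
  return setInterval(async () => {
    if (state.pgAvailable) return;    // nothing to recover
    try {
      await pool.query('SELECT 1');   // cheap liveness probe
      state.pgAvailable = true;       // connection is back; resume SQL path
    } catch (err) {
      // still down; keep serving cached data until the next probe
    }
  }, intervalMs);
}
```

Holding the flag on a shared `state` object keeps the probe decoupled from the request handlers that read it.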
Cache TTL was set to 90 seconds instead of the spec's 60 seconds. Two places in the code. Minor but wrong.
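That class of bug disappears if the TTL lives in exactly one place. A minimal sketch, assuming the spec's 60-second window (the constant and class names are illustrative):

```javascript
// Single source of truth for the spec's 60-second TTL.
const CACHE_TTL_MS = 60 * 1000;

class TtlCache {
  constructor(ttlMs = CACHE_TTL_MS, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;             // injectable clock for testing
    this.store = new Map();
  }
  set(key, value) {
    this.store.set(key, { value, expires: this.now() + this.ttlMs });
  }
  get(key) {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (this.now() > hit.expires) {
      this.store.delete(key);   // stale: evict and report a miss
      return undefined;
    }
    return hit.value;
  }
}
```

Two call sites, one constant: the 90-vs-60 drift can't happen.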
index.html had uncommitted changes left over from a previous branch. A git checkout restored it.
Query pattern recording was broken because the SFDC proxy endpoint bypassed the SQL path entirely. The /api/query-sfdc handler was passing requests straight to Salesforce without touching the pattern recorder. Fix: add the record_query_pattern call in that handler.
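A sketch of the fixed handler. `record_query_pattern` is the call named in the post; the factory shape, `recordQueryPattern`, and `forwardToSalesforce` are hypothetical stand-ins for the real recorder and proxy logic:

```javascript
// Proxy handler for /api/query-sfdc. The original bug: the recording
// call below was missing, so proxied queries bypassed the recorder.
function makeSfdcProxyHandler(recordQueryPattern, forwardToSalesforce) {
  return async (req, res) => {
    await recordQueryPattern(req.body.query);     // the fix
    const result = await forwardToSalesforce(req.body);
    res.json(result);
  };
}
```

Injecting both collaborators makes the "did we record before forwarding?" property trivial to assert in a test.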
Four failures. Four fixes. Second QA pass: all four targeted criteria passed. Playwright suite: 101 out of 101.
Deployment
Rsync from the dev machine to the production server. npm install. Environment variables set for the new directory structure. Server started on port 8401. launchd plist deployed with KeepAlive and RunAtLoad so it survives reboots.
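For reference, a launchd plist with those two keys looks roughly like this. The label, binary path, and script path are placeholders; only `KeepAlive`, `RunAtLoad`, and port 8401 come from the deployment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.dashboard</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/node</string>
    <string>/path/to/server.js</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PORT</key>
    <string>8401</string>
  </dict>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```

`RunAtLoad` starts the server at boot; `KeepAlive` restarts it if the process dies.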
One Playwright test had flagged a pre-existing issue: dollar amounts not rendering on the default tab load. Not a regression. Pre-existing. Noted, not blocked.
Why This Matters
The interesting thing about this sprint wasn't the technology. Postgres is not news. The interesting thing was the process.
I wrote a spec. An AI agent built it. A different QA pass found four real bugs. The agent fixed all four. A second QA pass verified the fixes. Then the ops agent deployed it to production and set up persistence. Total calendar time: about 6 hours. Total human time: writing the spec and reviewing the QA reports.
That's the leverage. Not "AI writes code." AI executes a complete engineering workflow: build, test, fix, verify, deploy. With real QA that catches real bugs. Not a rubber stamp. Not "looks good to me." Actual test criteria evaluated against actual behavior.
The flat files are gone. The dashboard runs on Postgres. And I have 101 green tests to prove it works.