Voice AI · Systems Architecture

You didn't build a system. You built a script.

Most AI systems collapse in production for a simple reason: they were designed to be controlled, not operated.

We embed everything into a single prompt — the role, the logic, the dialogue, the edge cases — then hand that prompt to people who need to change it constantly. We're surprised when it breaks the moment someone adjusts a sentence.

The problem isn't the model. The problem is the assumption that control has to live inside the prompt. It doesn't. And when you build like it does, you haven't built a system. You've built a very expensive script.

"Most AI systems fail the moment you hand them to operators. Not because the model breaks — but because the system was designed to be protected from the people who need to use it."

Colin P. Highland, PT, DPT

01 — The Problem

Nobody breaks the AI on purpose.

Here's what actually happens in production. A campaign team needs to shift messaging mid-week. Polling moved. The news cycle changed. A persuasion angle isn't landing with a specific voter segment. They need to adjust — today, not after a sprint.

So someone touches the prompt. And the system wobbles.

Because everything is in there. The identity. The tone. The logic. The compliance guardrails. The conversation flow. All of it packed into one document that was never designed to be edited by anyone who isn't the person who built it.

If your system breaks when someone edits a sentence, you didn't build a system. You built a script.

Teams end up with one of two choices. Lock it down: engineering owns the prompt, changes go through a queue, the campaign waits, the moment passes. Or open it up: anyone can edit, consistency disappears, the agent starts behaving in ways no one planned.

Both choices are symptoms of the same design failure. The system put control in the wrong place.

02 — A Framework

Separate behavior from direction.

Not one prompt — a system with distinct layers, each doing something different, each owned by the right person.

Layer 1 — Never changes · Architect-owned

The Agent — Behavior

The operating system. Defines how the agent speaks, listens, handles ambiguity, and what it will and won't do. Encodes professionalism, compliance, pacing, discipline. This layer is not the campaign's to edit.

Layer 2 — Fully editable · Operator-owned

The Cartridge — Direction

The call objective. What is this conversation trying to accomplish? What questions get asked? What message gets delivered? Fully editable by campaign staff — without engineering — on a Tuesday morning when the message needs to change.

Layer 3 — Living document · Campaign-owned

The Knowledge Base — Context

What's true. Approved messaging. Policy positions. Candidate biography. Issue framing. The answers the agent is allowed to give. This layer evolves with the campaign — not a locked configuration.

Three layers. Three owners. None of them in conflict. Control isn't centralized. It's appropriately distributed.
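
To make the split concrete, here is a minimal sketch in Python. Every name in it is illustrative, not a real framework; the structural point is that the agent layer lives in code and changes only through code review, while the cartridge and knowledge base are plain documents operators edit directly.

```python
# A minimal sketch of the three-layer split. All names are
# invented for illustration; what matters is where each layer
# lives and who is allowed to change it.

from dataclasses import dataclass

# Layer 1, architect-owned. Ships with the system and changes only
# through code review. Operators never touch it.
AGENT_CORE = """\
You are a professional phone agent. Speak plainly, listen fully,
ask when a response is ambiguous, and decline anything off-script."""

@dataclass
class Cartridge:
    # Layer 2, operator-owned. One file per call objective, edited
    # by campaign staff with no engineering in the loop.
    objective: str
    key_questions: list[str]

def build_prompt(cartridge: Cartridge, knowledge_base: str) -> str:
    # Layer 3 (knowledge_base) is the living document of approved
    # facts. The three layers only meet here, at call time.
    questions = "\n".join(f"- {q}" for q in cartridge.key_questions)
    return "\n\n".join([
        AGENT_CORE,                                     # behavior
        f"This call's objective: {cartridge.objective}\n{questions}",  # direction
        f"Approved facts you may rely on:\n{knowledge_base}",          # context
    ])
```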

03 — In Practice

What this looks like when it works.

Every deployment runs on four components: an outbound voice agent, an inbound support agent, a shared knowledge base, and an operations folder where the cartridges live. The agent knows how to be an agent — consistent tone, professional behavior, compliance guardrails baked into the foundation. None of that is up for negotiation.
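
One hypothetical wiring of those four components, continuing the sketch above. The paths and names are invented; the load-bearing detail is that both agents share a single knowledge base and a single cartridge folder.

```python
# A rough illustration of a deployment's shape. Paths and names
# are hypothetical, not a real product's layout.

from dataclasses import dataclass
from pathlib import Path

@dataclass
class Deployment:
    outbound_agent: str   # places calls; runs whichever cartridge is loaded
    inbound_agent: str    # answers calls; same core, a support cartridge
    knowledge_base: Path  # shared source of approved facts for both agents
    ops_folder: Path      # where campaign staff create and edit cartridges

campaign = Deployment(
    outbound_agent="voter-outreach",
    inbound_agent="supporter-line",
    knowledge_base=Path("operations/knowledge_base.md"),
    ops_folder=Path("operations/cartridges"),
)
```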

The cartridge tells the agent what to do on this call. Voter survey. Volunteer recruitment. Donor thank-you. GOTV reminder. Persuasion call on a specific issue. Each one is a separate document. Each one is campaign-controlled.
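
A single cartridge might look like this, reusing the Cartridge type from the earlier sketch. It's a hypothetical volunteer-recruitment example: pure direction, with no identity, tone, or guardrails for an edit to destabilize.

```python
# A hypothetical volunteer-recruitment cartridge. Nothing in this
# file can change how the agent behaves, only what it pursues.

recruit_volunteers = Cartridge(
    objective="Invite supporters to Saturday's canvass launch",
    key_questions=[
        "Have you volunteered with the campaign before?",
        "Can you join us this Saturday morning?",
        "If Saturday doesn't work, which day does?",
    ],
)
```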

When messaging needs to shift — and in a political campaign, it always does — the team doesn't file a ticket. They open the folder. They update the cartridge. They deploy.
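
That loop stays boring because the system re-reads cartridges on demand. A sketch, assuming each cartridge is stored as one JSON file in the operations folder (the schema here is invented) and reusing the Cartridge type from above:

```python
# Loading a cartridge fresh from the campaign-controlled folder.
# The file format and path are assumptions for this sketch.

import json
from pathlib import Path

OPS_FOLDER = Path("operations/cartridges")  # campaign-controlled

def load_cartridge(name: str) -> Cartridge:
    # Re-read on demand, so a saved edit is live on the next call.
    # No ticket, no redeploy, no engineering in the loop.
    raw = json.loads((OPS_FOLDER / f"{name}.json").read_text())
    return Cartridge(objective=raw["objective"],
                     key_questions=raw["key_questions"])
```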

The agent knows how to run a conversation. The campaign decides what the conversation is about. That distinction sounds small. In practice, it's everything.

The goal was never to build smarter prompts.

The goal was to build a system people could actually operate.

That required a different question at the start. Not: how do we control what the AI says? But: who should control what — and where does that control actually live? When you answer that honestly, the architecture follows. And you stop building scripts that pretend to be systems.
