Vol. XXI · No. 255 · May 13, 2026

A runtime that decides.

A runtime that sits between every agent and every tool, deciding which actions ship and which get clarified, repaired, or held. One contract on the action path, across your entire stack.

Every proposed action is reasoned against the policies you wrote, the evidence the agent has gathered, and the prior actions in the same session. Grounded actions ship. Ungrounded ones are rewritten, clarified, repaired, or escalated, with structured feedback the model can read and act on. Commit-time becomes the moment of judgment.

I. Inside the box.

Not a rule list. A reasoning engine.

Salus is not a regex on tool arguments. Each proposed action is reasoned against four streams at once: the policies you authored, the evidence the agent has gathered in the current session, the state of prior actions and their outcomes, and the context of the user, channel, and intent. Out comes a structured verdict: allow, clarify, repair, rewrite, escalate, with the policy that fired and the evidence that mattered.

II. What it does.

Five capabilities, shipped today.

i.

Shadow replay.

Pull traces directly from the observability stack you already run — LangSmith, Langfuse, Datadog, Helicone, OpenTelemetry, or your own warehouse. Works against any tool-calling agent: OpenAI, Anthropic, LangChain, LangGraph, CrewAI, Retell, Vapi, custom. Salus replays the sessions, extracts the attempted actions, and shows you, side by side, what would have been allowed, clarified, repaired, rewritten, or escalated, and how the conversation would have ended. No customer is touched. No agent is rewired.
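Mechanically, a replay over exported traces is an offline loop. The sketch below is illustrative only: the trace shape and `check_action` are stand-ins we invented, not the Salus SDK or any tracing vendor's export format.

```python
# Each trace is a list of attempted tool calls exported from your tracing stack.
traces = [
    [{"tool": "refund", "args": {"amount": 500}},
     {"tool": "send_email", "args": {"to": "customer@example.com"}}],
]

def check_action(action):
    # Stand-in for the real policy engine: flag large refunds for escalation.
    if action["tool"] == "refund" and action["args"].get("amount", 0) > 100:
        return "escalate"
    return "allow"

# Side by side: what shipped vs. what would have been decided.
for session in traces:
    for action in session:
        print(f'{action["tool"]:>12}  shipped-as-is  ->  {check_action(action)}')
```

Because the loop only reads exported traces, no customer is touched and no agent is rewired.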

ii.

Shadow mode.

Run Salus alongside your live agent without changing what happens. Every proposed action is checked and logged, nothing is blocked. You see what would have been decided differently, before any customer does.
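In shadow mode the check sits beside the call, never in front of it. Roughly, as a wrapper (a hypothetical sketch; `check_action` and `log_verdict` are illustrative names, not real API):

```python
def with_shadow(tool_fn, check_action, log_verdict):
    """Wrap a tool call: check and log every action, block nothing."""
    def wrapped(action):
        verdict = check_action(action)   # what would have been decided
        log_verdict(action, verdict)     # recorded for later review
        return tool_fn(action)           # the live call proceeds regardless
    return wrapped

# Example: a no-op tool plus an in-memory log.
log = []
shadowed = with_shadow(lambda a: "done",
                       lambda a: "escalate" if a.get("risky") else "allow",
                       lambda a, v: log.append((a, v)))
result = shadowed({"risky": True})
```

The live path is unchanged; the log is where you see what would have been decided differently.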

iii.

Preflight enforcement.

On the action path. Safe actions ship. Unsafe ones come back with a structured verdict the model can read: which policy fired, which evidence was missing, what a grounded version would look like. The agent rewrites and retries; Salus re-checks. Anything that can't be grounded is escalated, not silently dropped. ClaimGuard runs the same loop on speech: what the agent is about to say, checked against what actually happened, before TTS.
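The check-rewrite-retry loop above can be sketched in a few lines. Everything here is an assumption for illustration: `check`, `rewrite_with_feedback`, and the retry cap are our names, not the product's.

```python
def preflight(action, check, rewrite_with_feedback, max_retries=2):
    """Ship grounded actions; feed verdicts back; escalate what can't be grounded."""
    for _ in range(max_retries + 1):
        verdict = check(action)
        if verdict["decision"] == "allow":
            return ("ship", action)
        if verdict["decision"] == "escalate":
            break
        # The agent reads the structured feedback and proposes a repaired action.
        action = rewrite_with_feedback(action, verdict["feedback"])
    return ("escalate", action)   # never silently dropped

# Example: an action becomes grounded after one repair round.
check = lambda a: ({"decision": "allow", "feedback": ""} if a.get("grounded")
                   else {"decision": "repair", "feedback": "cite the order record"})
fix = lambda a, fb: {**a, "grounded": True}
outcome, final = preflight({"tool": "refund"}, check, fix)
```

Note the terminal branch: when the retry budget runs out, the loop returns an escalation rather than dropping the action.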

iv.

Synthetic red-team.

Offline testing grounded in your real tool calls and agent objectives. Salus generates persona profiles — the confused customer, the abusive caller, the social engineer, the regulator-on-a-test-line, the persistent retrier — and stress-tests your agent against thousands of variants of the workflows that actually matter. Find the failure modes before production does. Every run produces a regression suite; every policy edit reruns it.
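Variant generation of this kind is a cross product of personas, workflows, and seeds. The sketch below is purely illustrative (the persona list is from the text; `make_probe` and the tiny counts are our assumptions, standing in for LLM-driven generation):

```python
import itertools

personas = ["confused customer", "abusive caller",
            "social engineer", "persistent retrier"]
workflows = ["refund request", "address change"]

def make_probe(persona, workflow, seed):
    # Stand-in for LLM-driven variant generation of an adversarial session.
    return f"[{persona}] variant {seed} of '{workflow}'"

# Thousands of variants in practice; personas x workflows x seeds, kept tiny here.
suite = [make_probe(p, w, s)
         for p, w, s in itertools.product(personas, workflows, range(3))]
# The suite is saved and rerun after every policy edit, as a regression gate.
```

The design choice worth noting is that the suite is persisted, so a policy edit is tested against every failure mode already found.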

v.

Observability & audit.

Every check, every allow, every refusal, every escalation, every retry, every supervisor handoff. Full session timelines with the policy that fired, the evidence cited, the verdict returned, the model's response, and the final outcome. Queryable, exportable, retained on your schedule, streamed to your SIEM if you want it there. Not a dashboard for after the fact. A ledger of who decided what, and why.
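A ledger of this kind means every decision is a queryable row. The schema below is hypothetical, our own illustration of the idea rather than the product's actual storage:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE decisions (
    session_id TEXT, action TEXT, verdict TEXT,
    policy_id TEXT, ts TEXT)""")
db.executemany("INSERT INTO decisions VALUES (?, ?, ?, ?, ?)", [
    ("s1", "refund",     "escalate", "refund-limit", "2026-05-13T10:00:00Z"),
    ("s1", "send_email", "allow",    None,           "2026-05-13T10:01:00Z"),
])
# Who decided what, and why: every escalation with the policy that fired.
rows = db.execute(
    "SELECT action, policy_id FROM decisions WHERE verdict = 'escalate'"
).fetchall()
```

The same rows can be exported on your retention schedule or streamed to a SIEM; the point is that the record is structured enough to answer "why" after the fact.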

III. Get in touch.

This is most of what we do.

We are early, and shipping fast. The honest way to learn the rest is a thirty-minute call. We will set Salus up against an agent of yours, in shadow mode, and you will see what it catches.

Book a demo
founders@usesalus.ai