Your agents are live.
Are you watching?

Most teams find out an agent broke when a customer tells them. Signal shows you every decision, every cost, every failure. Before anyone notices.

You shipped agents to production. Now what?

001

Silent Failures

Things break at 3am.

An agent hallucinates a response, loops 47 times, picks the wrong tool. Nobody knows until Monday.

002

Cost Leaks

$2,400 in tokens. One afternoon.

A single agent loop ran for 6 hours. No alert. No budget cap. The invoice was the first sign something went wrong.

003

No Audit Trail

"What did the agent do?" No one knows.

Something went wrong with a customer request. You check the logs. There are no logs. There's nothing to replay.

004

Drift

Last week it worked fine.

A model update changed the output format. A prompt tweak dropped accuracy by 12%. You found out two weeks later from a support ticket.

Track every agent in real time.


Three steps. Full picture.

01

Install the SDK

One line. Works with LangChain, CrewAI, AutoGen, or raw API calls. No config files.

$ pnpm add @signal/sdk
✓ Added @signal/sdk@2.1.0
$ signal.init({ key: "sk_..." })
✓ Signal connected
02

Agents report in

Every run, every tool call, every LLM request is captured automatically. No manual instrumentation.

checkout-agent: connected
support-agent: connected
research-agent: connected
data-pipeline: connected
email-drafter: connected
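Automatic capture like this usually works by wrapping the agent runner so every call is recorded as an event around the original function. A minimal sketch of that pattern in TypeScript — the event shape and names here are illustrative, not Signal's actual API (a real SDK would wrap async calls and ship events in batches):

```typescript
type TraceEvent = {
  agent: string;
  step: string;
  startedAt: number;
  durationMs: number;
  status: "ok" | "error";
};

const events: TraceEvent[] = [];

// Wrap any step so its timing and outcome are recorded automatically,
// without touching the step's own body or return value.
function traced<T extends unknown[], R>(
  agent: string,
  step: string,
  fn: (...args: T) => R
): (...args: T) => R {
  return (...args: T): R => {
    const startedAt = Date.now();
    try {
      const result = fn(...args);
      events.push({ agent, step, startedAt, durationMs: Date.now() - startedAt, status: "ok" });
      return result;
    } catch (err) {
      events.push({ agent, step, startedAt, durationMs: Date.now() - startedAt, status: "error" });
      throw err;
    }
  };
}

// Example: instrument a tool call. "lookupOrder" is a made-up tool for illustration.
const lookupOrder = traced("checkout-agent", "lookup_order", (id: string) => {
  return { id, status: "shipped" };
});
```

The caller's code path is unchanged; the wrapper only observes, which is why no manual instrumentation is needed inside the agent itself.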
03

You see everything

Traces, costs, quality scores, and alerts. Live. From the first run. Set a threshold, get a Slack ping.

Traces: 2,386
Cost today: $127
Avg eval: 0.94
Avg latency: 312ms
Alerts: 3
Uptime: 99.8%
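Threshold alerting reduces to comparing live metrics against user-set limits and emitting a notification when one is crossed. A sketch of that check — field names and thresholds are illustrative, not Signal's configuration schema:

```typescript
type Metrics = { costTodayUsd: number; avgLatencyMs: number; avgEval: number };
type Thresholds = { maxCostUsd: number; maxLatencyMs: number; minEval: number };

// Returns one alert message per crossed threshold; an empty array means all clear.
// A real system would route these to Slack, PagerDuty, or a webhook.
function checkThresholds(m: Metrics, t: Thresholds): string[] {
  const alerts: string[] = [];
  if (m.costTodayUsd > t.maxCostUsd) alerts.push(`cost $${m.costTodayUsd} exceeds cap $${t.maxCostUsd}`);
  if (m.avgLatencyMs > t.maxLatencyMs) alerts.push(`latency ${m.avgLatencyMs}ms exceeds ${t.maxLatencyMs}ms`);
  if (m.avgEval < t.minEval) alerts.push(`eval score ${m.avgEval} below floor ${t.minEval}`);
  return alerts;
}
```

With the dashboard numbers above and a $100 daily cost cap, only the cost alert would fire.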

What you get on day one.

[ 01 ]

Trace trees

Follow a request through 5 agents and 12 tool calls. See where it branched, where it waited, where it went wrong.

[ 02 ]

Alerts that matter

Latency spikes, cost jumps, quality drops. Slack, PagerDuty, or webhook. You pick the threshold, we watch.

[ 03 ]

Replay any run

Pick a run from last Tuesday. Step through it. See the input, the reasoning, the output. Find the bug in 4 minutes.

[ 04 ]

Built-in evals

Hallucination checks. Format validation. Safety scoring. Runs on every output automatically. Add your own in 10 lines.
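A custom eval is just a function from an output to a score. A sketch of a format-validation check in that few-lines spirit — the scoring shape is an assumption, not Signal's eval interface:

```typescript
type EvalResult = { name: string; score: number; pass: boolean };

// Scores 1 if the output parses as JSON and contains every required field, else 0.
function jsonFormatEval(output: string, requiredFields: string[]): EvalResult {
  try {
    const parsed = JSON.parse(output);
    const missing = requiredFields.filter((f) => !(f in parsed));
    const score = missing.length === 0 ? 1 : 0;
    return { name: "json_format", score, pass: score === 1 };
  } catch {
    // Not valid JSON at all: automatic fail.
    return { name: "json_format", score: 0, pass: false };
  }
}
```

Run on every output, a check like this catches format regressions the moment a model update changes what the agent emits.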

[ 05 ]

Cost breakdown

This agent costs $0.03 per run. That one costs $1.20. This run used GPT-4o for 14 calls when GPT-4o-mini would do.
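Per-run cost is tokens multiplied by the model's per-token price, summed over every LLM call in the run. A sketch with illustrative prices — these are placeholder numbers, not current list prices for any provider:

```typescript
type LlmCall = { model: string; inputTokens: number; outputTokens: number };

// Illustrative per-1M-token prices in USD. Real prices vary by model and date.
const pricePerMTok: Record<string, { in: number; out: number }> = {
  "gpt-4o": { in: 2.5, out: 10 },
  "gpt-4o-mini": { in: 0.15, out: 0.6 },
};

// Total cost of one run, in USD, across all of its LLM calls.
function runCostUsd(calls: LlmCall[]): number {
  return calls.reduce((sum, c) => {
    const p = pricePerMTok[c.model];
    return sum + (c.inputTokens * p.in + c.outputTokens * p.out) / 1_000_000;
  }, 0);
}
```

At these sample prices the same token volume costs roughly 16x more on the larger model, which is the kind of gap a per-run breakdown makes visible.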

[ 06 ]

Structured logs

Every event is structured JSON. Filter by agent, model, status, or custom tags. Search across 50K concurrent runs in under a second.
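Once every event is structured, filtering is a matter of field predicates rather than text search. A sketch of that query model — the field names are illustrative, not Signal's event schema:

```typescript
type LogEvent = {
  agent: string;
  model: string;
  status: "ok" | "error";
  tags: string[];
};

// Keep only events matching every provided field; omitted fields match anything.
function filterEvents(
  events: LogEvent[],
  query: Partial<Pick<LogEvent, "agent" | "model" | "status">> & { tag?: string }
): LogEvent[] {
  return events.filter(
    (e) =>
      (query.agent === undefined || e.agent === query.agent) &&
      (query.model === undefined || e.model === query.model) &&
      (query.status === undefined || e.status === query.status) &&
      (query.tag === undefined || e.tags.includes(query.tag))
  );
}
```

At scale the same predicates would be pushed down to an indexed store rather than scanned in memory, but the query shape is the same.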

Agents monitored: across 200+ teams
Traces per day: and growing
SDK overhead: under 3ms
Uptime: SLA guaranteed

Common questions.

What is Signal?
Signal is an observability tool for AI agents. You add two lines of code to your agent and get traces, cost tracking, quality scores, and alerts. Think Datadog, but built specifically for agent workflows.

Which frameworks does it support?
All of them. LangChain, CrewAI, AutoGen, custom Python scripts, raw API calls. The SDK wraps your agent runner. It does not care what's inside.

How much overhead does the SDK add?
Under 3ms per trace. Events are batched and sent asynchronously. Your agent never waits for Signal. We tested it with 50K concurrent runs and the overhead was the same.

Is my data secure?
Encrypted in transit and at rest. SOC 2 compliant. We offer data residency in the US and EU. If you need full control, there's a self-hosted option.

How much does it cost?
Free up to 10K traces per month. After that, pay per trace. Early access members get the first 6 months free on any tier.

When can I start?
Now, for early access teams. Public launch is Q3 2026. If you're running agents in production today, you can start this week.

Your agents are running right now.
What are they doing?

Get Early Access

Free to start. Takes 2 minutes.