KT
Dashboard · Executive view

The agent is working for the business.

Last 30 days. Volume, speed, cost, and capacity returned to the team. Customer-satisfaction numbers below are proxies — derived from our own data, not customer survey responses.

Volume handled: 31 · 3 drafted · 0 sent
Send-through rate: 0% (sent by reviewer / drafted)
Avg first reply: 4m (SLA target 60m · hit 100%)
Reviewer hours saved: 0.2 (3 drafts × 4m each)
Inbound volume · 30 days
Demand signal — is volume growing, stable, or spiking?
[Chart: inbound / day, 28 Mar – 26 Apr]
Cost so far · daily ceiling $5
Today: $0.02
This period: $0.02
Projected monthly: $0.08

Per-draft cost assumes Haiku 4.5 (classifier) + Sonnet 4.6 (drafter) model-family pricing. Cache-read tokens are counted as regular input for demo-grade accuracy.
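The per-draft figure can be sketched as a simple token-price sum. The dollar rates below are placeholder assumptions, not published pricing, and cache-read tokens are folded into regular input to match the dashboard's simplification:

```python
# Demo-grade cost model: classifier (Haiku) + drafter (Sonnet) per draft.
# PRICES are assumed $/million-token rates — substitute your actual pricing.
PRICES = {
    "haiku":  {"input": 1.00, "output": 5.00},
    "sonnet": {"input": 3.00, "output": 15.00},
}

def usd(model: str, input_tokens: int, output_tokens: int,
        cache_read_tokens: int = 0) -> float:
    """Cost in dollars; cache reads billed as ordinary input tokens."""
    p = PRICES[model]
    return ((input_tokens + cache_read_tokens) * p["input"]
            + output_tokens * p["output"]) / 1_000_000

def draft_cost(cls_in: int, cls_out: int, drf_in: int, drf_out: int) -> float:
    """One draft = one classifier call plus one drafter call."""
    return usd("haiku", cls_in, cls_out) + usd("sonnet", drf_in, drf_out)

# e.g. a short classify step plus a longer drafting step
print(round(draft_cost(800, 20, 2_500, 400), 4))  # → 0.0144
```

At these assumed rates a draft lands around a cent or two, which is consistent with the ~$0.02/day shown above.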

Last 7 days cost
Classifier (Haiku) vs. drafter (Sonnet)
[Chart: daily cost, 04/20 – 04/26 · Haiku (classifier) vs. Sonnet (drafter)]
Capacity returned to the team
Reviewer time that would have been spent drafting from scratch
Returned: 0.2 hours
Drafts produced: 3
Min / draft (assumed): 4m
Avg latency: 4m

The minutes-per-reply assumption is adjustable on the endpoint. A conservative 4 min / draft works out to 0.2 h over the period, effectively 0.0 h/day of reviewer capacity at current volume.
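The card's arithmetic is just drafts × minutes-per-draft. A minimal sketch, where the 4-minute figure and 30-day window are the assumptions stated above:

```python
# Capacity returned = drafts × minutes-per-draft, expressed in hours.
def hours_returned(drafts: int, minutes_per_draft: float = 4.0) -> float:
    return drafts * minutes_per_draft / 60.0

total = hours_returned(3)   # 3 drafts × 4 min = 0.2 h
per_day = total / 30        # spread over the 30-day window
print(f"{total:.1f} h total, {per_day:.1f} h/day")  # → 0.2 h total, 0.0 h/day
```

Raising `minutes_per_draft` (e.g. to 10 for longer replies) scales the headline number linearly.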

Customer-satisfaction proxies
Directional signals — not a survey-based CSAT / NPS
Angry or rude on arrival: 0%

% of inbound with angry/rude tone. Lower = customers arrive calmer.

Reviewer-marked 'good' draft rate: 0%

% of reviewed drafts the reviewer flagged as good. Rising = agent is learning the right voice.

Re-open rate: 15%

% of threads where the customer wrote back after our reply. High = the first reply didn't resolve the issue.
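Since all three proxies are derived from our own thread data rather than surveys, they can be computed in one pass. This is a sketch assuming a simple list-of-dicts shape; the field names are illustrative, not the actual schema:

```python
# Derive the three satisfaction proxies from per-thread records.
def proxy_metrics(threads):
    """Each thread: 'tone_angry' (bool), 'reviewer_verdict'
    ('good' / 'bad' / None if unreviewed), 'replied' and 'reopened' (bool)."""
    def pct(hits, total):
        return 0 if total == 0 else round(100 * hits / total)

    reviewed = [t for t in threads if t["reviewer_verdict"] is not None]
    replied = [t for t in threads if t["replied"]]
    return {
        "angry_on_arrival": pct(sum(t["tone_angry"] for t in threads),
                                len(threads)),
        "good_draft_rate": pct(sum(t["reviewer_verdict"] == "good"
                                   for t in reviewed), len(reviewed)),
        "reopen_rate": pct(sum(t["reopened"] for t in replied), len(replied)),
    }

sample = [
    {"tone_angry": False, "reviewer_verdict": "good", "replied": True,  "reopened": False},
    {"tone_angry": True,  "reviewer_verdict": None,   "replied": True,  "reopened": True},
    {"tone_angry": False, "reviewer_verdict": "bad",  "replied": False, "reopened": False},
]
print(proxy_metrics(sample))
```

Note the denominators differ: tone is over all inbound, the good-draft rate only over reviewed drafts, and the re-open rate only over threads we actually replied to.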

Roadmap

Real CSAT / NPS requires post-send survey capture. Planned for V2 — one-click inline rating in the reply itself.

What's on customers' minds
Top categories in the last 30 days
  • refund_request: 1 · 33%
  • sales: 1 · 33%
  • account_access: 1 · 33%