B2B LLM digital twin
Mock helpdesk or payroll assistant: natural language on the left, fictional business rows on the right. Each reply triggers a server-side carbon-llm track call (token metadata only) when CARBON_LLM_API_KEY is set — use a test key for day-to-day, live for a final smoke check. Optional: OPENAI_API_KEY or GEMINI_API_KEY for real model replies.
How this maps to your stack
After each LLM response, your backend reads usage from the provider and calls POST /api/v1/track with tenant_id = end customer. This demo runs that call server-side using CARBON_LLM_API_KEY — never exposed to the browser. See Platform integrations.

Production key check (server env)
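The flow above can be sketched as a small server-side helper. The endpoint path, env vars, and tenant_id field come from this page; the remaining payload field names and the Bearer auth header are assumptions for illustration.

```typescript
// Minimal sketch of the server-side track call after an LLM reply.
// Assumed: payload fields other than tenant_id, and Bearer auth.
type TrackEvent = {
  tenant_id: string; // the end customer, not the ISV
  model: string;
  input_tokens: number;
  output_tokens: number;
};

function buildTrackEvent(
  tenantId: string,
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number },
): TrackEvent {
  // Only token counts are forwarded — no message content.
  return {
    tenant_id: tenantId,
    model,
    input_tokens: usage.prompt_tokens,
    output_tokens: usage.completion_tokens,
  };
}

async function track(event: TrackEvent): Promise<void> {
  const base = process.env.CARBON_LLM_BASE_URL ?? "https://carbon-llm.com";
  await fetch(`${base}/api/v1/track`, {
    method: "POST",
    headers: {
      // Key stays in server env; it is never sent to the browser.
      Authorization: `Bearer ${process.env.CARBON_LLM_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(event),
  });
}
```

Keeping `buildTrackEvent` pure makes the payload easy to unit-test without hitting the network.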
Set CARBON_LLM_API_KEY in .env (never commit it). Prefer an isv_test_sk_… key for repeated tries; use isv_live_sk_… only for a final smoke test against production (CARBON_LLM_BASE_URL=https://carbon-llm.com) — live quota applies. Confirm events under the chosen tenant_id in the dashboard.

Natural-language layer
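A startup guard can make the test/live distinction explicit. The isv_test_sk_… and isv_live_sk_… prefixes come from this page; the guard itself is a sketch, not part of the product.

```typescript
// Sketch: classify the configured key so live-quota use is deliberate.
type KeyMode = "test" | "live" | "invalid";

function keyMode(key: string | undefined): KeyMode {
  if (key?.startsWith("isv_test_sk_")) return "test";
  if (key?.startsWith("isv_live_sk_")) return "live";
  return "invalid";
}

const mode = keyMode(process.env.CARBON_LLM_API_KEY);
if (mode === "invalid") {
  console.warn("CARBON_LLM_API_KEY missing or malformed — tracking disabled.");
} else if (mode === "live") {
  console.warn("Live key detected: tracked events count against production quota.");
}
```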
Pick a scenario and an end-customer tenant. Messages stay on this page; the server calls /v1/track with token metadata only.

Mock tickets appear on the right — ask for triage or reply drafts in plain language.
Try: “What should we reply to the export timeout?” or “Any payroll anomalies this month?”
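The "token metadata only" claim above can be made concrete: before anything leaves the server, the provider response is reduced to its token counts. The response shape here mirrors a typical chat-completion `usage` object; treating it as the exact provider schema is an assumption.

```typescript
// Sketch: discard the reply text, keep only the usage counts.
function tokenMetadataOnly(resp: {
  choices: { message: { content: string } }[];
  usage: { prompt_tokens: number; completion_tokens: number };
}): { prompt_tokens: number; completion_tokens: number } {
  // The message content stays inside the app; only counts are forwarded.
  const { prompt_tokens, completion_tokens } = resp.usage;
  return { prompt_tokens, completion_tokens };
}
```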
Mock business data
Fictional rows for UI context only.
- TKT-1042 — Bulk export times out after 2 min (open · P2)
- TKT-1038 — SSO redirect loop on Safari (waiting · P1)