B2B SaaS: per-customer AI emissions without prompt logging
When your product embeds ChatGPT-class features, buyers and regulators ask how much CO₂e it creates, often broken down by customer or workspace. carbon-llm maps model id and token counts to documented coefficients, keyed by tenant_id so finance and sustainability teams can align with how you already bill.
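To make the mapping concrete, here is a minimal sketch of the idea: a model id and token counts resolve to an emissions estimate via a coefficient table. The coefficient values and table name below are placeholders for illustration, not the documented figures carbon-llm ships.

```python
# Hypothetical coefficients (gCO2e per 1,000 tokens); illustrative only,
# not carbon-llm's documented values.
GCO2E_PER_1K_TOKENS = {
    "gpt-4o": 0.5,
    "gpt-4o-mini": 0.1,
}

def estimate_g_co2e(model: str, input_tokens: int, output_tokens: int) -> float:
    """Map model id + token counts to an emissions estimate in grams CO2e."""
    coeff = GCO2E_PER_1K_TOKENS[model]
    return (input_tokens + output_tokens) / 1000 * coeff

print(estimate_g_co2e("gpt-4o", 1200, 300))  # 0.75
```

The point is that token counts are the activity data; no prompt or completion text is needed anywhere in the calculation.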
Scope 3 for purchased or embedded AI usually sits under purchased goods and services, or in use-of-sold-product narratives, depending on your boundary. carbon-llm stays consistent with how LLM inference maps to GHG Protocol categories without replacing your materiality assessment.
Per-tenant totals match how ISVs think about cost and support: the same tenant_id you send to POST /track rolls up into monthly exports and customer-facing share links when you need proof in a renewal cycle.
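The rollup itself is ordinary aggregation on that shared key. A sketch under stated assumptions: the event fields (`tenant_id`, `timestamp`, `g_co2e`) are illustrative names, not a documented carbon-llm export schema.

```python
from collections import defaultdict

def monthly_totals(events):
    """Sum gCO2e per (tenant_id, YYYY-MM), so sustainability and finance
    read the same per-customer numbers keyed the way billing already is."""
    totals = defaultdict(float)
    for e in events:
        month = e["timestamp"][:7]  # "2024-05-03T…" -> "2024-05"
        totals[(e["tenant_id"], month)] += e["g_co2e"]
    return dict(totals)

events = [
    {"tenant_id": "acme",   "timestamp": "2024-05-03T10:00:00Z", "g_co2e": 1.2},
    {"tenant_id": "acme",   "timestamp": "2024-05-21T09:30:00Z", "g_co2e": 0.8},
    {"tenant_id": "globex", "timestamp": "2024-05-11T14:00:00Z", "g_co2e": 2.5},
]
print(monthly_totals(events))
```

Because the key is the same one used in billing, a monthly export can sit next to an invoice line without reconciliation work.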
- Engineering — wire model + usage from OpenAI-style responses (or your gateway) into /track; no change to your prompt storage policy.
- Product & CS — optional share URLs and PDFs for enterprise accounts that ask for evidence in onboarding.
- Sustainability / Finance — same numbers as engineering; methodology PDF reduces back-and-forth with auditors compared to spreadsheet estimates.
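The engineering step above can be sketched as follows. The response shape (`model`, `usage.prompt_tokens`, `usage.completion_tokens`) follows the OpenAI chat completions format; the event field names forwarded to /track are assumptions for illustration.

```python
from typing import Any

def track_event(response: dict[str, Any], tenant_id: str) -> dict[str, Any]:
    """Extract only model id and token counts from an OpenAI-style response.
    No prompt or completion text leaves this function, so the existing
    prompt storage policy is untouched."""
    usage = response["usage"]
    return {
        "model": response["model"],
        "input_tokens": usage["prompt_tokens"],
        "output_tokens": usage["completion_tokens"],
        "tenant_id": tenant_id,  # the same key used for billing rollups
    }

# Example OpenAI-style response (content elided; it is never forwarded).
resp = {
    "model": "gpt-4o-mini",
    "usage": {"prompt_tokens": 820, "completion_tokens": 130},
    "choices": [{"message": {"content": "..."}}],
}
print(track_event(resp, "tenant-42"))
```

The resulting dict is what would be POSTed to /track; the request itself is omitted here since it is ordinary HTTP.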
carbon-llm is not a full Scope 3 suite for every procurement category, nor a replacement for generalist carbon accounting platforms. It is a focused layer for LLM inference, where token-level activity data exists: the line item missing from many horizontal tools.