
B2B SaaS: per-customer AI emissions without prompt logging

When your product embeds ChatGPT-class features, buyers and regulators ask how much CO₂e those features create — often broken down by customer or workspace. carbon-llm maps model ID and token counts to documented emissions coefficients, keyed by tenant_id, so finance and sustainability teams can report along the same lines you already bill.
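The token-to-CO₂e mapping can be sketched in a few lines. The coefficient values below are placeholders for illustration, not carbon-llm's documented figures, and the function name is hypothetical:

```python
# Per-model coefficients in kg CO2e per 1,000 tokens.
# These numbers are illustrative placeholders only.
COEFFICIENTS_KG_PER_1K_TOKENS = {
    "gpt-4o": 0.000432,       # hypothetical value
    "gpt-4o-mini": 0.000087,  # hypothetical value
}

def estimate_kg_co2e(model: str, input_tokens: int, output_tokens: int) -> float:
    """Map a model ID and token counts to an estimated footprint."""
    coeff = COEFFICIENTS_KG_PER_1K_TOKENS[model]
    return (input_tokens + output_tokens) / 1000 * coeff
```

The point is that the activity data (tokens) is already in every API response, so no extra instrumentation is needed.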

What procurement and enterprise customers expect
RFPs and security questionnaires increasingly include AI sustainability line items: inference footprint, data residency, and whether prompts are retained. carbon-llm answers the footprint question with activity data engineers already have (token counts), not subjective “we use green AI” claims.

Scope 3 for purchased or embedded AI usually sits under purchased goods and services (Category 1) or in use-of-sold-products narratives (Category 11), depending on your boundary — we stay methodology-consistent with how LLM inference maps to GHG Protocol categories, without replacing your materiality assessment.

Per-tenant totals match how ISVs already think about cost and support: the tenant_id key you send to POST /track rolls up into monthly exports and customer-facing share links when you need proof during a renewal cycle.
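The rollup itself is a plain group-by on tenant and month. A minimal sketch, assuming each tracked event carries tenant_id, an ISO 8601 timestamp, and an estimated kg_co2e (the field names here are illustrative, not a documented schema):

```python
from collections import defaultdict

def monthly_totals(events: list[dict]) -> dict:
    """Sum estimated emissions per (tenant_id, YYYY-MM) bucket."""
    totals: dict = defaultdict(float)
    for e in events:
        month = e["timestamp"][:7]  # "YYYY-MM" prefix of an ISO timestamp
        totals[(e["tenant_id"], month)] += e["kg_co2e"]
    return dict(totals)
```

Because the bucket key is the same tenant_id used for billing, finance and sustainability reports stay in lockstep by construction.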

Who connects the pipes
  • Engineering — wire model + usage from OpenAI-style responses (or your gateway) into /track; no change to prompt storage policy.
  • Product & CS — optional share URLs and PDFs for enterprise accounts that ask for evidence in onboarding.
  • Sustainability / Finance — same numbers as engineering; methodology PDF reduces back-and-forth with auditors compared to spreadsheet estimates.
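The engineering step above can be sketched as a small adapter: only the model ID, token usage, and your tenant key are forwarded, so prompt and completion text never leave your stack. The response shape mirrors an OpenAI-style chat completion; the payload field names are assumptions for illustration, not carbon-llm's documented /track schema:

```python
def to_track_payload(response: dict, tenant_id: str) -> dict:
    """Extract only model + usage from an OpenAI-style response.

    Note what is absent: no prompt, no completion text — the adapter
    requires no change to your prompt storage policy.
    """
    usage = response["usage"]
    return {
        "tenant_id": tenant_id,
        "model": response["model"],
        "input_tokens": usage["prompt_tokens"],
        "output_tokens": usage["completion_tokens"],
    }
```

The same adapter works behind a gateway: wherever the usage block surfaces, the payload construction is identical.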
What this is not

A full Scope 3 suite for every procurement category, or a replacement for generalist carbon accounting platforms. It is a focused layer for LLM inference where token-level activity data exists — the line item missing from many horizontal tools.