
Consultancies & IT services firms (ESN): client-ready LLM emissions without spreadsheet guesswork

Your clients' suppliers now ask for AI sustainability lines in questionnaires. carbon-llm gives you token-based activity data, documented coefficients, and exportable evidence — the same stack product teams use when they answer RFPs themselves.

Where this shows up in your engagements
CSRD double materiality, ESRS E1 climate disclosures, and procurement “green IT” questionnaires all converge on one ask: show your work for generative AI — not marketing copy, but activity data and factors auditors can trace.

Supplier / RFP support — Help clients answer lines on inference footprint, model choice, and region when their own vendors use OpenAI-class APIs. Point to methodology and token-based estimates instead of generic “AI is X% of cloud” slides.

Scope 3 for purchased AI — Align narratives with GHG Protocol Scope 3 framing for LLMs so Category 1 / upstream service stories stay consistent between your deck and what the client’s sustainability team files.

Evidence your auditors expect

ESRS E1 and GHG reporting reward documented methodologies and reproducible numbers. For a focused read on what to prepare, see ESRS E1: auditor evidence checklist for LLM emissions.

  • Model id + token counts as activity data (no prompt content).
  • Versioned coefficients and methodology PDF for the file.
  • Optional API integration if the client already centralizes usage in a gateway.
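To make "token-based activity data" concrete, here is a minimal sketch in Python of how an estimate can be derived from model id and token counts alone. The record shape, model name, and coefficient values below are illustrative placeholders invented for this example, not carbon-llm's actual schema or emission factors; a client-ready number would come from the versioned, documented coefficients described above.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """Activity data: model id and token counts only -- no prompt content."""
    model_id: str
    input_tokens: int
    output_tokens: int

# Placeholder factors in kgCO2e per 1,000 tokens, keyed by model id.
# These values are hypothetical; real factors must be versioned and cited.
FACTORS_V1 = {
    "example-large-model": {"input": 0.00002, "output": 0.00006},
}

def estimate_kg_co2e(record: UsageRecord, factors: dict) -> float:
    """Multiply token counts by the per-1k-token factors for the model."""
    f = factors[record.model_id]
    return (record.input_tokens / 1000) * f["input"] + (
        record.output_tokens / 1000
    ) * f["output"]

record = UsageRecord("example-large-model", input_tokens=12_000, output_tokens=3_000)
print(estimate_kg_co2e(record, FACTORS_V1))  # 0.00042 kgCO2e with these placeholder factors
```

Because the inputs are just counts and a documented factor table, an auditor can reproduce every line of the calculation without ever seeing prompt text.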

What this is not

A replacement for your firm’s climate strategy practice or for full enterprise carbon accounting suites. carbon-llm is a narrow layer for LLM inference where token metadata exists — the piece that horizontal tools often skip.