Track LLM Carbon Emissions in Next.js — Step-by-Step Guide

Instrument your Next.js app to measure and log the CO₂ footprint of every OpenAI, Anthropic, or Mistral API call using carbon-llm. Zero prompt logging, ESRS E1-ready.

Why track LLM emissions in Next.js?

The EU Corporate Sustainability Reporting Directive (CSRD) and its ESRS E1 standard require companies to disclose Scope 3 greenhouse gas emissions — including purchased digital services. LLM inference is categorised under Scope 3 Category 1 (purchased goods and services) or Category 11 (use of sold products) depending on your business model.

Next.js is one of the most widely used frameworks for production AI applications: server actions, API routes, and streaming endpoints all call LLM providers. Instrumentation belongs at that layer, not in the client, because the client never sees token counts.

carbon-llm is designed for exactly this: one POST per LLM call, no prompt data, an immediate return, and asynchronous batching, so it never adds latency to your responses.

Installation

Add the SDK and set your API key:

Terminal

npm install @carbon-llm/sdk
# or
pnpm add @carbon-llm/sdk

Environment variables

Add your API key to .env.local. Next.js only exposes environment variables prefixed with NEXT_PUBLIC_ to the browser, so your key stays server-side.

.env.local

CARBON_LLM_API_KEY=clm_live_xxxxxxxxxxxx
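If you prefer to fail fast at boot rather than on the first tracked request, a small guard helps. This is a helper you would add yourself, not part of the SDK; requireEnv is a name invented for the sketch:

```typescript
// Return the value of a required environment variable, or throw at module
// load so a missing key is caught at deploy time, not on the first request.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`${name} is not set; add it to .env.local`)
  return value
}
```

You could then construct the client with `new CarbonLLM({ apiKey: requireEnv("CARBON_LLM_API_KEY") })` instead of the non-null assertion used below.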

Server action example (App Router)

Instrument a Next.js Server Action that calls OpenAI. The track() call fires after the response so it never blocks the user:

app/actions/chat.ts

"use server"
import OpenAI from "openai"
import { CarbonLLM } from "@carbon-llm/sdk"

const openai = new OpenAI()
const carbon = new CarbonLLM({ apiKey: process.env.CARBON_LLM_API_KEY! })

export async function chat(prompt: string) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  })

  // Fire-and-forget — never awaited, never blocks the user
  carbon.track({
    model: response.model,
    inputTokens: response.usage?.prompt_tokens ?? 0,
    outputTokens: response.usage?.completion_tokens ?? 0,
  }).catch(() => {}) // silent on network errors

  return response.choices[0].message.content
}

API route example (Pages Router)

If you use the Pages Router, the pattern is identical: call track() after collecting the usage object. (An Edge route handler works the same way; only the request and response types differ.)

pages/api/chat.ts

import type { NextApiRequest, NextApiResponse } from "next"
import Anthropic from "@anthropic-ai/sdk"
import { CarbonLLM } from "@carbon-llm/sdk"

const anthropic = new Anthropic()
const carbon = new CarbonLLM({ apiKey: process.env.CARBON_LLM_API_KEY! })

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { prompt } = req.body as { prompt: string }

  const message = await anthropic.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  })

  carbon.track({
    model: message.model,
    inputTokens: message.usage.input_tokens,
    outputTokens: message.usage.output_tokens,
  }).catch(() => {})

  // Narrow the content block type instead of casting blindly
  const first = message.content[0]
  res.json({ text: first.type === "text" ? first.text : "" })
}

Streaming responses

With streaming, token counts are only available after the stream completes. Accumulate them and call track() once the stream closes:

app/api/chat/route.ts — streaming

import { openai } from "@ai-sdk/openai"
import { streamText } from "ai"
import { CarbonLLM } from "@carbon-llm/sdk"

const carbon = new CarbonLLM({ apiKey: process.env.CARBON_LLM_API_KEY! })

export const runtime = "edge"

export async function POST(req: Request) {
  const { messages } = await req.json()

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    onFinish({ usage }) {
      // Called once the stream completes — safe to track
      carbon.track({
        model: "gpt-4o",
        inputTokens: usage.promptTokens,
        outputTokens: usage.completionTokens,
      }).catch(() => {})
    },
  })

  return result.toDataStreamResponse()
}
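If your provider client exposes a raw async iterator rather than an onFinish hook, the same accumulate-then-track idea looks like the sketch below. The chunk shape mirrors OpenAI's stream_options: { include_usage: true } mode, where a final chunk carries the usage totals; consumeStream is our own helper, not a library function:

```typescript
type StreamChunk = {
  choices: { delta: { content?: string } }[]
  usage?: { prompt_tokens: number; completion_tokens: number } | null
}

// Consume a token stream, forwarding text as it arrives and capturing the
// trailing usage chunk so emissions can be tracked after the stream closes.
async function consumeStream(
  stream: AsyncIterable<StreamChunk>,
  onText: (text: string) => void,
  onUsage: (u: { inputTokens: number; outputTokens: number }) => void,
): Promise<void> {
  for await (const chunk of stream) {
    const text = chunk.choices[0]?.delta?.content // usage-only chunks have no choices
    if (text) onText(text)
    if (chunk.usage) {
      onUsage({
        inputTokens: chunk.usage.prompt_tokens,
        outputTokens: chunk.usage.completion_tokens,
      })
    }
  }
}
```

Once consumeStream returns, pass the captured usage to carbon.track() exactly as in the non-streaming examples.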

Multi-tenant: track per customer

If you are building a SaaS and need per-customer emission breakdowns (required for multi-client CSRD reports), pass a tenantId:

Per-tenant tracking

carbon.track({
  model: response.model,
  inputTokens: response.usage?.prompt_tokens ?? 0,
  outputTokens: response.usage?.completion_tokens ?? 0,
  tenantId: session.organizationId, // your own customer/org identifier
}).catch(() => {})
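To guarantee no call site forgets the tenant, you can bind it once with a small wrapper. forTenant is our own helper, not an SDK export, and the track parameter stands in for carbon.track():

```typescript
type TrackEvent = {
  model: string
  inputTokens: number
  outputTokens: number
  tenantId?: string
}

// Return a tracker with the tenant pre-bound, keeping the same
// fire-and-forget semantics as the examples above.
function forTenant(
  track: (e: TrackEvent) => Promise<void>,
  tenantId: string,
): (e: Omit<TrackEvent, "tenantId">) => void {
  return (e) => {
    void track({ ...e, tenantId }).catch(() => {}) // silent on network errors
  }
}
```

Usage would look like `const trackForOrg = forTenant(carbon.track.bind(carbon), session.organizationId)`, after which call sites only supply model and token counts.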

Verify and export

After instrumenting, open the carbon-llm dashboard and trigger a real LLM call from your app. You should see the event appear in the Events feed within a few seconds.

Once data is accumulating, go to Reports → Generate ESRS E1 report. Choose a date range, select your tenant(s), and export as PDF or JSON. The PDF report includes the emission factor sources, methodology, and Scope 3 attribution required by ESRS E1.

Ready to start tracking?

Free up to 100 000 events/month — no card required.