This post documents how we use the Cursor MCP server "user-linkup" (linkup-search with depth: deep) to refresh EU AI Act and CSRD angles for LLM inference metering, then cross-check timelines against the Commission's official AI Act Service Desk before publishing. Not legal advice.
1. Research workflow (Linkup + primary sources)
In authoring sessions, we run Linkup searches for natural-language questions (e.g. August 2026 application dates, GPAI transparency, penalty caps). If the MCP server returns Unauthorized, Linkup credentials are not configured in Cursor; in that case we still publish by anchoring claims to primary URLs (the Commission timeline, EUR-Lex). The research log lives at docs/linkup-research-eu-ai-csrd-llm-metering.md in the repository clone.
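The search step above can be sketched as a plain HTTP call. This is a minimal sketch, not carbon-llm's actual tooling: the endpoint URL, parameter names (`q`, `depth`, `outputType`), and the `LINKUP_API_KEY` variable are assumptions about Linkup's REST API, so verify them against Linkup's own documentation before use.

```python
import json
import urllib.request

# Assumed Linkup REST endpoint; confirm against Linkup's API docs.
LINKUP_ENDPOINT = "https://api.linkup.so/v1/search"


def build_search_payload(question: str, depth: str = "deep") -> dict:
    """Build the JSON body for a natural-language search.

    Parameter names mirror the post's description (depth: deep);
    they are assumptions, not confirmed field names.
    """
    return {"q": question, "depth": depth, "outputType": "sourcedAnswer"}


def linkup_search(question: str, api_key: str) -> dict:
    """POST the query; a missing/invalid key raises HTTPError 401 (Unauthorized)."""
    req = urllib.request.Request(
        LINKUP_ENDPOINT,
        data=json.dumps(build_search_payload(question)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The 401 path mirrors the Unauthorized case described above: when credentials are absent, the call fails and the workflow falls back to primary sources.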
2. What the official timeline highlights
The European Commission's AI Act Service Desk publishes a phased implementation timeline. As of our review, it describes a staggered entry into application, reaching full roll-out on 2 August 2027, with major milestones including:
- 2 August 2025 — obligations for general-purpose AI models (Chapter V) and EU governance infrastructure; Member States designate authorities.
- 2 August 2026 — the majority of the Regulation's rules begin to apply, including high-risk AI systems listed in Annex III (where applicable), Article 50 transparency rules for certain systems, innovation measures (e.g. regulatory sandboxes), and enforcement at national and EU level. Treat Commission dates as your planning baseline; sector and product context changes outcomes.
- 2 August 2027 — rules for high-risk AI embedded in regulated products (as described on the Service Desk). The Digital Omnibus may adjust backstops — confirm against current Commission FAQs.
Penalties are tiered by infringement type in the Regulation (see the penalty articles in the consolidated text). Your counsel maps facts to caps; we do not quote fines as legal advice here.
3. Two laws, one meter: why LLM token evidence shows up twice
CSRD / ESRS E1 asks organisations for structured greenhouse gas data and transparent assumptions (activity × factor, boundaries, assurance). The AI Act asks different questions: trustworthy AI, documentation, transparency to users, and (for certain actors) resource and documentation expectations around general-purpose and high-risk systems.
The overlap for software teams is operational: if you already meter model IDs and token counts per tenant for carbon reporting (Scope 3–style narratives), that same structured usage log supports governance and diligence questions about how generative AI is consumed in production, without conflating carbon estimates with AI conformity.
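One way to let a single meter serve both audiences is a flat, append-only event record that captures tenant, model, and token counts once. The field names below are illustrative, not carbon-llm's actual schema:

```python
import datetime
from dataclasses import dataclass


@dataclass(frozen=True)
class UsageEvent:
    """One metered LLM call; the same record feeds carbon estimates
    and governance/diligence exports."""
    tenant_id: str
    model_id: str
    input_tokens: int
    output_tokens: int
    timestamp_utc: str  # ISO 8601

    @property
    def total_tokens(self) -> int:
        return self.input_tokens + self.output_tokens


def record(tenant_id: str, model_id: str,
           input_tokens: int, output_tokens: int) -> UsageEvent:
    """Stamp the event with a UTC timestamp at metering time."""
    return UsageEvent(
        tenant_id,
        model_id,
        input_tokens,
        output_tokens,
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
    )
```

Because the record is immutable and timestamped, the same log line can later be aggregated into an emissions estimate or exported as usage evidence, without maintaining two meters.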
The Methodology page explains how carbon-llm turns tokens into labelled CO₂e; the Compliance export packages events for review workflows (sign-in required).
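The activity × factor shape mentioned in section 3 reduces to simple arithmetic. This is a sketch only: the energy-per-token and grid-intensity numbers below are hypothetical placeholders, not carbon-llm's published factors or methodology.

```python
def tokens_to_co2e_grams(total_tokens: int,
                         kwh_per_1k_tokens: float,
                         grid_gco2e_per_kwh: float) -> float:
    """activity (tokens) x factor (energy per token, then grid intensity)
    -> grams of CO2e. Both factors are inputs, not constants, so boundary
    and assumption changes stay transparent."""
    energy_kwh = (total_tokens / 1000.0) * kwh_per_1k_tokens
    return energy_kwh * grid_gco2e_per_kwh


# Hypothetical factors for illustration: 0.002 kWh per 1k tokens,
# 300 gCO2e/kWh grid intensity. 1,500 tokens -> roughly 0.9 g CO2e.
estimate = tokens_to_co2e_grams(1500, 0.002, 300.0)
```

Keeping the factors as explicit parameters matches the "transparent assumptions" expectation in ESRS-style reporting: the same token log can be re-run whenever a factor or boundary changes.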
4. Disclaimer
This article is educational, not legal advice. Timelines and obligations depend on your role (provider, deployer, importer), product category, and jurisdiction. Verify against the official Journal and your advisors.
Sources & further reading
- European Commission — AI Act implementation timeline (AI Act Service Desk)
- EUR-Lex — Regulation (EU) 2024/1689 (Artificial Intelligence Act)
- European Commission — Corporate sustainability reporting (CSRD)
- carbon-llm — EU AI Act, environment, and LLM disclosure (longer read)
External pages are independent; carbon-llm does not endorse or control third-party content.