diff --git a/examples/decision_lineage_demo/README.md b/examples/decision_lineage_demo/README.md
index 95e596ca..16a2ecc0 100644
--- a/examples/decision_lineage_demo/README.md
+++ b/examples/decision_lineage_demo/README.md
@@ -155,6 +155,10 @@ A talk track and a click-by-click walkthrough live alongside:
`DecisionOption`, `OptionOutcome`, and `DropReason` included),
step by step:
[`DATA_LINEAGE.md`](DATA_LINEAGE.md)
+- Leadership-ready slide deck (Marp; renders to PDF / PPTX / HTML
+ via `marp SLIDES.md --html --pdf`) covering the same 5-question
+ narrative with the ads-domain vocabulary:
+ [`SLIDES.md`](SLIDES.md)
## File map
@@ -166,6 +170,9 @@ decision_lineage_demo/
├── BQ_STUDIO_WALKTHROUGH.md # click-by-click in BQ Studio
├── DEMO_QUESTIONS.md # 5 EU-compliance questions: BQ CA vs. direct GQL
├── DATA_LINEAGE.md # canonical graph + richer demo graph lineage
+├── SLIDES.md # Marp deck source — leadership / Google-Next-style
+├── SLIDES.html # rendered HTML (open in browser, no install needed)
+├── SLIDES.pptx # rendered PowerPoint (editable in Google Slides / Keynote / PPT)
├── setup.sh # one-shot bootstrap
├── reset.sh # tear down dataset + rendered files
├── render_queries.sh # sed-renders the .gql template
diff --git a/examples/decision_lineage_demo/SLIDES.html b/examples/decision_lineage_demo/SLIDES.html
new file mode 100644
index 00000000..af5752e0
--- /dev/null
+++ b/examples/decision_lineage_demo/SLIDES.html
@@ -0,0 +1,10909 @@
+
+Decision Lineage with BigQuery Context Graphs
+Copyright 2026 Google LLC
+Licensed under the Apache License, Version 2.0.
+
+Decision Lineage with BigQuery Context Graphs — leadership deck.
+
+Render with Marp (https://marp.app):
+ marp SLIDES.md --html --pdf # SLIDES.pdf
+ marp SLIDES.md --html --pptx # SLIDES.pptx
+ marp SLIDES.md --html # SLIDES.html
+ marp SLIDES.md --watch --html # live preview
+
+`SLIDES.html` and `SLIDES.pptx` are checked in alongside this
+source so reviewers can open the deck without installing Marp.
+After editing this file, regenerate both with:
+
+ npx -y @marp-team/marp-cli@latest SLIDES.md --html
+ npx -y @marp-team/marp-cli@latest SLIDES.md --html --pptx --no-stdin
+
+(or `marp SLIDES.md --html --pptx` if you have Marp installed
+globally via `npm install -g @marp-team/marp-cli`).
+SPEAKER NOTE — 30s
+Open with business pressure, not fear. The point is not "this certifies
+compliance"; the point is "we can now show our work with data."
+Source: https://ai-act-service-desk.ec.europa.eu/en/ai-act/eu-ai-act-implementation-timeline
+Article 99: 7% applies to prohibited practices; 3% applies to many
+non-Article-5 obligations. Keep the wording precise.
+Talk track: real diversity — different brand, audience, budget, season.
+If live extraction changes, keep the exact counts in the metric row aligned
+with the latest verified dataset or switch them to approximate language.
+Explorer vocabulary in the graph pane:
+CampaignRun, AgentStep, MediaEntity, PlanningDecision, DecisionOption,
+DecisionCategory, OptionOutcome, DropReason; edges CampaignActivity,
+NextStep, DecidedAt, ConsideredEntity, CampaignDecision, WeighedOption,
+HasOutcome, RejectedBecause, InCategory.
+Wrap with: "Three things to take with you. (1) Compliance posture as
+a query. (2) The schema generalizes. (3) Composes with what we run.
+Open for questions."
+
+### Regulation
+EU AI Act obligations phase in through **2 Aug 2027**; many high-risk and transparency rules start **2 Aug 2026**.
+
+
+
+
+
+### Trust
+Most agent demos produce an answer and ask reviewers to trust a transcript. That does not scale to audit.
+
+
+
+
+
+### Operations
+Product, legal, and engineering need the same answer: *what happened, why, and where is the evidence?*
+
+
+
+
+
+
+Risk framing: AI Act fines reach up to 7% of worldwide annual turnover for prohibited practices, and up to 3% for many other operator / transparency obligations. The useful response is evidence, not screenshots.
+
+
+
+Sources: EU AI Act Service Desk implementation timeline; AI Act Article 99 penalty tiers.
+
+
+
+---
+
+# What we built
+
+
+Working demo, not mock data
+
+Take a real ADK media-planning agent, attach the **BigQuery Agent Analytics Plugin**, use `AI.GENERATE` to extract the decisions and options present in the traces, then publish the result as a **BigQuery Property Graph** for GQL in BigQuery Studio.
+
+
+
+Each row is one ADK invocation against `gemini-2.5-pro`; each invocation produced **27 plugin-recorded spans** in the verified demo run.
+
+
+
+---
+
+# Three writers populate seven tables
+
+
+
+
+
+BQ AA Plugin
+Writes `agent_events`
+Raw trace evidence from the live ADK runner.
+
+Writes `context_cross_links`, `made_decision_edges`, `candidate_edges`
+Graph edges with stable keys.
+
+
+
+
+
+`AI.GENERATE` runs twice across all sessions: business entities first, then decision options and rationale.
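+As a hedged sketch (not the demo's actual extraction SQL; the prompt
+wording, connection id, and endpoint are assumptions, while `content`
+and `event_type` follow the `agent_events` description elsewhere in
+the deck), one `AI.GENERATE` pass has roughly this shape:
+
+```sql
+-- Sketch: extract weighed options and rationale from recorded LLM responses.
+SELECT
+  session_id,
+  AI.GENERATE(
+    prompt        => CONCAT('List the options weighed and the rationale in: ', content),
+    connection_id => 'us.gemini_conn',   -- assumed BigQuery connection
+    endpoint      => 'gemini-2.5-flash'  -- assumed extraction model
+  ).result AS extracted
+FROM `PROJECT.DATASET.agent_events`      -- placeholder project/dataset
+WHERE event_type = 'LLM_RESPONSE';
+```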
+
+---
+
+# The graph the demo presents
+
+The SDK ships the canonical graph. The demo adds **ads-domain labels** so BigQuery Studio reads like the business process:
+
+
+
+
+
+### Primary audit path
+`CampaignRun` → `PlanningDecision` → `DecisionOption` → `OptionOutcome`
+
+`DecisionOption` → `DropReason`
+
+A campaign has planning decisions; each decision weighed options; every option has an outcome and dropped options have rationale.
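+As a sketch, the whole path collapses to one traversal. Edge names come
+from the Explorer vocabulary listed in the speaker notes; the graph name
+is a placeholder and the `campaign_name` and `reason` properties are
+assumptions, while `decision_type`, `name`, and `status` are properties
+the deck's verified queries use:
+
+```sql
+GRAPH `PROJECT.DATASET.rich_agent_context_graph`  -- placeholder project/dataset
+MATCH (run:CampaignRun)-[:CampaignDecision]->(dp:PlanningDecision)
+      -[:WeighedOption]->(opt:DecisionOption)-[:HasOutcome]->(out:OptionOutcome)
+OPTIONAL MATCH (opt)-[:RejectedBecause]->(why:DropReason)
+RETURN run.campaign_name, dp.decision_type, opt.name, opt.status, why.reason;
+```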
+
+
+
+
+
+### Evidence path
+`CampaignRun` → `AgentStep` → `PlanningDecision`
+
+`AgentStep` → `MediaEntity`
+
+`PlanningDecision` → `DecisionCategory`
+
+The business answer stays tied to the span, entity, and category that produced it.
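+As a sketch, the evidence path reads as two MATCH patterns. Edge
+directions and the `name` properties are assumptions; `span_id` and
+`event_type` are the properties the deck's Q4 query uses, and the
+graph name uses placeholders:
+
+```sql
+GRAPH `PROJECT.DATASET.rich_agent_context_graph`  -- placeholder project/dataset
+MATCH (run:CampaignRun)-[:CampaignActivity]->(step:AgentStep)
+      -[:DecidedAt]->(dp:PlanningDecision)-[:InCategory]->(cat:DecisionCategory)
+MATCH (step)-[:ConsideredEntity]->(ent:MediaEntity)
+RETURN dp.span_id, step.event_type, ent.name, cat.name
+LIMIT 20;
+```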
+
+
+
+
+
+
+
+---
+
+
+
+
+Section 2 of 4
+
+# Five regulator-shaped questions, answered live
+
+---
+
+# Q1 — Right to explanation
+
+
+EU AI Act Art. 86
+GDPR Art. 22
+
+> *"For the Nike Summer Run campaign, what audience did the AI pick, what alternatives did it consider, and why did it reject the others?"*
+
+```sql
+GRAPH `
+```
+
+**The empty result is the audit artifact.** *"We ran the human-oversight predicate against the entire portfolio for this period. The trigger never fired."* Tighten the threshold to 0.85 → instant new at-risk list.
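+Schematically, that threshold tweak is a one-line predicate change.
+This is a sketch, not the deck's verified query: the graph name is a
+placeholder and the `'selected'` status value is an assumption, while
+`score`, `status`, `session_id`, and `decision_type` are properties the
+deck's other queries use:
+
+```sql
+GRAPH `PROJECT.DATASET.rich_agent_context_graph`  -- placeholder project/dataset
+MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
+WHERE opt.status = 'selected'   -- assumed status value
+  AND opt.score < 0.85          -- the tightened oversight threshold
+RETURN dp.session_id, dp.decision_type, opt.name, opt.score
+ORDER BY opt.score;
+```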
+
+---
+
+# Q4 — Decision reproducibility
+
+
+EU AI Act Art. 12
+EU AI Act Art. 13
+
+> *"Subpoena: produce the full audit trail for the Adidas creative-theme decision."*
+
+```sql
+GRAPH `..rich_agent_context_graph`
+MATCH (step:AgentStep)-[:DecidedAt]->(dp:PlanningDecision)
+ -[:WeighedOption]->(opt:DecisionOption)
+WHERE dp.session_id = ''
+ AND LOWER(dp.decision_type) LIKE '%creative%'
+ AND step.event_type = 'LLM_RESPONSE'
+RETURN DISTINCT dp.span_id, opt.status, opt.name, opt.score, opt.rejection_rationale
+ORDER BY opt.status DESC, opt.score DESC;
+```
+
+**Three rows.** Selected: *"Built for a New Record"* @ 0.97. Two dropped with rationale, both pointing back to the same `evidence_span_id`. That span lives in `agent_events` with content, timestamp, and latency. **Record-keeping becomes a queryable trail.**
+
+---
+
+# Q5 — Systemic / pattern audit
+
+
+EU AI Act Art. 17
+EU AI Act Art. 60
+
+> *"Where in our portfolio does the AI reject candidates least decisively?"*
+
+```sql
+GRAPH `
+```
+
+**Audience Selection has the lowest average dropped score.** Reuse the Q2 filter for repeatable bias review.
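+The category-level review behind this finding can be sketched as an
+aggregate over dropped options, assuming BigQuery GQL's GROUP BY
+aggregation in RETURN; the graph name is a placeholder, and the
+`'dropped'` status value and `cat.name` property are assumptions:
+
+```sql
+GRAPH `PROJECT.DATASET.rich_agent_context_graph`  -- placeholder project/dataset
+MATCH (dp:PlanningDecision)-[:InCategory]->(cat:DecisionCategory),
+      (dp)-[:WeighedOption]->(opt:DecisionOption)
+WHERE opt.status = 'dropped'    -- assumed status value
+RETURN cat.name AS category, AVG(opt.score) AS avg_dropped_score
+GROUP BY category
+ORDER BY avg_dropped_score DESC;  -- highest average = least decisive rejections
+```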
+
+---
+
+
+
+
+Section 3 of 4
+
+# What this unlocks
+
+---
+
+# Three takeaways for leadership
+
+
+
+
+
+### 1. Audit posture as a query
+
+The audit surface for our agent platform is now a **live BigQuery query**, not a slide deck or quarterly review.
+
+### 2. The schema generalizes
+
+Brand-neutral, channel-neutral, decision-type-neutral. Point an instrumented agent at the same pipeline and the same audit patterns apply.
+
+
+
+
+
+### 3. Composes with what we run
+
+- **BQ AA Plugin** — the instrumentation path
+- **SDK extraction** — one method call: `build_context_graph(use_ai_generate=True, include_decisions=True)`
+- **BigQuery** — the warehouse you already pay for
+
+No new graph service. No separate audit datastore.
+
+
+
+
+
+---
+
+# Compliance-anchor map
+
+| Question | EU AI Act | GDPR | DSA |
+|---|---|---|---|
+| **Q1** Right to explanation | Art. 86 | Art. 22 | Art. 26 |
+| **Q2** Bias / fairness audit | Art. 10, Art. 71 | Art. 5(1)(d), Art. 22 | Art. 26 |
+| **Q3** Human oversight | Art. 14 | Art. 22 | — |
+| **Q4** Reproducibility | Art. 12, Art. 13 | Art. 30 | Art. 26 |
+| **Q5** Systemic-pattern audit | Art. 17, Art. 60 | Art. 35 | Art. 26 |
+
+Five queries against one graph create reusable **evidence hooks** for three EU regulatory conversations. This is not a compliance certification; it is the data substrate a compliance review needs.
+
+---
+
+# How fast can we ship this?
+
+
+
+### Cost per setup run
+- 6 live `gemini-2.5-pro` invocations
+- 2 `AI.GENERATE` extraction queries
+- A few hundred BQ rows + one property graph
+
+**Order-of-cents for the verified demo run.** Queries against the prebuilt graph are lightweight.
+
+### From zero to leadership demo
+**Under 10 minutes**, fully reproducible, on any GCP project.
+
+
+
+
+
+---
+
+
+
+
+Section 4 of 4
+
+# Where to go next
+
+---
+
+# Open source — try it on your project
+
+
+
+
+
+### Repository
+[`GoogleCloudPlatform/BigQuery-Agent-Analytics-SDK`](https://github.com/GoogleCloudPlatform/BigQuery-Agent-Analytics-SDK)
+
+### The demo bundle
+`examples/decision_lineage_demo/`
+
+### What ships in the bundle
+- `setup.sh` / `reset.sh` — one-shot bootstrap + tear-down
+- `agent/` + `campaigns.py` — real ADK agent + 6 briefs
+- `run_agent.py` + `build_graph.py` + `build_rich_graph.py`
+- `bq_studio_queries.gql` (rendered) — six GQL blocks
+- `property_graph.gql` (rendered) — recreate-from-tables DDL
+
+
+
+
+
+### Documentation
+- [`README.md`](README.md) — orientation
+- [`SETUP_NEW_PROJECT.md`](SETUP_NEW_PROJECT.md) — clean-project reproduction
+- [`DEMO_NARRATION.md`](DEMO_NARRATION.md) — 5-min leadership talk track
+- [`BQ_STUDIO_WALKTHROUGH.md`](BQ_STUDIO_WALKTHROUGH.md) — click-by-click in BQ Studio
+- [`DEMO_QUESTIONS.md`](DEMO_QUESTIONS.md) — the 5 EU questions with verified GQL
+- [`DATA_LINEAGE.md`](DATA_LINEAGE.md) — how the 7 tables produce the graph
+- [`SLIDES.md`](SLIDES.md) — this deck
+
+### License
+Apache 2.0
+
+
+
+
+
+---
+
+# Anticipated questions
+
+| Q | Short answer |
+|---|---|
+| **How long to deploy?** | Plugin integration, SDK call, committed DDL. **Days, not quarters**, for a pilot. |
+| **Other agents we haven't instrumented?** | Same plugin, same integration point. The schema doesn't care which agent wrote the spans. |
+| **Cost?** | Two `AI.GENERATE` calls per build + standard BQ query cost. **Order-of-cents for this verified demo run.** |
+| **What if `AI.GENERATE` misses a decision?** | Build script reports per-session counts. Re-run extraction without re-running the agent. Talk track is count-agnostic by design. |
+| **Does this expose PII?** | Stores trace-derived rationale text. PII handling follows the existing `agent_events` retention and access policy. |
+| **Who operates this?** | The team that owns the agent platform. Plugin write path + SDK read path both already on-call. |
+
+---
+
+
+
+
+# Q&A
+
+## From agent behavior to auditable evidence
+
+
+
+
diff --git a/examples/decision_lineage_demo/SLIDES.pptx b/examples/decision_lineage_demo/SLIDES.pptx
new file mode 100644
index 00000000..d3e76d2f
Binary files /dev/null and b/examples/decision_lineage_demo/SLIDES.pptx differ