diff --git a/examples/decision_lineage_demo/SLIDES.html b/examples/decision_lineage_demo/SLIDES.html
index af5752e0..6d2577c4 100644

Decision Lineage with BigQuery Context Graphs

EU AI Act · GDPR · DSA

Decision Lineage for AI Agents


Turn live agent behavior into queryable BigQuery evidence: decisions, options, outcomes, and rationale.


From "Systems of Action" to "Systems of Governance" — extracted decisions, options, outcomes, and rationale queryable in BigQuery.

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo

Why this matters now


Regulation


EU AI Act obligations phase in through 2 Aug 2027; many high-risk and transparency rules start 2 Aug 2026.


Trust


Most agent demos produce an answer and ask reviewers to trust a transcript. That does not scale to audit.


Operations


Product, legal, and engineering need the same answer: what happened, why, and where is the evidence?

Risk framing: AI Act fines reach up to 7% of worldwide annual turnover for prohibited practices, and up to 3% for many other operator / transparency obligations. The useful response is evidence, not screenshots.
Sources: EU AI Act Service Desk implementation timeline; AI Act Article 99 penalty tiers.
Part 1 · Market context

The regulatory landscape just changed

.accent-green { color: #188038; }
.accent-yellow { color: #b06000; }
.accent-red { color: #d93025; }

section.dense { font-size: 22px; padding: 44px 54px; }
section.dense h1 { font-size: 34px; margin-bottom: 8px; }
section.dense h2 { font-size: 22px; }
section.dense h3 { font-size: 18px; }
section.dense table { font-size: 17px; margin: 8px 0; }
section.dense th, section.dense td { padding: 5px 8px; }
section.dense pre { font-size: 12px; padding: 12px 14px; }
section.dense .small { font-size: 16px; }
section.dense .compact { font-size: 18px; }
section.dense footer { display: none; }

footer { color: #5f6368; font-size: 14px; }


What we built

Working demo, not mock data

Take a real ADK media-planning agent, attach the BigQuery Agent Analytics Plugin, use AI.GENERATE to extract the decisions and options present in the traces, then publish the result as a BigQuery Property Graph for GQL in BigQuery Studio.

1. Live agent: Gemini 2.5 Pro campaign planner
2. Plugin spans: 162 recorded events in BigQuery
3. Extraction: entities, decisions, options, rationales
4. Property graph: query, visualize, audit

Why now — three regulations converging


EU AI Act


Regulation (EU) 2024/1689, in force 1 Aug 2024.
Most operator obligations apply from 2 Aug 2026; full rollout by 2 Aug 2027.

High-risk-system rules cover transparency, record-keeping, human oversight, and post-market monitoring.

GDPR


Article 22 — protections around solely automated decisions with legal or similarly significant effects.

Access and transparency rights create the "meaningful information about the logic involved" audit expectation.

Digital Services Act


Article 26 — online platforms presenting ads must disclose that an item is an ad, who paid for it, and the main targeting parameters.

The demo's ad-planning lineage gives teams the upstream evidence behind those disclosures.
Promise: the graph is derived from real plugin output. The demo does not rely on hand-shaped seed traces.
Section 1 of 4

The data behind today's demo


The threat — Article 99 penalty tiers


The AI Act sets fixed-amount maximums and turnover thresholds. The fine is the higher of the two for non-SMEs.

| Article 99 paragraph | What violates it | Cap (non-SME) |
| --- | --- | --- |
| Art 99(3) | Article 5 — prohibited AI practices (e.g. manipulation that exploits vulnerabilities, social scoring, biometric categorisation by protected attributes) | €35M or 7% of worldwide annual turnover, whichever is higher |
| Art 99(4) | Most other AI Act obligations — operators of high-risk systems, transparency, record-keeping, post-market monitoring (Arts 16, 22, 23, 24, 26, 31, 33, 34, 50) | €15M or 3% of worldwide annual turnover, whichever is higher |
| Art 99(5) | Supplying incorrect, incomplete, or misleading information to authorities | €7.5M or 1% of worldwide annual turnover, whichever is higher |
For an ad-tech buyer with €20B worldwide annual turnover, the practical maximum under Art 99(4) is €600M per finding (3% of turnover, not the €15M floor). For SMEs and start-ups, Art 99(6) caps the fine at the lower of the two values.
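The higher-of rule is a one-line computation. A minimal sketch in BigQuery SQL, using the illustrative €20B turnover from this slide and the Article 99 caps quoted above:

```sql
-- Fine cap = the higher of the fixed maximum and the turnover percentage.
-- Illustrative worldwide turnover of EUR 20B, per the example above.
SELECT
  GREATEST(15000000.0, 0.03 * 20000000000) AS art_99_4_cap_eur,  -- 600 million
  GREATEST(35000000.0, 0.07 * 20000000000) AS art_99_3_cap_eur;  -- 1.4 billion
```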

Six campaigns, six live agent runs

| Brand | Campaign | Budget | Audience |
| --- | --- | --- | --- |
| Nike | Summer Run 2026 | $360K | Serious runners 18-35 |
| Nike | Winter Trail 2026 | $500K | Trail-runners & hikers 25-45 |
| Adidas | Track Season 2026 | $420K | NCAA & HS sprinters 16-22 |
| Puma | Soccer Cup 2026 | $280K | Soccer fans 18-30 |
| Reebok | CrossFit Open 2026 | $340K | Fitness pros 25-40 |
| Lululemon | Yoga Flow 2026 | $250K | Yoga practitioners 22-45 |

6 sessions · 162 plugin spans · 31 extracted decisions · 92 decision options

The narrative shift


Yesterday — Systems of Action


AI agents that do tasks end-to-end:

• Pick the audience for a campaign
• Allocate the media budget
• Choose the creative theme
• Set the launch window

The agent does. We trust the output.


Each row is one ADK invocation against gemini-2.5-pro; each invocation produced 27 plugin-recorded spans in the verified demo run.


Today — Systems of Governance


The same agents, plus a queryable evidence layer:

• The decisions extracted from the agent trace
• The options and scores the trace exposes
• The rationale attached to dropped options
• Linked to the trace span that produced it

The agent acts. The graph proves.


The mandate isn't "don't use agents" — it's "be able to show your work, on demand, in audit format."


Three writers populate seven tables


BQ AA Plugin

Writes `agent_events`
Raw trace evidence from the live ADK runner.

SDK AI.GENERATE

Writes `extracted_biz_nodes`, `decision_points`, `candidates`
Typed facts extracted from trace text.

SDK SQL DML

Writes `context_cross_links`, `made_decision_edges`, `candidate_edges`
Graph edges with stable keys.
`AI.GENERATE` runs twice across all sessions: business entities first, then decision options and rationale.
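The shape of the second pass can be sketched as a single BigQuery statement. This is a hedged sketch, not the SDK's actual SQL: the `demo_ds` dataset, the `span_text` column, and the connection/endpoint values are illustrative assumptions; only the table names come from this slide.

```sql
-- Sketch: second extraction pass — options and rationale from trace text.
-- demo_ds, span_text, and the connection/endpoint names are assumptions.
INSERT INTO demo_ds.candidates (session_id, option_json)
SELECT
  session_id,
  AI.GENERATE(
    CONCAT('List every option this agent weighed, its score, whether it was ',
           'selected, and the rejection rationale for dropped options, as JSON: ',
           span_text),
    connection_id => 'us.gemini_conn',
    endpoint => 'gemini-2.5-pro'
  ).result AS option_json
FROM demo_ds.agent_events;
```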
Part 2 · Business value

What "Decision Lineage" gives you


The graph the demo presents


The SDK ships the canonical graph. The demo adds ads-domain labels so BigQuery Studio reads like the business process:


Primary audit path

CampaignRun → PlanningDecision → DecisionOption → OptionOutcome
DecisionOption → DropReason

A campaign has planning decisions; each decision weighed options; every option has an outcome and dropped options have rationale.

Evidence path

CampaignRun → AgentStep → PlanningDecision
AgentStep → MediaEntity
PlanningDecision → DecisionCategory

The business answer stays tied to the span, entity, and category that produced it.
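Publishing such a graph uses BigQuery's `CREATE PROPERTY GRAPH` DDL. A minimal sketch covering one hop of the audit path, assuming the writer tables named earlier in the deck (`decision_points`, `candidates`, `candidate_edges`); the dataset name and the key columns are hypothetical:

```sql
-- Sketch: declare part of the audit path as a property graph.
-- demo_ds and the *_id key columns are illustrative assumptions.
CREATE OR REPLACE PROPERTY GRAPH demo_ds.rich_agent_context_graph
  NODE TABLES (
    decision_points AS PlanningDecision KEY (decision_id),
    candidates AS DecisionOption KEY (candidate_id)
  )
  EDGE TABLES (
    candidate_edges AS WeighedOption
      KEY (decision_id, candidate_id)
      SOURCE KEY (decision_id) REFERENCES PlanningDecision (decision_id)
      DESTINATION KEY (candidate_id) REFERENCES DecisionOption (candidate_id)
  );
```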


Decision Lineage, defined

Working definition

For any agent decision: the ability to retrieve what was chosen, what alternatives were considered, what scores or criteria were applied, why each alternative was rejected, and which trace span produced the decision — as a single BigQuery query.

Trust

The data exists at the moment the regulator asks. No reconstruction.

Transparency

Same answer for product, legal, and engineering — one source of truth.

Reproducibility

Re-running the audit query a year later returns the same evidence.
Section 2 of 4

Five regulator-shaped questions, answered live


What the demo lets you prove

Right to explanation · GDPR Art 22 · AI Act Art 86
Bias monitoring · AI Act Art 10 / 71
Human oversight · AI Act Art 14
Reproducibility · AI Act Art 12 / 13

What this is

• A queryable record of agent decisions plus alternatives plus reasoning
• The audit substrate regulators ask for under each article above
• Built from real agent traces by an open-source SDK

What this is not

• Not a compliance certification — talk to your legal counsel for that
• Not a replacement for a Data Protection Impact Assessment
• Not a model-quality scorecard — it audits what the agent did, not whether it was right

Q1 — Right to explanation

EU AI Act Art. 86
GDPR Art. 22

"For the Nike Summer Run campaign, what audience did the AI pick, what alternatives did it consider, and why did it reject the others?"

GRAPH `<P>.<D>.rich_agent_context_graph`
MATCH (cr:CampaignRun)-[:CampaignDecision]->(dp:PlanningDecision)
      -[:WeighedOption]->(opt:DecisionOption)
WHERE cr.session_id = '<SESSION>'
  AND LOWER(dp.decision_type) LIKE '%audience%'
RETURN DISTINCT opt.status, opt.name, opt.score, opt.rejection_rationale
ORDER BY opt.status DESC, opt.score DESC;

Three rows. Selected: Serious Runners 18-35 @ 0.99. Dropped: Casual Runners 25-45 (lower purchase intent), Fitness Enthusiasts 18-35 (group too broad). The extracted rationale stays attached to the option it explains.

Part 3 · Customer proof

A concrete pattern from real ad-tech buyers


Q2 — Bias / fairness audit

EU AI Act Art. 10
EU AI Act Art. 71

"Across our 2026 ad portfolio, did the AI ever reject a candidate based on age or demographic criteria?"

GRAPH `<P>.<D>.rich_agent_context_graph`
MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
WHERE opt.status = 'DROPPED'
  AND (LOWER(opt.rejection_rationale) LIKE '%age %'
       OR LOWER(opt.rejection_rationale) LIKE '%demographic%'
       OR LOWER(opt.rejection_rationale) LIKE '%youth%')
RETURN DISTINCT dp.decision_type, opt.name, opt.rejection_rationale;

Multiple matches. Youth Track & Field (13-15) — outside specified 16-22 range. Affluent Hikers (35-55) — age-range mismatch. The graph surfaces specific rationales for human review: proxy risk, legitimate campaign constraint, or extraction artifact?


Real-world pattern — programmatic media-buyer

Example pattern (anonymised)

The pain


A major brand-side media-buyer running a multi-agent media-planning stack:

• Internal review board needed evidence for campaign decisions (audience, placement, creative, schedule)
• Compliance team owed regulators a quarterly bias-audit report on demographic targeting
• An adjudicator asked "why was this audience excluded from this campaign?" — answer required digging through Slack threads + run logs

The shape of the fix


Decision Lineage on BigQuery:

• Every agent invocation captured by the BQ AA Plugin
• Decisions + alternatives + rationale extracted by AI.GENERATE
• Property graph queryable by compliance + product without writing Python
• Same query reused for the quarterly bias-audit and for one-off subpoena responses
Replace this slide with your customer's pain point and timeline once a reference customer is named — the demo plugs into any agent the BQ AA Plugin already covers.

Q3 — Human-oversight trigger

EU AI Act Art. 14

"Did the agent ever commit a decision below 0.7 confidence? Those should have triggered human review."


Customer voice (placeholder)

"Before this, every audit request was a fire drill across three teams. Now we hand the regulator a five-line GQL query and the answer is the same on every run. The compliance posture moved from defensible to queryable."
GRAPH `<P>.<D>.rich_agent_context_graph`
MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
WHERE opt.status = 'SELECTED' AND opt.score < 0.7
RETURN DISTINCT dp.session_id, dp.decision_type, opt.name, opt.score
ORDER BY opt.score ASC;
0 rows returned
— Director, Audience Strategy at a major DSP [placeholder — swap with a real attributable quote before external use]
~3 weeks · Audit response (before)
One query · Audit response (after)
5 articles · Regulatory hooks mapped
Low cost · BQ query over existing graph

The empty result is the audit artifact. "We ran the human-oversight predicate against the entire portfolio for this period. The trigger never fired." Tighten the threshold to 0.85 → instant new at-risk list.
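Tightening the trigger is a one-constant change to the query shown above — a sketch reusing the deck's placeholder graph name:

```sql
-- Same human-oversight predicate, threshold raised from 0.7 to 0.85.
GRAPH `<P>.<D>.rich_agent_context_graph`
MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
WHERE opt.status = 'SELECTED' AND opt.score < 0.85
RETURN DISTINCT dp.session_id, dp.decision_type, opt.name, opt.score
ORDER BY opt.score ASC;
```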


Q4 — Decision reproducibility

-
EU AI Act Art. 12
EU AI Act Art. 13
-
-

"Subpoena: produce the full audit trail for the Adidas creative-theme decision."

-
-
GRAPH `<P>.<D>.rich_agent_context_graph`
-MATCH (step:AgentStep)-[:DecidedAt]->(dp:PlanningDecision)
-      -[:WeighedOption]->(opt:DecisionOption)
-WHERE dp.session_id = '<SESSION>'
-  AND LOWER(dp.decision_type) LIKE '%creative%'
-  AND step.event_type = 'LLM_RESPONSE'
-RETURN DISTINCT dp.span_id, opt.status, opt.name, opt.score, opt.rejection_rationale
-ORDER BY opt.status DESC, opt.score DESC;
-
-

Three rows. Selected: "Built for a New Record" @ 0.97. Two dropped with rationale, both pointing back to the same evidence_span_id. That span lives in agent_events with content, timestamp, and latency. Record-keeping becomes a queryable trail.

+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9">
Part 4 · Practical demo
+

What an auditor actually sees

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
Q4 — Decision reproducibility .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -6416,7 +7252,7 @@

Q4 — Decision reproducibility

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" data-marpit-pagination="13" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="13" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -6656,6 +7492,41 @@

Q4 — Decision reproducibility

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -6663,54 +7534,28 @@

Q4 — Decision reproducibility

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9"> -

Q5 — Systemic / pattern audit

-
EU AI Act Art. 17
EU AI Act Art. 60
-
-

"Where in our portfolio does the AI reject candidates least decisively?"

-
-
GRAPH `<P>.<D>.rich_agent_context_graph`
-MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
-WHERE opt.status = 'DROPPED'
-RETURN dp.decision_type, COUNT(opt) AS rejections, AVG(opt.score) AS avg_dropped_score
-GROUP BY dp.decision_type
-ORDER BY rejections DESC LIMIT 5;
-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
decision_typerejectionsavg_dropped_score
Creative Theme Selection120.79
Audience Selection120.66 ← lowest
Channel Strategy Selection60.77
Placement Selection70.74
-
Audience Selection has the lowest average dropped score. Reuse the Q2 filter for repeatable bias review.
+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

The auditor persona — five questions, live

+

A compliance reviewer, equipped with the BigQuery Conversational Analytics panel, asks five questions of one dataset:

+
+
+
    +
  • Q1"Why did the agent pick this audience?"
  • +
  • Q2"Did demographic criteria ever cause a candidate to be dropped?"
  • +
  • Q3"Were any decisions committed below 0.7 confidence?"
  • +
  • Q4"Show me the full audit trail for the Adidas creative-theme decision."
  • +
  • Q5"Which decision categories reject candidates least decisively?"
  • +
+
+
+

Why this format

+

Ground the demo in the auditor experience, not the engineering pipeline. The reviewer starts with a business question; the system returns a reproducible evidence path that legal and engineering can inspect together.

+

The next slides show the natural-language prompt, the GQL pattern, and the live answer from the demo dataset.

+
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
-
Q5 — Systemic / pattern audit .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -6957,7 +7837,7 @@

Q5 — Systemic / pattern audit

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" class="section-divider" data-marpit-pagination="14" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--class:section-divider;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="14" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -7197,6 +8077,41 @@

Q5 — Systemic / pattern audit

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -7204,8 +8119,21 @@

Q5 — Systemic / pattern audit

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9">
Section 3 of 4
-

What this unlocks

+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

Q1 — Right to explanation

+
EU AI Act Art 86
GDPR Art 22
+
+

"For Nike Summer Run, what audience did the agent pick, and why did it reject the alternatives?"

+
+
GRAPH `<P>.<D>.rich_agent_context_graph`
+MATCH (cr:CampaignRun)-[:CampaignDecision]->(dp:PlanningDecision)
+      -[:WeighedOption]->(opt:DecisionOption)
+WHERE cr.session_id = '<SESSION>'
+  AND LOWER(dp.decision_type) LIKE '%audience%'
+RETURN DISTINCT opt.status, opt.name, opt.score, opt.rejection_rationale
+ORDER BY opt.status DESC, opt.score DESC;
+
+

Three rows. Selected: Serious Runners 18-35 @ 0.99. Dropped: Casual Runners 25-45 (lower purchase intent on high-performance footwear), Fitness Enthusiasts 18-35 (group too broad for running conversion). Each rationale extracted by AI.GENERATE from the LLM_RESPONSE trace text.
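A minimal sketch of the Q1 ordering logic in Python: the GQL `ORDER BY opt.status DESC, opt.score DESC` puts the selected option first because `'SELECTED'` sorts after `'DROPPED'` lexically. Field names mirror the graph properties; the rows themselves are illustrative stand-ins for the live result.

```python
# Illustrative DecisionOption rows, shaped like the Q1 query result.
options = [
    {"status": "DROPPED", "name": "Casual Runners 25-45", "score": 0.78,
     "rejection_rationale": "lower purchase intent on high-performance footwear"},
    {"status": "SELECTED", "name": "Serious Runners 18-35", "score": 0.99,
     "rejection_rationale": None},
    {"status": "DROPPED", "name": "Fitness Enthusiasts 18-35", "score": 0.71,
     "rejection_rationale": "group too broad for running conversion"},
]

# ORDER BY status DESC, score DESC: 'SELECTED' > 'DROPPED' lexically,
# so a reverse sort on the (status, score) tuple ranks the winner first.
ordered = sorted(options, key=lambda o: (o["status"], o["score"]), reverse=True)
print([o["name"] for o in ordered])
# → ['Serious Runners 18-35', 'Casual Runners 25-45', 'Fitness Enthusiasts 18-35']
```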

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
What this unlocks .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -7455,7 +8418,7 @@

What this unlocks

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" data-marpit-pagination="15" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="15" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -7695,6 +8658,41 @@

What this unlocks

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -7702,31 +8700,32 @@

What this unlocks

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9"> -

Three takeaways for leadership

-
-
-

1. Audit posture as a query

-

The audit surface for our agent platform is now a live BigQuery query, not a slide deck or quarterly review.

-

2. The schema generalizes

-

Brand-neutral, channel-neutral, decision-type-neutral. Point an instrumented agent at the same pipeline and the same audit patterns apply.

-
-
-

3. Composes with what we run

-
    -
  • BQ AA Plugin — the instrumentation path
  • -
  • SDK extraction — one method call: build_context_graph(use_ai_generate=True, include_decisions=True)
  • -
  • BigQuery — the warehouse you already pay for
  • -
-

No new graph service. No separate audit datastore.

-
-
-
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
-
-
+

Q2 — Bias / fairness audit

+
EU AI Act Art 10
EU AI Act Art 71
+
+

"Across our 2026 portfolio, did the agent ever reject a candidate based on age or demographic criteria?"

+
+
GRAPH `<P>.<D>.rich_agent_context_graph`
+MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
+WHERE opt.status = 'DROPPED'
+  AND (LOWER(opt.rejection_rationale) LIKE '%age %'
+       OR LOWER(opt.rejection_rationale) LIKE '%demographic%'
+       OR LOWER(opt.rejection_rationale) LIKE '%youth%')
+RETURN DISTINCT dp.decision_type, opt.name, opt.rejection_rationale;
+
+

Multiple matches. Examples (verbatim from the live extraction):

+
    +
  • "Youth Track & Field (13-15) — outside specified 16-22 range, less focused on in-season purchase"
  • +
  • "Affluent Hikers (35-55) — significant age-range mismatch with target demo"
  • +
+

The graph surfaces the specific rationales for human review: proxy discrimination, or legitimate ad targeting? The reviewer judges from data, not trust.
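A minimal sketch of the Q2 predicate, assuming the same keyword list as the GQL `LIKE` clauses: flag any DROPPED option whose rejection rationale mentions age, demographic, or youth terms. The rows and rationale wording here are illustrative, not the verbatim extraction.

```python
# Keywords mirror the three LIKE patterns in the Q2 query.
KEYWORDS = ("age ", "demographic", "youth")

def flag_demographic_rejections(options):
    """Return DROPPED options whose rationale matches a demographic keyword."""
    hits = []
    for opt in options:
        if opt["status"] != "DROPPED":
            continue
        rationale = (opt.get("rejection_rationale") or "").lower()
        if any(k in rationale for k in KEYWORDS):
            hits.append(opt)
    return hits

rows = [  # illustrative rows in the shape of the extracted DecisionOption table
    {"status": "DROPPED", "name": "Youth Track & Field (13-15)",
     "rejection_rationale": "youth segment outside the specified 16-22 range"},
    {"status": "DROPPED", "name": "Affluent Hikers (35-55)",
     "rejection_rationale": "significant demographic mismatch with the target age band"},
    {"status": "SELECTED", "name": "Serious Runners 18-35",
     "rejection_rationale": None},
]
print([o["name"] for o in flag_demographic_rejections(rows)])
# → ['Youth Track & Field (13-15)', 'Affluent Hikers (35-55)']
```

As with the GQL version, substring matching is a screening heuristic, not a verdict; the point is to route the matched rationales to a human reviewer.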

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
3. Composes with what we run .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -7970,7 +9004,7 @@

3. Composes with what we run

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" data-marpit-pagination="16" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="16" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -8210,6 +9244,41 @@

3. Composes with what we run

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -8217,51 +9286,23 @@

3. Composes with what we run

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9"> -

Compliance-anchor map

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
QuestionEU AI ActGDPRDSA
Q1 Right to explanationArt. 86Art. 22Art. 26
Q2 Bias / fairness auditArt. 10, Art. 71Art. 5(1)(d), Art. 22Art. 26
Q3 Human oversightArt. 14Art. 22
Q4 ReproducibilityArt. 12, Art. 13Art. 30Art. 26
Q5 Systemic-pattern auditArt. 17, Art. 60Art. 35Art. 26
-

Five queries against one graph create reusable evidence hooks for three EU regulatory conversations. This is not a compliance certification; it is the data substrate a compliance review needs.

+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

Q3 — Human-oversight trigger

+
EU AI Act Art 14
+
+

"Did the agent ever commit a decision below 0.7 confidence? That should have triggered human review."

+
+
GRAPH `<P>.<D>.rich_agent_context_graph`
+MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
+WHERE opt.status = 'SELECTED' AND opt.score < 0.7
+RETURN DISTINCT dp.session_id, dp.decision_type, opt.name, opt.score
+ORDER BY opt.score ASC;
+
+
+
0
+
rows returned
+
+

The empty result is the audit artifact. "We ran the human-oversight predicate against the entire portfolio for this period. The trigger never fired." Tighten to 0.85 → instant new at-risk list, same query.
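The threshold predicate can be sketched in a few lines of Python, assuming illustrative portfolio rows: an empty result at 0.7 is the artifact, and the same function with a tighter threshold yields the new at-risk list.

```python
def low_confidence_commits(options, threshold=0.7):
    """SELECTED options committed below the confidence threshold, lowest first."""
    return sorted(
        (o for o in options if o["status"] == "SELECTED" and o["score"] < threshold),
        key=lambda o: o["score"],
    )

portfolio = [  # illustrative rows; the demo dataset has ~6 sessions of these
    {"status": "SELECTED", "name": "Serious Runners 18-35", "score": 0.99},
    {"status": "SELECTED", "name": "Built for a New Record", "score": 0.97},
    {"status": "DROPPED", "name": "Fitness Enthusiasts 18-35", "score": 0.71},
]

print(len(low_confidence_commits(portfolio)))        # 0 — the trigger never fired
print(len(low_confidence_commits(portfolio, 0.98)))  # tightened threshold flags one
```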

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
Compliance-anchor map .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -8511,7 +9587,7 @@

Compliance-anchor map

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" data-marpit-pagination="17" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="17" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -8751,6 +9827,41 @@

Compliance-anchor map

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -8758,55 +9869,34 @@

Compliance-anchor map

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9"> -

How fast can we ship this?

+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

Q4 + Q5 — Reproducibility and pattern audit

-
-

Setup, on a clean GCP project

-
cd examples/decision_lineage_demo
-./setup.sh
+
+

Q4 — Subpoena reproducibility

+
Art 12
Art 13
+
MATCH (step:AgentStep)-[:DecidedAt]->(dp:PlanningDecision)
+      -[:WeighedOption]->(opt:DecisionOption)
+WHERE dp.session_id='<S>'
+  AND LOWER(dp.decision_type) LIKE '%creative%'
+  AND step.event_type='LLM_RESPONSE'
+RETURN dp.span_id, opt.status, opt.name, opt.score,
+       opt.rejection_rationale;
 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
PhaseWall time
Tooling + APIs + venv~30s
Live agent (6 sessions)3-7 min
AI.GENERATE extraction30-90s
Rich-graph projection~10s
Render BQ Studio queries<1s
+

3 rows. All point to one evidence_span_id → that span lives in agent_events with full content + timestamp + latency.
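The evidence join can be sketched as a plain lookup, assuming the demo's field names (`evidence_span_id` on the option, `content`/`timestamp`/`latency_ms` on the event row); the span ID and the dropped option name here are hypothetical placeholders.

```python
# Illustrative agent_events rows keyed by span_id (hypothetical values).
agent_events = {
    "span-42": {"event_type": "LLM_RESPONSE",
                "content": "Decision: Built for a New Record",
                "timestamp": "2026-03-01T10:02:11Z", "latency_ms": 1840},
}

options = [
    {"status": "SELECTED", "name": "Built for a New Record",
     "score": 0.97, "evidence_span_id": "span-42"},
    {"status": "DROPPED", "name": "Chase the Horizon",  # hypothetical name
     "score": 0.81, "evidence_span_id": "span-42"},
]

# Resolve every option back to its originating trace span.
trail = [{**opt, "evidence": agent_events[opt["evidence_span_id"]]}
         for opt in options]
print(all(row["evidence"]["event_type"] == "LLM_RESPONSE" for row in trail))
```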

-
-

Cost per setup run

-
    -
  • 6 live gemini-2.5-pro invocations
  • -
  • 2 AI.GENERATE extraction queries
  • -
  • A few hundred BQ rows + one property graph
  • -
-

Order-of-cents for the verified demo run. Queries against the prebuilt graph are lightweight.

-

From zero to leadership demo

-

Under 10 minutes, fully reproducible, on any GCP project.

+
+

Q5 — Systemic pattern

+
Art 17
Art 60
+
MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption)
+WHERE opt.status='DROPPED'
+RETURN dp.decision_type,
+       COUNT(opt) AS rejections,
+       AVG(opt.score) AS avg_dropped_score
+GROUP BY dp.decision_type
+ORDER BY rejections DESC;
+
+

Audience Selection rejects with the lowest avg confidence (0.66) — the category most worth a fairness loop-back to Q2.
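A minimal Python sketch of the Q5 aggregate, mirroring the query's `GROUP BY decision_type` over DROPPED options; the input rows are illustrative, trimmed to show the Audience Selection average of 0.66.

```python
from collections import defaultdict

def rejection_summary(rows):
    """Count DROPPED options and average their scores per decision_type."""
    buckets = defaultdict(list)
    for r in rows:
        if r["status"] == "DROPPED":
            buckets[r["decision_type"]].append(r["score"])
    summary = [
        {"decision_type": t, "rejections": len(scores),
         "avg_dropped_score": round(sum(scores) / len(scores), 2)}
        for t, scores in buckets.items()
    ]
    return sorted(summary, key=lambda r: r["rejections"], reverse=True)

rows = [  # illustrative subset of the extracted DecisionOption table
    {"decision_type": "Audience Selection", "status": "DROPPED", "score": 0.62},
    {"decision_type": "Audience Selection", "status": "DROPPED", "score": 0.70},
    {"decision_type": "Creative Theme Selection", "status": "DROPPED", "score": 0.79},
    {"decision_type": "Creative Theme Selection", "status": "SELECTED", "score": 0.97},
]
print(rejection_summary(rows))
```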

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
@@ -9051,6 +10141,41 @@

From zero to leadership demo

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -9058,7 +10183,7 @@

From zero to leadership demo

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" class="section-divider" data-marpit-pagination="18" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--class:section-divider;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" class="section-divider" data-marpit-pagination="18" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--class:section-divider;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -9298,6 +10423,41 @@

From zero to leadership demo

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -9305,8 +10465,8 @@

From zero to leadership demo

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9">
Section 4 of 4
-

Where to go next

+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9">
Part 5 · Technical architecture
+

How the evidence is built — every step concrete

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
Where to go next .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -9556,7 +10751,7 @@

Where to go next

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" data-marpit-pagination="19" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="19" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -9796,6 +10991,41 @@

Where to go next

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -9803,41 +11033,46 @@

Where to go next

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9"> -

Open source — try it on your project

-
-
-

Repository

-

GoogleCloudPlatform/BigQuery-Agent-Analytics-SDK

-

The demo bundle

-

examples/decision_lineage_demo/

-

What ships in the bundle

-
    -
  • setup.sh / reset.sh — one-shot bootstrap + tear-down
  • -
  • agent/ + campaigns.py — real ADK agent + 6 briefs
  • -
  • run_agent.py + build_graph.py + build_rich_graph.py
  • -
  • bq_studio_queries.gql (rendered) — six GQL blocks
  • -
  • property_graph.gql (rendered) — recreate-from-tables DDL
  • -
+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

End-to-end pipeline

+
+
+
1
+ADK agent +
Gemini 2.5 Pro · 5 tools · system prompt requires 3-candidate enumeration
-
-

Documentation

- -

License

-

Apache 2.0

+
+
+
2
+BQ AA Plugin +
InMemoryRunner(plugins=[bq_logging_plugin]) → spans land in agent_events
+
+
+
+
3
+SDK extraction +
Two AI.GENERATE calls — biz nodes (MERGE) + decisions (load job)
+
+
+
+
4
+Property graph +
Canonical 4-pillar + ads-domain rich layer queried via GQL
+
+
+
+
+

What runs locally (one-time setup, ~5–10 min)

+

./setup.sh does steps 1-4 end-to-end on a fresh GCP project: enables APIs, creates the dataset, runs the live agent, extracts decisions, emits the property-graph DDL.

+
+
+

What runs at audit time (seconds)

+

Every regulator question is a single GQL query against the property graph. No re-extraction. No agent rerun. The graph is the audit substrate.

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
-
License .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } -footer { - color: #5f6368; - font-size: 14px; +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} +footer { + color: #5f6368; + font-size: 14px; } section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" data-marpit-pagination="20" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" class="dense" data-marpit-pagination="20" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--class:dense;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -10324,6 +11594,41 @@

License

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -10331,45 +11636,67 @@

License

section.hero footer { color: rgba(255,255,255,0.70); } -;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="21" data-size="16:9"> -

Anticipated questions

+;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

Step 1 — The ADK media-planner agent

+
+
+

agent/agent.py

+
    +
  • google.adk.agents.Agent instance
  • +
  • Model: Gemini 2.5 Pro (regional, Vertex AI)
  • +
  • 5 decision-commit tools (one per category)
  • +
  • BQ AA Plugin attached to InMemoryRunner
  • +
+

agent/tools.py — five tools

- - + + - - - - - - + + - - + + - - + + - - + + - - + +
QShort answerToolDecision category
How long to deploy?Plugin integration, SDK call, committed DDL. Days, not quarters, for a pilot.
Other agents we haven't instrumented?Same plugin, same plug-in point. Schema doesn't care which agent wrote the spans.select_audienceaudience selection
Cost?Two AI.GENERATE calls per build + standard BQ query cost. Order-of-cents for this verified demo run.allocate_budgetbudget allocation
What if AI.GENERATE misses a decision?Build script reports per-session counts. Re-run extraction without re-running the agent. Talk track is count-agnostic by design.select_creativecreative theme
Does this expose PII?Stores trace-derived rationale text. PII handling follows the existing agent_events retention and access policy.define_channel_strategychannel strategy
Who operates this?The team that owns the agent platform. Plugin write path + SDK read path both already on-call.schedule_launchlaunch scheduling
+
+
+

agent/prompts.py — system prompt

+

For each structured decision, the prompt instructs the agent to:

+
+1. Name three candidate options
+2. Score each on 0.0–1.0 (two decimals)
+3. Mark exactly one SELECTED, the other two DROPPED
+4. Give an explicit, specific rejection rationale for each dropped option
+5. End with Decision: … then call the corresponding tool +
+
+The prompt structure is the contract that makes downstream extraction reliable. The LLM_RESPONSE text is what AI.GENERATE later parses. +
+
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
-
Anticipated questions .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} footer { color: #5f6368; font-size: 14px; @@ -10616,7 +11978,7 @@

Anticipated questions

section.hero footer { color: rgba(255,255,255,0.70); } -" lang="en-US" class="hero" style="--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--class:hero;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ +" lang="C" data-marpit-pagination="21" style="--paginate:true;--background-color:#ffffff;--color:#202124;--footer:Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo;--theme:gaia;--style:/* Google-style typography, presentation rhythm, and executive-ready components. */ section { font-family: "Google Sans", "Roboto", -apple-system, "Segoe UI", sans-serif; font-size: 27px; @@ -10856,6 +12218,7214 @@

Anticipated questions

.accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } +section.dense { + font-size: 22px; + padding: 44px 54px; +} +section.dense h1 { + font-size: 34px; + margin-bottom: 8px; +} +section.dense h2 { + font-size: 22px; +} +section.dense h3 { + font-size: 18px; +} +section.dense table { + font-size: 17px; + margin: 8px 0; +} +section.dense th, +section.dense td { + padding: 5px 8px; +} +section.dense pre { + font-size: 12px; + padding: 12px 14px; +} +section.dense .small { + font-size: 16px; +} +section.dense .compact { + font-size: 18px; +} +section.dense footer { + display: none; +} +footer { + color: #5f6368; + font-size: 14px; +} +section.hero footer { + color: rgba(255,255,255,0.70); +} +;color:#202124;background-color:#ffffff;background-image:none;" data-marpit-pagination-total="33" data-size="16:9"> +

Step 2 — The 6 campaigns × 27 spans

+
+
+

campaigns.py — 6 briefs

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
BrandCampaignBudget
NikeSummer Run 2026$360K
NikeWinter Trail 2026$500K
AdidasTrack Season 2026$420K
PumaSoccer Cup 2026$280K
ReebokCrossFit Open 2026$340K
LululemonYoga Flow 2026$250K
+

run_agent.py

+

Iterates briefs; one InMemoryRunner invocation per brief; awaits flush() + shutdown() on the plugin so all spans land before extraction starts; writes campaign_runs mapping (deterministic).

+
+
+

Per session — 27 plugin-recorded spans

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Event typeCount
INVOCATION_STARTING1
AGENT_STARTING1
USER_MESSAGE_RECEIVED1
LLM_REQUEST / LLM_RESPONSE5 + 5
TOOL_STARTING / TOOL_COMPLETED5 + 5
HITL_CONFIRMATION_REQUEST / _COMPLETED1 + 1
AGENT_COMPLETED1
INVOCATION_COMPLETED1
+

6 sessions × 27 spans = 162 TechNode rows. Each span carries span_id, parent_span_id, session_id, event_type, agent, timestamp, JSON content, latency_ms.

+
+
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 3a — AI.GENERATE extraction (BizNodes)

+

build_graph.py calls mgr.extract_biz_nodes(session_ids) which runs one MERGE statement against agent_events. The MERGE's USING clause invokes AI.GENERATE per row:

+
MERGE `<P>.<D>.extracted_biz_nodes` AS target
+USING (
+  SELECT base.span_id, base.session_id,
+    JSON_EXTRACT_SCALAR(entity, '$.entity_type')  AS node_type,
+    JSON_EXTRACT_SCALAR(entity, '$.entity_value') AS node_value,
+    CAST(JSON_EXTRACT_SCALAR(entity, '$.confidence') AS FLOAT64) AS confidence
+  FROM `<P>.<D>.agent_events` AS base,
+  UNNEST(JSON_EXTRACT_ARRAY(REGEXP_REPLACE(REGEXP_REPLACE(
+    AI.GENERATE('Extract business entities (Product, Audience, Channel, …) from this payload. Return JSON array of {entity_type, entity_value, confidence}.\n\nPayload:\n' || payload_text,
+                endpoint => 'gemini-2.5-flash').result,
+    r'^```(?:json)?\s*',''), r'\s*```$',''))) AS entity
+  WHERE base.session_id IN UNNEST(@session_ids)
+    AND base.event_type IN ('USER_MESSAGE_RECEIVED','LLM_RESPONSE','TOOL_COMPLETED','AGENT_COMPLETED')
+) AS source
+ON target.biz_node_id = source.biz_node_id
+WHEN MATCHED THEN UPDATE SET …
+WHEN NOT MATCHED BY TARGET THEN INSERT …
+WHEN NOT MATCHED BY SOURCE AND target.session_id IN UNNEST(@session_ids) THEN DELETE
+
+

Per-session idempotent in one statement. No streaming-buffer pitfall. MERGE is the SDK's chosen pattern for the BizNode write path.

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 3b — AI.GENERATE extraction (Decisions + Candidates)

+

mgr.extract_decision_points(session_ids) runs a separate AI.GENERATE query whose prompt asks for structured decision data:

+
Identify decision points in this agent payload. A decision point is where
+the agent evaluated multiple candidates and selected or rejected them.
+For each decision, return decision_type, description, and all candidates
+with name, score (0-1), status (SELECTED or DROPPED), and rejection_rationale
+(null if selected, required reason if dropped).
+
+

The Python side parses each row's JSON, builds DecisionPoint + Candidate records, then store_decision_points(...):

+
+
+
1
+Dedupe in Python +
_dedupe_rows_by_key last-wins on decision_id / candidate_id
+
+
+
2
+DELETE FROM ... WHERE session_id IN (...) +
Per-session reset — guards against re-running
+
+
+
3
+load_table_from_json +
Load job to managed storage — loaded rows are immediately deletable, so the per-session DELETE works on re-runs (no streaming-buffer trap)
+
+
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 3c — SQL-only edge derivation

+

After the node tables exist, three pure-SQL INSERT INTO statements build the edges (no AI.GENERATE):

+
-- Evaluated edge (BizNode ↔ TechNode lineage)
+INSERT INTO context_cross_links (link_id, span_id, biz_node_id, link_type, …)
+SELECT b.biz_node_id, b.span_id, b.biz_node_id, 'EVALUATED', …
+FROM extracted_biz_nodes b WHERE b.session_id IN UNNEST(@session_ids);
+
+-- MadeDecision edge (TechNode → DecisionPoint)
+INSERT INTO made_decision_edges (edge_id, span_id, decision_id, …)
+SELECT CONCAT(span_id, ':MADE_DECISION:', decision_id), span_id, decision_id, …
+FROM decision_points WHERE session_id IN UNNEST(@session_ids);
+
+-- CandidateEdge (DecisionPoint → CandidateNode, with edge_type)
+INSERT INTO candidate_edges (edge_id, decision_id, candidate_id, edge_type, …)
+SELECT …, CASE c.status WHEN 'SELECTED' THEN 'SELECTED_CANDIDATE'
+                        ELSE 'DROPPED_CANDIDATE' END, …
+FROM candidates c WHERE c.session_id IN UNNEST(@session_ids);
+
+

Three tables, one statement each, all per-session-scoped. The edge_type on candidate_edges is what powers Block 4's WHERE ce.edge_type = 'DROPPED_CANDIDATE' filter.

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 4 — The 7 SDK backing tables

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TableKeyWritten byWhat it holds
agent_eventsspan_idBQ AA PluginPlugin-recorded spans (162 rows = 6 sessions × 27 spans)
extracted_biz_nodesbiz_node_idSDK MERGEBusiness entities from trace text
context_cross_linkslink_idSDK DMLSpan ↔ BizNode references
decision_pointsdecision_idSDK load jobExtracted decisions from LLM_RESPONSE text
candidatescandidate_idSDK load jobExtracted options per decision: selected or dropped
made_decision_edgesedge_idSDK DMLSpan → Decision lineage
candidate_edgesedge_idSDK DMLDecision → Candidate, with selected / dropped edge type
+

Every backing table has row_count == distinct_keys after the SDK fix landed in PR #99 — the property-graph KEY contract holds end-to-end.

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 5 — The rich-graph projection layer

+

build_rich_graph.py adds demo-only SQL projections so the BigQuery Studio Explorer reads in business language. No new AI calls in this step:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Derived tableBuilt fromPurpose
campaign_runsrun_agent.py writes directlyOne row per agent invocation, joined to campaign metadata in campaigns.py
rich_agent_stepsagent_events (DISTINCT)Deduped span projection — one row per span_id (TechNode is multi-event per span by design)
rich_decision_typesdecision_pointsNormalised decision categories (audience-selection, budget-allocation, …)
rich_candidate_statusescandidatesDistinct OptionOutcome values (SELECTED, DROPPED)
rich_rejection_reasonscandidatesDistinct rejection-rationale strings as first-class nodes
+

Plus five edge projections wiring the new labels back to SDK-owned facts. Schema lives in rich_property_graph.gql.tpl and is deterministic across reruns.

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 6 — The 8 node labels (what each node means)

+ + + + + + + + + + + + + + +
LabelSource tableKEYWhat it represents
CampaignRuncampaign_runssession_idOne agent invocation against one brief — the unit of audit
AgentSteprich_agent_stepsspan_idOne step the agent took (LLM call, tool invocation, HITL check)
MediaEntityextracted_biz_nodesbiz_node_idAn audience, channel, creative, budget unit, or campaign — extracted from trace text
PlanningDecisiondecision_pointsdecision_idA moment the agent committed to a choice between options
DecisionOptioncandidatescandidate_idOne option weighed at a planning decision (selected or dropped)
DecisionCategoryrich_decision_typesdecision_type_idNormalised decision category (audience selection, budget allocation, …)
OptionOutcomerich_candidate_statusesstatus_idSELECTED or DROPPED — the outcome of weighing
DropReasonrich_rejection_reasonsreason_idA distinct rejection rationale (deduplicated across the portfolio)
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 7 — The 9 edge labels (how the graph connects)

+ + + + + + + + + + + + + + + +
Edge labelSource → DestinationReads as
CampaignActivityCampaignRun → AgentStep"this run produced this step"
NextStepAgentStep → AgentStep (parent_span_id → span_id)"this step caused that step" (causal chain)
ConsideredEntityAgentStep → MediaEntity"this step touched this entity"
DecidedAtAgentStep → PlanningDecision"this step is where the decision committed"
CampaignDecisionCampaignRun → PlanningDecision"this run made this decision"
InCategoryPlanningDecision → DecisionCategory"this decision is an audience-selection / budget-allocation / …"
WeighedOptionPlanningDecision → DecisionOption"this decision considered this option"
HasOutcomeDecisionOption → OptionOutcome"this option was selected / dropped"
RejectedBecauseDecisionOption → DropReason"this option was dropped for this reason"
+
+Read top to bottom in plain English: "this run produced this step → which decided at this planning decision → which weighed this option → which has this outcome → which was rejected because of this reason." Five edges, one query, full audit trail. +
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 8 — How a GQL query actually traverses

+

The visualization GQL (the demo's Block 2):

+
GRAPH `<P>.<D>.rich_agent_context_graph`
+MATCH p = (cr:CampaignRun)-[:CampaignDecision]->(dp:PlanningDecision)
+          -[:WeighedOption]->(opt:DecisionOption)-[:HasOutcome]->(st:OptionOutcome)
+WHERE cr.session_id = '<SESSION>'
+RETURN p;
+
+

What BigQuery does, table by table:

+
    +
  1. campaign_runs → bind cr, filtered by session_id (1 row).
  2. +
  3. rich_campaign_decision_edges → join to decision_points on session_id (~5 decisions per session).
  4. +
  5. candidate_edges → join to candidates on decision_id (~3 options per decision).
  6. +
  7. rich_candidate_status_edges → join to rich_candidate_statuses on status (1 row per option).
  8. +
  9. Return paths bound to p. BigQuery Studio renders these as one fan-out per decision in the Graph tab.
  10. +
+

5 fan-outs of 3 options each = 15 paths visualized for one session, ~97 paths across all 6 campaigns. The traversal is deterministic, the rendering is interactive.

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Step 9 — The temporal dimension

+
+
+

What's timestamped

+
    +
  • agent_events.timestamp — when each plugin-recorded span happened (microsecond precision)
  • +
  • agent_events.latency_ms{total_ms, time_to_first_token_ms}
  • +
  • context_cross_links.created_at — when cross-links were derived
  • +
  • candidate_edges.created_at — when decision edges were derived
  • +
  • decision_points.span_id → traces back to the source span's timestamp
  • +
+
+
+

What you can ask over time

+
    +
  • "Across the last 30 days, which decision categories saw rising rejection rates?"
  • +
  • "Show me the per-day count of decisions made with confidence < 0.7." (oversight trend)
  • +
  • "Compare this quarter's rejection-rationale distribution vs last quarter's." (drift)
  • +
  • "Latency p50/p95 for LLM_RESPONSE spans on the audience-selection decision over the past week."
  • +
+

Each query is a join of made_decision_edgesagent_events plus a WHERE timestamp BETWEEN ... filter — no schema changes needed.

+
+
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
Close
+

Where to start

+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
+

Open source — try it on your project

+
+
+

Repository

+

GoogleCloudPlatform/BigQuery-Agent-Analytics-SDK

+

The demo bundle

+

examples/decision_lineage_demo/

+

One-shot setup (5–10 min)

+
cd examples/decision_lineage_demo
+./setup.sh
+
+

What ships

+
    +
  • setup.sh / reset.sh — bootstrap + tear-down
  • +
  • agent/ + campaigns.py — real ADK agent + 6 briefs
  • +
  • run_agent.py + build_graph.py + build_rich_graph.py
  • +
  • bq_studio_queries.gql — six paste-and-run GQL blocks
  • +
  • property_graph.gql + rich_property_graph.gql — DDL templates
  • +
+
+
+

Documentation that ships with the bundle

+ +

License

+

Apache 2.0

+
+
+
Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
+
+
Anticipated questions } ;color:#202124;background-color:#ffffff;background-image:none;" data-size="16:9">

Q&A

-

From agent behavior to auditable evidence

+

Decision Lineage with BigQuery Context Graphs

Decision Lineage with BigQuery Context Graphs · examples/decision_lineage_demo
@@ -10894,16 +19464,12 @@

From agent behavior to auditable evidence (or `marp SLIDES.md --html --pptx` if you have Marp installed globally via `npm install -g @marp-team/marp-cli`).

SPEAKER NOTE — 30s
-Open with business pressure, not fear. The point is not "this certifies
-compliance"; the point is "we can now show our work with data."

Source: https://ai-act-service-desk.ec.europa.eu/en/ai-act/eu-ai-act-implementation-timeline
-Article 99: 7% applies to prohibited practices; 3% applies to many
-non-Article-5 obligations. Keep the wording precise.

Talk track: real diversity — different brand, audience, budget, season.
-If live extraction changes, keep the exact counts in the metric row aligned
-with the latest verified dataset or switch them to approximate language.

Explorer vocabulary in the graph pane:
-CampaignRun, AgentStep, MediaEntity, PlanningDecision, DecisionOption,
-DecisionCategory, OptionOutcome, DropReason; edges CampaignActivity,
-NextStep, DecidedAt, ConsideredEntity, CampaignDecision, WeighedOption,
-HasOutcome, RejectedBecause, InCategory.

Wrap with: "Three things to take with you. (1) Compliance posture as
-a query. (2) The schema generalizes. (3) Composes with what we run.
-Open for questions."

\ No newline at end of file diff --git a/examples/decision_lineage_demo/SLIDES.md b/examples/decision_lineage_demo/SLIDES.md index cb2bea22..538c7061 100644 --- a/examples/decision_lineage_demo/SLIDES.md +++ b/examples/decision_lineage_demo/SLIDES.md @@ -249,6 +249,41 @@ style: | .accent-green { color: #188038; } .accent-yellow { color: #b06000; } .accent-red { color: #d93025; } + section.dense { + font-size: 22px; + padding: 44px 54px; + } + section.dense h1 { + font-size: 34px; + margin-bottom: 8px; + } + section.dense h2 { + font-size: 22px; + } + section.dense h3 { + font-size: 18px; + } + section.dense table { + font-size: 17px; + margin: 8px 0; + } + section.dense th, + section.dense td { + padding: 5px 8px; + } + section.dense pre { + font-size: 12px; + padding: 12px 14px; + } + section.dense .small { + font-size: 16px; + } + section.dense .compact { + font-size: 18px; + } + section.dense footer { + display: none; + } footer { color: #5f6368; font-size: 14px; @@ -282,6 +317,7 @@ After editing this file, regenerate both with: globally via `npm install -g @marp-team/marp-cli`). --> + @@ -289,202 +325,291 @@ globally via `npm install -g @marp-team/marp-cli`). # Decision Lineage for AI Agents -## Turn live agent behavior into queryable BigQuery evidence: decisions, options, outcomes, and rationale. +## From "Systems of Action" to "Systems of Governance" — extracted decisions, options, outcomes, and rationale queryable in BigQuery. --- -# Why this matters now + +
Part 1 · Market context
+ +# The regulatory landscape just changed + +--- + +# Why now — three regulations converging
-### Regulation
-EU AI Act obligations phase in through **2 Aug 2027**; many high-risk and transparency rules start **2 Aug 2026**.
+### EU AI Act
+**Regulation (EU) 2024/1689**, in force 1 Aug 2024.
+Most operator obligations apply from **2 Aug 2026**; full rollout by **2 Aug 2027**.
+
+
High-risk-system rules cover transparency, record-keeping, human oversight, post-market monitoring.
-### Trust
-Most agent demos produce an answer and ask reviewers to trust a transcript. That does not scale to audit.
+### GDPR
+**Article 22** — protections around solely automated decisions with legal or similarly significant effects.
+
+
Access / transparency rights create the "meaningful information about logic" audit expectation.
-### Operations
-Product, legal, and engineering need the same answer: *what happened, why, and where is the evidence?*
+### Digital Services Act
+**Article 26** — online platforms presenting ads must disclose that an item is an ad, who paid for it, and the main targeting parameters.
-
+
The demo's ad-planning lineage gives teams the upstream evidence behind those disclosures.
-
-Risk framing: AI Act fines reach up to 7% of worldwide annual turnover for prohibited practices, and up to 3% for many other operator / transparency obligations. The useful response is evidence, not screenshots.
-
Sources: EU AI Act Service Desk implementation timeline; AI Act Article 99 penalty tiers.
- - + --- -# What we built + -
Working demo, not mock data
+# The threat — Article 99 penalty tiers -Take a real ADK media-planning agent, attach the **BigQuery Agent Analytics Plugin**, use `AI.GENERATE` to extract the decisions and options present in the traces, then publish the result as a **BigQuery Property Graph** for GQL in BigQuery Studio. - -
+The AI Act sets fixed-amount maximums **and** turnover thresholds. The fine is the **higher of the two** for non-SMEs. -
1
Live agent
Gemini 2.5 Pro campaign planner
-
-
2
Plugin spans
162 recorded events in BigQuery
-
-
3
Extraction
Entities, decisions, options, rationales
-
-
4
Property graph
Query, visualize, audit
+ + + + + + + + + +
Article 99 paragraphWhat violates itCap (non-SME)
Art 99(3)Article 5 — prohibited AI practices (e.g. manipulation that exploits vulnerabilities, social scoring, biometric categorisation by protected attributes)€35M or 7% of worldwide annual turnover, whichever is higher
Art 99(4)Most other AI Act obligations — operators of high-risk systems, transparency, record-keeping, post-market monitoring (Arts 16, 22, 23, 24, 26, 31, 33, 34, 50)€15M or 3% of worldwide annual turnover, whichever is higher
Art 99(5)Supplying incorrect, incomplete, or misleading information to authorities€7.5M or 1% of worldwide annual turnover, whichever is higher
+
+For an ad-tech buyer with €20B worldwide annual turnover, the practical maximum under Art 99(4) is €600M per finding (3%, not the €15M floor). For SMEs and start-ups, Art 99(6) caps at the lower of the two values.
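+The higher-of arithmetic is worth sanity-checking. A throwaway BigQuery expression (illustrative only, not part of the demo bundle) reproduces the €600M figure:
+
+```sql
+-- Art 99(4) tier, non-SME: the ceiling is the higher of the €15M
+-- fixed cap and 3% of worldwide annual turnover (here €20B).
+SELECT GREATEST(15e6, 20e9 * 0.03) AS max_fine_eur;  -- 600000000.0
+```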
-
Promise: the graph is derived from real plugin output. The demo does not rely on hand-shaped seed traces.
+ --- - +# The narrative shift -
Section 1 of 4
+
-# The data behind today's demo +
---- +### Yesterday — Systems of Action +AI agents that do tasks end-to-end: +
    +
  • Pick the audience for a campaign
  • +
  • Allocate the media budget
  • +
  • Choose the creative theme
  • +
  • Set the launch window
  • +
-# Six campaigns, six live agent runs +The agent does. We trust the output. -| Brand | Campaign | Budget | Audience | -|---|---|---:|---| -| Nike | Summer Run 2026 | $360K | Serious runners 18-35 | -| Nike | Winter Trail 2026 | $500K | Trail-runners & hikers 25-45 | -| Adidas | Track Season 2026 | $420K | NCAA & HS sprinters 16-22 | -| Puma | Soccer Cup 2026 | $280K | Soccer fans 18-30 | -| Reebok | CrossFit Open 2026 | $340K | Fitness pros 25-40 | -| Lululemon | Yoga Flow 2026 | $250K | Yoga practitioners 22-45 | +
+ +
+ +### Today — Systems of Governance +The same agents, plus a queryable evidence layer: +
    +
  • The decisions extracted from the agent trace
  • +
  • The options and scores the trace exposes
  • +
  • The rationale attached to dropped options
  • +
  • Linked to the trace span that produced it
  • +
+ +The agent acts. The graph proves. -
-
6sessions
-
162plugin spans
-
31extracted decisions
-
92decision options
-Each row is one ADK invocation against `gemini-2.5-pro`; each invocation produced **27 plugin-recorded spans** in the verified demo run. +
- +> The mandate isn't *don't use agents* — it's *be able to show your work, on demand, in audit format*. --- -# Three writers populate seven tables + +
Part 2 · Business value
-
+# What "Decision Lineage" gives you -
-

BQ AA Plugin

-Writes `agent_events`
-Raw trace evidence from the live ADK runner. +--- + +# Decision Lineage, defined + +
Working definition
+ +
+For any agent decision: the ability to retrieve what was chosen, what alternatives were considered, what scores or criteria were applied, why each alternative was rejected, and which trace span produced the decision — as a single BigQuery query. +
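+
+As a sketch of that definition in practice — label and property names below mirror the demo's rich graph (`CampaignRun`, `PlanningDecision`, `DecisionOption`); `<P>` / `<D>` / `<SESSION>` are placeholders as in the later query slides:
+
+```sql
+GRAPH `<P>.<D>.rich_agent_context_graph`
+MATCH (cr:CampaignRun)-[:CampaignDecision]->(dp:PlanningDecision)
+      -[:WeighedOption]->(opt:DecisionOption)
+WHERE cr.session_id = '<SESSION>'
+RETURN dp.decision_type,           -- what was decided
+       dp.span_id,                 -- which trace span produced it
+       opt.status, opt.name,       -- chosen vs alternatives
+       opt.score,                  -- scores applied
+       opt.rejection_rationale;    -- why each alternative was rejected
+```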
+ +
+

Trust

The data exists at the moment the regulator asks. No reconstruction.
+

Transparency

Same answer for product, legal, and engineering — one source of truth.
+

Reproducibility

Re-running the audit query a year later returns the same evidence.
-
-

SDK AI.GENERATE

-Writes `extracted_biz_nodes`, `decision_points`, `candidates`
-Typed facts extracted from trace text. +--- + +# What the demo lets you prove + +
+
Right to explanationGDPR Art 22 · AI Act Art 86
+
Bias monitoringAI Act Art 10 / 71
+
Human oversightAI Act Art 14
+
ReproducibilityAI Act Art 12 / 13
+
-

SDK SQL DML

-Writes `context_cross_links`, `made_decision_edges`, `candidate_edges`
-Graph edges with stable keys. + +### What this is +- A queryable record of agent decisions plus alternatives plus reasoning +- The audit substrate regulators ask for under each article above +- Built from real agent traces by an open-source SDK +
+
+ +### What this is not +- Not a compliance certification — talk to your legal counsel for that +- Not a replacement for a Data Protection Impact Assessment +- Not a model-quality scorecard — it audits what the agent did, not whether it was right +
-
`AI.GENERATE` runs twice across all sessions: business entities first, then decision options and rationale.
+--- + + +
Part 3 · Customer proof
+ +# A concrete pattern from real ad-tech buyers --- -# The graph the demo presents +# Real-world pattern — programmatic media-buyer -The SDK ships the canonical graph. The demo adds **ads-domain labels** so BigQuery Studio reads like the business process: +
Example pattern (anonymised)
-### Primary audit path -`CampaignRun` → `PlanningDecision` → `DecisionOption` → `OptionOutcome` +### The pain +A major brand-side media-buyer running a multi-agent media-planning stack: +
    +
  • Internal review board needed evidence for campaign decisions (audience, placement, creative, schedule)
  • +
  • Compliance team owed regulators a quarterly bias-audit report on demographic targeting
  • +
  • An adjudicator asked "why was this audience excluded from this campaign?" — answer required digging through Slack threads + run logs
  • +
-`DecisionOption` → `DropReason` +
-A campaign has planning decisions; each decision weighed options; every option has an outcome and dropped options have rationale. +
+ +### The shape of the fix +Decision Lineage on BigQuery: +
    +
  • Every agent invocation captured by the BQ AA Plugin
  • +
  • Decisions + alternatives + rationale extracted by AI.GENERATE
  • +
  • Property graph queryable by compliance + product without writing Python
  • +
  • Same query reused for the quarterly bias-audit and for one-off subpoena responses
  • +
-
+
-### Evidence path -`CampaignRun` → `AgentStep` → `PlanningDecision` +
+Replace this slide with your customer's pain point and timeline once a reference customer is named — the demo plugs into any agent the BQ AA Plugin already covers. +
-`AgentStep` → `MediaEntity` +--- -`PlanningDecision` → `DecisionCategory` +# Customer voice (placeholder) -The business answer stays tied to the span, entity, and category that produced it. +
+"Before this, every audit request was a fire drill across three teams. Now we hand the regulator a five-line GQL query and the answer is the same on every run. The compliance posture moved from defensible to queryable." +
+
+— Director, Audience Strategy at a major DSP [placeholder — swap with a real attributable quote before external use]
+
+
~3 weeksAudit response (before)
+
One queryAudit response (after)
+
5 articlesRegulatory hooks mapped
+
Low costBQ query over existing graph
- - --- +
Part 4 · Practical demo
+ +# What an auditor actually sees + +--- + +# The auditor persona — five questions, live + +A compliance reviewer, equipped with the BigQuery Conversational Analytics panel, asks five questions of one dataset: + +
+
+
    +
  • Q1"Why did the agent pick this audience?"
  • +
  • Q2"Did demographic criteria ever cause a candidate to be dropped?"
  • +
  • Q3"Were any decisions committed below 0.7 confidence?"
  • +
  • Q4"Show me the full audit trail for the Adidas creative-theme decision."
  • +
  • Q5"Which decision categories reject candidates least decisively?"
  • +
+
+
-
Section 2 of 4
+### Why this format +Ground the demo in the **auditor experience**, not the engineering pipeline. The reviewer starts with a business question; the system returns a reproducible evidence path that legal and engineering can inspect together. -# Five regulator-shaped questions, answered live +The next slides show the natural-language prompt, the GQL pattern, and the live answer from the demo dataset. +
+
--- # Q1 — Right to explanation -
EU AI Act Art. 86
GDPR Art. 22
+
EU AI Act Art 86
GDPR Art 22
-> *"For the Nike Summer Run campaign, what audience did the AI pick, what alternatives did it consider, and why did it reject the others?"* +> *"For Nike Summer Run, what audience did the agent pick, and why did it reject the alternatives?"* ```sql GRAPH `

..rich_agent_context_graph` @@ -496,15 +621,15 @@ RETURN DISTINCT opt.status, opt.name, opt.score, opt.rejection_rationale ORDER BY opt.status DESC, opt.score DESC; ``` -**Three rows.** Selected: *Serious Runners 18-35* @ 0.99. Dropped: *Casual Runners 25-45* (lower purchase intent), *Fitness Enthusiasts 18-35* (group too broad). **The extracted rationale stays attached to the option it explains.** +**Three rows.** Selected: *Serious Runners 18-35* @ 0.99. Dropped: *Casual Runners 25-45* (lower purchase intent on high-performance footwear), *Fitness Enthusiasts 18-35* (group too broad for running conversion). **Each rationale extracted by `AI.GENERATE` from the LLM_RESPONSE trace text.** --- # Q2 — Bias / fairness audit -

EU AI Act Art. 10
EU AI Act Art. 71
+
EU AI Act Art 10
EU AI Act Art 71
-> *"Across our 2026 ad portfolio, did the AI ever reject a candidate based on age or demographic criteria?"* +> *"Across our 2026 portfolio, did the agent ever reject a candidate based on age or demographic criteria?"* ```sql GRAPH `

..rich_agent_context_graph` @@ -516,15 +641,19 @@ WHERE opt.status = 'DROPPED' RETURN DISTINCT dp.decision_type, opt.name, opt.rejection_rationale; ``` -**Multiple matches.** *Youth Track & Field (13-15) — outside specified 16-22 range.* *Affluent Hikers (35-55) — age-range mismatch.* The graph surfaces specific rationales for **human review**: proxy risk, legitimate campaign constraint, or extraction artifact? +Multiple matches. Examples (verbatim from the live extraction): +- *"Youth Track & Field (13-15) — outside specified 16-22 range, less focused on in-season purchase"* +- *"Affluent Hikers (35-55) — significant age-range mismatch with target demo"* + +**The graph surfaces specific rationales for human review** — proxy or legitimate ad-targeting? The reviewer judges from data, not trust. --- # Q3 — Human-oversight trigger -

EU AI Act Art. 14
+
EU AI Act Art 14
-> *"Did the agent ever commit a decision below 0.7 confidence? Those should have triggered human review."* +> *"Did the agent ever commit a decision below 0.7 confidence? That should have triggered human review."* ```sql GRAPH `

..rich_agent_context_graph` @@ -534,95 +663,202 @@ RETURN DISTINCT dp.session_id, dp.decision_type, opt.name, opt.score ORDER BY opt.score ASC; ``` -

+
0
rows returned
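The oversight predicate is simple enough to smoke-test off-warehouse. A minimal Python sketch of the same threshold check — the row shape here is illustrative, not the SDK API:

```python
# Hypothetical helper mirroring the Q3 oversight predicate:
# flag committed (SELECTED) options whose confidence fell below threshold.
def oversight_violations(options, threshold=0.7):
    """Return (session_id, name, score) rows that should have triggered review."""
    return sorted(
        (o["session_id"], o["name"], o["score"])
        for o in options
        if o["status"] == "SELECTED" and o["score"] < threshold
    )

portfolio = [
    {"session_id": "s1", "name": "Serious Runners 18-35",
     "status": "SELECTED", "score": 0.99},
    {"session_id": "s1", "name": "Casual Runners 25-45",
     "status": "DROPPED", "score": 0.62},
]
assert oversight_violations(portfolio) == []          # trigger never fired
assert oversight_violations(portfolio, threshold=0.995)  # tighter bar → hits
```

Tightening `threshold` reproduces the "instant new at-risk list" behavior without changing the query shape.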
-**The empty result is the audit artifact.** *"We ran the human-oversight predicate against the entire portfolio for this period. The trigger never fired."* Tighten the threshold to 0.85 → instant new at-risk list. +**The empty result is the audit artifact.** "We ran the human-oversight predicate against the entire portfolio for this period. The trigger never fired." Tighten to 0.85 → instant new at-risk list, same query. --- -# Q4 — Decision reproducibility +# Q4 + Q5 — Reproducibility and pattern audit + +
-
EU AI Act Art. 12
EU AI Act Art. 13
+
-> *"Subpoena: produce the full audit trail for the Adidas creative-theme decision."* +### Q4 — Subpoena reproducibility +
Art 12
Art 13
```sql -GRAPH `

..rich_agent_context_graph` MATCH (step:AgentStep)-[:DecidedAt]->(dp:PlanningDecision) -[:WeighedOption]->(opt:DecisionOption) -WHERE dp.session_id = '' +WHERE dp.session_id='' AND LOWER(dp.decision_type) LIKE '%creative%' - AND step.event_type = 'LLM_RESPONSE' -RETURN DISTINCT dp.span_id, opt.status, opt.name, opt.score, opt.rejection_rationale -ORDER BY opt.status DESC, opt.score DESC; + AND step.event_type='LLM_RESPONSE' +RETURN dp.span_id, opt.status, opt.name, opt.score, + opt.rejection_rationale; ``` -**Three rows.** Selected: *"Built for a New Record"* @ 0.97. Two dropped with rationale, both pointing back to the same `evidence_span_id`. That span lives in `agent_events` with content, timestamp, and latency. **Record-keeping becomes a queryable trail.** - ---- +3 rows. All point to one `evidence_span_id` → that span lives in `agent_events` with full content + timestamp + latency. -# Q5 — Systemic / pattern audit +

-
EU AI Act Art. 17
EU AI Act Art. 60
+
-> *"Where in our portfolio does the AI reject candidates least decisively?"* +### Q5 — Systemic pattern +
Art 17
Art 60
```sql -GRAPH `

..rich_agent_context_graph` MATCH (dp:PlanningDecision)-[:WeighedOption]->(opt:DecisionOption) -WHERE opt.status = 'DROPPED' -RETURN dp.decision_type, COUNT(opt) AS rejections, AVG(opt.score) AS avg_dropped_score +WHERE opt.status='DROPPED' +RETURN dp.decision_type, + COUNT(opt) AS rejections, + AVG(opt.score) AS avg_dropped_score GROUP BY dp.decision_type -ORDER BY rejections DESC LIMIT 5; +ORDER BY rejections DESC; ``` -| decision_type | rejections | avg_dropped_score | -|---|---:|---:| -| Creative Theme Selection | 12 | 0.79 | -| Audience Selection | 12 | **0.66** ← lowest | -| Channel Strategy Selection | 6 | 0.77 | -| Placement Selection | 7 | 0.74 | +**Audience Selection** rejects with the **lowest avg confidence (0.66)** — the category most worth a fairness loop-back to Q2. + +
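Off-warehouse, the same roll-up is a short group-by — a Python sketch over illustrative option rows, not the SDK:

```python
from collections import defaultdict

# Group dropped options by decision_type and compute rejection count
# and mean dropped score — the same aggregation Q5 runs in GQL.
def rejection_profile(options):
    buckets = defaultdict(list)
    for o in options:
        if o["status"] == "DROPPED":
            buckets[o["decision_type"]].append(o["score"])
    return {
        dtype: (len(scores), round(sum(scores) / len(scores), 2))
        for dtype, scores in buckets.items()
    }

opts = [
    {"decision_type": "Audience Selection", "status": "DROPPED", "score": 0.60},
    {"decision_type": "Audience Selection", "status": "DROPPED", "score": 0.72},
    {"decision_type": "Creative Theme Selection", "status": "DROPPED", "score": 0.79},
    {"decision_type": "Creative Theme Selection", "status": "SELECTED", "score": 0.97},
]
print(rejection_profile(opts))
```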

-
Audience Selection has the lowest average dropped score. Reuse the Q2 filter for repeatable bias review.
+
--- +
Part 5 · Technical architecture
+ +# How the evidence is built — every step concrete + +--- + +# End-to-end pipeline + +
+ +
+
1
+ADK agent +
Gemini 2.5 Pro · 5 tools · system prompt requires 3-candidate enumeration
+
+
+ +
+
2
+BQ AA Plugin +
InMemoryRunner(plugins=[bq_logging_plugin]) → spans land in agent_events
+
+
-
Section 3 of 4
+
+
3
+SDK extraction +
Two AI.GENERATE calls — biz nodes (MERGE) + decisions (load job)
+
+
+ +
+
4
+Property graph +
Canonical 4-pillar + ads-domain rich layer queried via GQL
+
+ +
+ +
+
-# What this unlocks +### What runs locally (one-time setup, ~5–10 min) +`./setup.sh` does steps 1-4 end-to-end on a fresh GCP project: enables APIs, creates the dataset, runs the live agent, extracts decisions, emits the property-graph DDL. + +
+
+ +### What runs at audit time (seconds) +Every regulator question is a single GQL query against the property graph. No re-extraction. No agent rerun. **The graph is the audit substrate.** + +
+
--- -# Three takeaways for leadership + + +# Step 1 — The ADK media-planner agent
-### 1. Audit posture as a query +### `agent/agent.py` +- `google.adk.agents.Agent` instance +- Model: **Gemini 2.5 Pro** (regional, Vertex AI) +- 5 decision-commit tools (one per category) +- BQ AA Plugin attached to `InMemoryRunner` -The audit surface for our agent platform is now a **live BigQuery query**, not a slide deck or quarterly review. +### `agent/tools.py` — five tools +| Tool | Decision category | +|---|---| +| `select_audience` | audience selection | +| `allocate_budget` | budget allocation | +| `select_creative` | creative theme | +| `define_channel_strategy` | channel strategy | +| `schedule_launch` | launch scheduling | + +
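A rough sketch of the tool shape — plain functions that return the committed record so the trace carries full lineage. Field names and signatures below are assumptions for illustration, not the demo's actual `agent/tools.py` code:

```python
# Illustrative decision-commit tool: records the selected option plus
# the dropped alternatives. Field names are assumptions, not the demo schema.
def select_audience(selected: str, score: float, dropped: list[dict]) -> dict:
    """Commit an audience-selection decision with its weighed options."""
    return {
        "decision_type": "audience selection",
        "selected": {"name": selected, "score": score, "status": "SELECTED"},
        "dropped": [{**d, "status": "DROPPED"} for d in dropped],
    }

commit = select_audience(
    "Serious Runners 18-35", 0.99,
    dropped=[{"name": "Casual Runners 25-45", "score": 0.62,
              "rejection_rationale": "lower purchase intent"}],
)
print(commit["selected"]["status"], len(commit["dropped"]))
```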
+ +
+ +### `agent/prompts.py` — system prompt +The prompt instructs the agent to, for each structured decision: -### 2. The schema generalizes +
+1. Name three candidate options
+2. Score each on 0.0–1.0 (two decimals)
+3. Mark exactly one SELECTED, the other two DROPPED
+4. Give an explicit, specific rejection rationale for each dropped option
+5. End with Decision: … then call the corresponding tool +
-Brand-neutral, channel-neutral, decision-type-neutral. Point an instrumented agent at the same pipeline and the same audit patterns apply. +
+The prompt structure is the contract that makes downstream extraction reliable. The LLM_RESPONSE text is what AI.GENERATE later parses. +
+
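Because the contract is structural, it is mechanically checkable. A sketch of such a validator — the candidate-line format below is an assumed rendering of the contract, not the demo's exact prompt output:

```python
import re

# Validate the 3-candidate contract on an LLM_RESPONSE: exactly one
# SELECTED, two DROPPED, scores in [0, 1]. The "- name | score | status"
# line format is an assumption for illustration.
LINE = re.compile(r"^- (?P<name>.+?) \| (?P<score>\d\.\d{2}) \| (?P<status>SELECTED|DROPPED)")

def check_contract(response: str) -> bool:
    rows = [m.groupdict() for m in
            (LINE.match(line) for line in response.splitlines()) if m]
    statuses = [r["status"] for r in rows]
    return (len(rows) == 3
            and statuses.count("SELECTED") == 1
            and all(0.0 <= float(r["score"]) <= 1.0 for r in rows))

sample = """- Serious Runners 18-35 | 0.99 | SELECTED
- Casual Runners 25-45 | 0.62 | DROPPED
- Fitness Enthusiasts 18-35 | 0.55 | DROPPED"""
print(check_contract(sample))
```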
+ +--- + +# Step 2 — The 6 campaigns × 27 spans + +
+
-### 3. Composes with what we run +### `campaigns.py` — 6 briefs +| Brand | Campaign | Budget | +|---|---|---:| +| Nike | Summer Run 2026 | $360K | +| Nike | Winter Trail 2026 | $500K | +| Adidas | Track Season 2026 | $420K | +| Puma | Soccer Cup 2026 | $280K | +| Reebok | CrossFit Open 2026 | $340K | +| Lululemon | Yoga Flow 2026 | $250K | -- **BQ AA Plugin** — the instrumentation path -- **SDK extraction** — one method call: `build_context_graph(use_ai_generate=True, include_decisions=True)` -- **BigQuery** — the warehouse you already pay for +### `run_agent.py` +Iterates briefs; one `InMemoryRunner` invocation per brief; awaits `flush()` + `shutdown()` on the plugin so all spans land before extraction starts; writes `campaign_runs` mapping (deterministic). -No new graph service. No separate audit datastore. +
+ +
+ +### Per session — 27 plugin-recorded spans +| Event type | Count | +|---|---:| +| `INVOCATION_STARTING` | 1 | +| `AGENT_STARTING` | 1 | +| `USER_MESSAGE_RECEIVED` | 1 | +| `LLM_REQUEST` / `LLM_RESPONSE` | 5 + 5 | +| `TOOL_STARTING` / `TOOL_COMPLETED` | 5 + 5 | +| `HITL_CONFIRMATION_REQUEST` / `_COMPLETED` | 1 + 1 | +| `AGENT_COMPLETED` | 1 | +| `INVOCATION_COMPLETED` | 1 | + +**6 sessions × 27 spans = 162 TechNode rows.** Each span carries `span_id`, `parent_span_id`, `session_id`, `event_type`, `agent`, `timestamp`, JSON `content`, `latency_ms`.
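A quick integrity check over the span rows is cheap to sketch — the dict shape mirrors the columns named above, with illustrative values:

```python
# Sanity-check plugin spans: per-session counts plus parent linkage
# (every non-null parent_span_id should resolve to a known span_id).
def span_stats(spans):
    sessions = {}
    for s in spans:
        sessions.setdefault(s["session_id"], []).append(s)
    ids = {s["span_id"] for s in spans}
    orphans = [s for s in spans
               if s["parent_span_id"] and s["parent_span_id"] not in ids]
    return {sid: len(v) for sid, v in sessions.items()}, orphans

spans = [
    {"session_id": "s1", "span_id": "a", "parent_span_id": None},
    {"session_id": "s1", "span_id": "b", "parent_span_id": "a"},
    {"session_id": "s1", "span_id": "c", "parent_span_id": "b"},
]
counts, orphans = span_stats(spans)
print(counts, orphans)
```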
@@ -630,53 +866,232 @@ No new graph service. No separate audit datastore. --- -# Compliance-anchor map +# Step 3a — `AI.GENERATE` extraction (BizNodes) -| Question | EU AI Act | GDPR | DSA | -|---|---|---|---| -| **Q1** Right to explanation | Art. 86 | Art. 22 | Art. 26 | -| **Q2** Bias / fairness audit | Art. 10, Art. 71 | Art. 5(1)(d), Art. 22 | Art. 26 | -| **Q3** Human oversight | Art. 14 | Art. 22 | — | -| **Q4** Reproducibility | Art. 12, Art. 13 | Art. 30 | Art. 26 | -| **Q5** Systemic-pattern audit | Art. 17, Art. 60 | Art. 35 | Art. 26 | +`build_graph.py` calls `mgr.extract_biz_nodes(session_ids)` which runs **one MERGE** statement against `agent_events`. The MERGE's USING clause invokes `AI.GENERATE` per row: -Five queries against one graph create reusable **evidence hooks** for three EU regulatory conversations. This is not a compliance certification; it is the data substrate a compliance review needs. +```sql +MERGE `

..extracted_biz_nodes` AS target +USING ( + SELECT base.span_id, base.session_id, + JSON_EXTRACT_SCALAR(entity, '$.entity_type') AS node_type, + JSON_EXTRACT_SCALAR(entity, '$.entity_value') AS node_value, + CAST(JSON_EXTRACT_SCALAR(entity, '$.confidence') AS FLOAT64) AS confidence + FROM `

..agent_events` AS base, + UNNEST(JSON_EXTRACT_ARRAY(REGEXP_REPLACE(REGEXP_REPLACE( + AI.GENERATE('Extract business entities (Product, Audience, Channel, …) from this payload. Return JSON array of {entity_type, entity_value, confidence}.\n\nPayload:\n' || payload_text, + endpoint => 'gemini-2.5-flash').result, + r'^```(?:json)?\s*',''), r'\s*```$',''))) AS entity + WHERE base.session_id IN UNNEST(@session_ids) + AND base.event_type IN ('USER_MESSAGE_RECEIVED','LLM_RESPONSE','TOOL_COMPLETED','AGENT_COMPLETED') +) AS source +ON target.biz_node_id = source.biz_node_id +WHEN MATCHED THEN UPDATE SET … +WHEN NOT MATCHED BY TARGET THEN INSERT … +WHEN NOT MATCHED BY SOURCE AND target.session_id IN UNNEST(@session_ids) THEN DELETE +``` + +**Per-session idempotent in one statement.** No streaming-buffer pitfall. `MERGE` is the SDK's chosen pattern for the BizNode write path. --- -# How fast can we ship this? +# Step 3b — `AI.GENERATE` extraction (Decisions + Candidates) -

+`mgr.extract_decision_points(session_ids)` runs a separate `AI.GENERATE` query whose prompt asks for structured decision data: -
+```text +Identify decision points in this agent payload. A decision point is where +the agent evaluated multiple candidates and selected or rejected them. +For each decision, return decision_type, description, and all candidates +with name, score (0-1), status (SELECTED or DROPPED), and rejection_rationale +(null if selected, required reason if dropped). +``` -### Setup, on a clean GCP project -```bash -cd examples/decision_lineage_demo -./setup.sh +The Python side parses each row's JSON, builds `DecisionPoint` + `Candidate` records, then `store_decision_points(...)`: + +
+ +
+
1
+Dedupe in Python +
_dedupe_rows_by_key last-wins on decision_id / candidate_id
+
+ +
+
2
+DELETE FROM ... WHERE session_id IN (...) +
Per-session reset — guards against re-running
+
+ +
+
3
+load_table_from_json +
Load job to managed storage; visible to the just-issued DELETE (no streaming-buffer trap)
+
+ +
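The last-wins dedupe is small enough to sketch; a hypothetical stand-in for the `_dedupe_rows_by_key` behavior described above:

```python
# Last-wins dedupe: later rows with the same key overwrite earlier ones,
# matching the "last-wins on decision_id / candidate_id" behavior.
def dedupe_rows_by_key(rows, key):
    seen = {}
    for row in rows:          # later occurrences overwrite earlier ones
        seen[row[key]] = row
    return list(seen.values())

rows = [
    {"candidate_id": "c1", "score": 0.50},
    {"candidate_id": "c2", "score": 0.70},
    {"candidate_id": "c1", "score": 0.62},   # re-extracted row wins
]
print(dedupe_rows_by_key(rows, "candidate_id"))
```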
+ +--- + +# Step 3c — SQL-only edge derivation + +After the node tables exist, three pure-SQL `INSERT INTO` statements build the edges (no `AI.GENERATE`): + +```sql +-- Evaluated edge (BizNode ↔ TechNode lineage) +INSERT INTO context_cross_links (link_id, span_id, biz_node_id, link_type, …) +SELECT b.biz_node_id, b.span_id, b.biz_node_id, 'EVALUATED', … +FROM extracted_biz_nodes b WHERE b.session_id IN UNNEST(@session_ids); + +-- MadeDecision edge (TechNode → DecisionPoint) +INSERT INTO made_decision_edges (edge_id, span_id, decision_id, …) +SELECT CONCAT(span_id, ':MADE_DECISION:', decision_id), span_id, decision_id, … +FROM decision_points WHERE session_id IN UNNEST(@session_ids); + +-- CandidateEdge (DecisionPoint → CandidateNode, with edge_type) +INSERT INTO candidate_edges (edge_id, decision_id, candidate_id, edge_type, …) +SELECT …, CASE c.status WHEN 'SELECTED' THEN 'SELECTED_CANDIDATE' + ELSE 'DROPPED_CANDIDATE' END, … +FROM candidates c WHERE c.session_id IN UNNEST(@session_ids); ``` -| Phase | Wall time | -|---|---| -| Tooling + APIs + venv | ~30s | -| Live agent (6 sessions) | 3-7 min | -| `AI.GENERATE` extraction | 30-90s | -| Rich-graph projection | ~10s | -| Render BQ Studio queries | <1s | +Three tables, one statement each, all per-session-scoped. The `edge_type` on `candidate_edges` is what powers Block 4's `WHERE ce.edge_type = 'DROPPED_CANDIDATE'` filter. 
+ +--- + + + +# Step 4 — The 7 SDK backing tables + +| Table | Key | Written by | What it holds | +|---|---|---|---| +| `agent_events` | `span_id` | BQ AA Plugin | Plugin-recorded spans (162 rows = 6 sessions × 27 spans) | +| `extracted_biz_nodes` | `biz_node_id` | SDK MERGE | Business entities from trace text | +| `context_cross_links` | `link_id` | SDK DML | Span ↔ BizNode references | +| `decision_points` | `decision_id` | SDK load job | Extracted decisions from `LLM_RESPONSE` text | +| `candidates` | `candidate_id` | SDK load job | Extracted options per decision: selected or dropped | +| `made_decision_edges` | `edge_id` | SDK DML | Span → Decision lineage | +| `candidate_edges` | `edge_id` | SDK DML | Decision → Candidate, with selected / dropped edge type | + +Every backing table has `row_count == distinct_keys` after the SDK fix landed in PR #99 — the property-graph KEY contract holds end-to-end. + +--- + + +# Step 5 — The rich-graph projection layer + +`build_rich_graph.py` adds **demo-only** SQL projections so the BigQuery Studio Explorer reads in business language. **No new AI calls** in this step: + +| Derived table | Built from | Purpose | +|---|---|---| +| `campaign_runs` | `run_agent.py` writes directly | One row per agent invocation, joined to campaign metadata in `campaigns.py` | +| `rich_agent_steps` | `agent_events` (DISTINCT) | Deduped span projection — one row per `span_id` (TechNode is multi-event per span by design) | +| `rich_decision_types` | `decision_points` | Normalised decision categories (`audience-selection`, `budget-allocation`, …) | +| `rich_candidate_statuses` | `candidates` | Distinct `OptionOutcome` values (SELECTED, DROPPED) | +| `rich_rejection_reasons` | `candidates` | Distinct rejection-rationale strings as first-class nodes | + +Plus five edge projections wiring the new labels back to SDK-owned facts. Schema lives in `rich_property_graph.gql.tpl` and is deterministic across reruns. 
+ +--- + + + +# Step 6 — The 8 node labels (what each node means) + + + + + + + + + + + + + + + +
| Label | Source table | KEY | What it represents |
|---|---|---|---|
| `CampaignRun` | `campaign_runs` | `session_id` | One agent invocation against one brief — the unit of audit |
| `AgentStep` | `rich_agent_steps` | `span_id` | One step the agent took (LLM call, tool invocation, HITL check) |
| `MediaEntity` | `extracted_biz_nodes` | `biz_node_id` | An audience, channel, creative, budget unit, or campaign — extracted from trace text |
| `PlanningDecision` | `decision_points` | `decision_id` | A moment the agent committed to a choice between options |
| `DecisionOption` | `candidates` | `candidate_id` | One option weighed at a planning decision (selected or dropped) |
| `DecisionCategory` | `rich_decision_types` | `decision_type_id` | Normalised decision category (audience selection, budget allocation, …) |
| `OptionOutcome` | `rich_candidate_statuses` | `status_id` | SELECTED or DROPPED — the outcome of weighing |
| `DropReason` | `rich_rejection_reasons` | `reason_id` | A distinct rejection rationale (deduplicated across the portfolio) |
+ +--- + +# Step 7 — The 9 edge labels (how the graph connects) + + + + + + + + + + + + + + + + +
| Edge label | Source → Destination | Reads as |
|---|---|---|
| `CampaignActivity` | CampaignRun → AgentStep | "this run produced this step" |
| `NextStep` | AgentStep → AgentStep (`parent_span_id` → `span_id`) | "this step caused that step" (causal chain) |
| `ConsideredEntity` | AgentStep → MediaEntity | "this step touched this entity" |
| `DecidedAt` | AgentStep → PlanningDecision | "this step is where the decision committed" |
| `CampaignDecision` | CampaignRun → PlanningDecision | "this run made this decision" |
| `InCategory` | PlanningDecision → DecisionCategory | "this decision is an audience-selection / budget-allocation / …" |
| `WeighedOption` | PlanningDecision → DecisionOption | "this decision considered this option" |
| `HasOutcome` | DecisionOption → OptionOutcome | "this option was selected / dropped" |
| `RejectedBecause` | DecisionOption → DropReason | "this option was dropped for this reason" |
+ +
+Read top to bottom in plain English: "this run produced this step → which decided at this planning decision → which weighed this option → which has this outcome → which was rejected because of this reason." Five edges, one query, full audit trail.
-
+--- + +# Step 8 — How a GQL query actually traverses + +The visualization GQL (the demo's Block 2): + +```sql +GRAPH `

..rich_agent_context_graph` +MATCH p = (cr:CampaignRun)-[:CampaignDecision]->(dp:PlanningDecision) + -[:WeighedOption]->(opt:DecisionOption)-[:HasOutcome]->(st:OptionOutcome) +WHERE cr.session_id = '' +RETURN p; +``` -### Cost per setup run -- 6 live `gemini-2.5-pro` invocations -- 2 `AI.GENERATE` extraction queries -- A few hundred BQ rows + one property graph +What BigQuery does, table by table: -**Order-of-cents for the verified demo run.** Queries against the prebuilt graph are lightweight. +

    +
1. `campaign_runs` → bind `cr`, filtered by `session_id` (1 row).
2. `rich_campaign_decision_edges` → join to `decision_points` on `session_id` (~5 decisions per session).
3. `candidate_edges` → join to `candidates` on `decision_id` (~3 options per decision).
4. `rich_candidate_status_edges` → join to `rich_candidate_statuses` on `status` (1 row per option).
5. Return paths bound to `p`. BigQuery Studio renders these as one fan-out per decision in the Graph tab.
-### From zero to leadership demo -**Under 10 minutes**, fully reproducible, on any GCP project. +**5 fan-outs of 3 options each = 15 paths visualized for one session, ~97 paths across all 6 campaigns.** The traversal is deterministic, the rendering is interactive. + +--- + +# Step 9 — The temporal dimension + +
+ +
+ +### What's timestamped +- `agent_events.timestamp` — when each plugin-recorded span happened (microsecond precision) +- `agent_events.latency_ms` — `{total_ms, time_to_first_token_ms}` +- `context_cross_links.created_at` — when cross-links were derived +- `candidate_edges.created_at` — when decision edges were derived +- `decision_points.span_id` → traces back to the source span's timestamp + +
+ +
+ +### What you can ask over time +- *"Across the last 30 days, which decision categories saw rising rejection rates?"* +- *"Show me the per-day count of decisions made with confidence < 0.7."* (oversight trend) +- *"Compare this quarter's rejection-rationale distribution vs last quarter's."* (drift) +- *"Latency p50/p95 for LLM_RESPONSE spans on the audience-selection decision over the past week."* + +Each query is a join of `made_decision_edges` ⋈ `agent_events` plus a `WHERE timestamp BETWEEN ...` filter — no schema changes needed.
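Once the joined rows are fetched, the trend questions reduce to ordinary group-by-day logic. A sketch over illustrative rows (not the warehouse query itself):

```python
from collections import Counter
from datetime import datetime

# Per-day count of decisions committed below the oversight threshold —
# the "oversight trend" question, over already-joined rows.
def low_confidence_by_day(rows, threshold=0.7):
    days = Counter(
        datetime.fromisoformat(r["timestamp"]).date().isoformat()
        for r in rows if r["score"] < threshold
    )
    return dict(sorted(days.items()))

rows = [
    {"timestamp": "2026-03-01T09:00:00", "score": 0.65},
    {"timestamp": "2026-03-01T11:30:00", "score": 0.92},
    {"timestamp": "2026-03-02T10:15:00", "score": 0.55},
]
print(low_confidence_by_day(rows))
```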
@@ -685,10 +1100,9 @@ cd examples/decision_lineage_demo --- +
Close
-
Section 4 of 4
- -# Where to go next +# Where to start --- @@ -704,21 +1118,27 @@ cd examples/decision_lineage_demo ### The demo bundle `examples/decision_lineage_demo/` -### What ships in the bundle -- `setup.sh` / `reset.sh` — one-shot bootstrap + tear-down +### One-shot setup (5–10 min) +```bash +cd examples/decision_lineage_demo +./setup.sh +``` + +### What ships +- `setup.sh` / `reset.sh` — bootstrap + tear-down - `agent/` + `campaigns.py` — real ADK agent + 6 briefs - `run_agent.py` + `build_graph.py` + `build_rich_graph.py` -- `bq_studio_queries.gql` (rendered) — six GQL blocks -- `property_graph.gql` (rendered) — recreate-from-tables DDL +- `bq_studio_queries.gql` — six paste-and-run GQL blocks +- `property_graph.gql` + `rich_property_graph.gql` — DDL templates
-### Documentation +### Documentation that ships with the bundle - [`README.md`](README.md) — orientation - [`SETUP_NEW_PROJECT.md`](SETUP_NEW_PROJECT.md) — clean-project reproduction -- [`DEMO_NARRATION.md`](DEMO_NARRATION.md) — 5-min leadership talk track +- [`DEMO_NARRATION.md`](DEMO_NARRATION.md) — 5-min talk track - [`BQ_STUDIO_WALKTHROUGH.md`](BQ_STUDIO_WALKTHROUGH.md) — click-by-click in BQ Studio - [`DEMO_QUESTIONS.md`](DEMO_QUESTIONS.md) — the 5 EU questions with verified GQL - [`DATA_LINEAGE.md`](DATA_LINEAGE.md) — how the 7 tables produce the graph @@ -733,34 +1153,23 @@ Apache 2.0 --- -# Anticipated questions - -| Q | Short answer | -|---|---| -| **How long to deploy?** | Plugin integration, SDK call, committed DDL. **Days, not quarters**, for a pilot. | -| **Other agents we haven't instrumented?** | Same plugin, same plug-in point. Schema doesn't care which agent wrote the spans. | -| **Cost?** | Two `AI.GENERATE` calls per build + standard BQ query cost. **Order-of-cents for this verified demo run.** | -| **What if `AI.GENERATE` misses a decision?** | Build script reports per-session counts. Re-run extraction without re-running the agent. Talk track is count-agnostic by design. | -| **Does this expose PII?** | Stores trace-derived rationale text. PII handling follows the existing `agent_events` retention and access policy. | -| **Who operates this?** | The team that owns the agent platform. Plugin write path + SDK read path both already on-call. | - ---- - # Q&A -## From agent behavior to auditable evidence +## Decision Lineage with BigQuery Context Graphs diff --git a/examples/decision_lineage_demo/SLIDES.pptx b/examples/decision_lineage_demo/SLIDES.pptx index d3e76d2f..e4852e73 100644 Binary files a/examples/decision_lineage_demo/SLIDES.pptx and b/examples/decision_lineage_demo/SLIDES.pptx differ