Commit c0bc97b

demo: add richer decision-lineage graph dataset (#101)

* demo: add richer decision-lineage graph dataset
* review: harden and rename rich decision-lineage graph

1 parent 172207d commit c0bc97b

16 files changed (988 additions, 224 deletions)

examples/decision_lineage_demo/.gitignore

Lines changed: 1 addition & 0 deletions

@@ -2,4 +2,5 @@
 .venv/
 bq_studio_queries.gql
 property_graph.gql
+rich_property_graph.gql
 __pycache__/

examples/decision_lineage_demo/BQ_STUDIO_WALKTHROUGH.md

Lines changed: 38 additions & 25 deletions
@@ -21,7 +21,9 @@ Setup:
 2. Calls `mgr.build_context_graph(...)` across every session in
 `agent_events` (two `AI.GENERATE` calls — biz nodes, then
 decisions, ~30-90s).
-3. Renders `bq_studio_queries.gql` next to this file with your
+3. Builds `rich_agent_context_graph`, a SQL-only presentation graph
+over the canonical SDK tables.
+4. Renders `bq_studio_queries.gql` next to this file with your
 project / dataset / first-session id inlined.

 The demo also assumes:
@@ -39,41 +41,47 @@ The demo also assumes:
 1. In a browser, open
 `https://console.cloud.google.com/bigquery?project=<YOUR_PROJECT_ID>`.
 2. In the **Explorer** pane on the left, expand the
-`decision_lineage_demo` dataset.
-3. You should see the property graph **`agent_context_graph`** in
-the dataset listing alongside seven backing tables
-(`agent_events`, `extracted_biz_nodes`, `context_cross_links`,
-`decision_points`, `candidates`, `made_decision_edges`,
-`candidate_edges`).
+`decision_lineage_rich_demo` dataset.
+3. You should see the property graph **`rich_agent_context_graph`** in
+the dataset listing. The canonical **`agent_context_graph`** and
+seven SDK backing tables are also present. The richer graph adds
+ads-domain labels for campaign runs, agent steps, media entities,
+planning decisions, decision options, option outcomes, and drop
+reasons.

 > *Optional:* click the property graph itself — Studio shows the
 > schema in the details pane.

 ## Step 1 — Confirm the SDK populated the graph (~45s)

-Run blocks 1a → 1e from `bq_studio_queries.gql` to confirm the
+Run blocks 1a → 1i from `bq_studio_queries.gql` to confirm the
 extraction landed on every layer of the graph.

 1. **+ Compose new query** in BigQuery Studio.
-2. Paste **block 1a** (TechNode count). **Run.** Expect a TechNode
+2. Paste **block 1a** (CampaignRun count). **Run.** Expect 6
+campaign runs by default.
+3. Paste **block 1b** (AgentStep count). **Run.** Expect an AgentStep
 count in the low hundreds — the live agent runs produced this
 many spans, deterministic given how many sessions ran.
-3. Paste **block 1b** (BizNode count). **Run.** Expect a positive
+4. Paste **block 1c** (MediaEntity count). **Run.** Expect a positive
 number; exact count varies with `AI.GENERATE`.
-4. Paste **block 1c** (DecisionPoint count). **Run.** Expect a
+5. Paste **block 1d** (PlanningDecision count). **Run.** Expect a
 non-zero count — typically several per session, so a few dozen
 total across 6 sessions. Some variance.
-5. Paste **block 1d** (CandidateNode count). **Run.** Expect
-roughly 3 × the DecisionPoint count.
-6. Paste **block 1e** (DecisionPoints per session). **Run.** Expect
+6. Paste **block 1e** (DecisionOption count). **Run.** Expect
+roughly 3 × the PlanningDecision count.
+7. Paste **blocks 1f → 1h**. **Run.** These count the richer
+presentation labels: decision categories, option outcomes, and
+drop-reason nodes.
+8. Paste **block 1i** (PlanningDecisions per session). **Run.** Expect
 one row per session that actually produced decisions, with the
 per-session decision count in the right column. Note the session
 ids — Block 2 visualizes one of them (the first by default; you
 can swap in any other).
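
The numbered blocks referenced above live in `bq_studio_queries.gql`, which this diff does not reproduce. As a rough illustration only, a count block in the style of 1a, written in BigQuery's GQL syntax, might look like the sketch below; the graph and dataset names come from the walkthrough, while the project placeholder and the exact label spelling are assumptions.

```sql
-- Illustrative sketch, not the shipped block; bq_studio_queries.gql
-- is authoritative. <YOUR_PROJECT_ID> is a placeholder.
GRAPH `<YOUR_PROJECT_ID>.decision_lineage_rich_demo.rich_agent_context_graph`
MATCH (r:CampaignRun)
RETURN COUNT(*) AS campaign_run_count
```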

 > Talk track: "Setup ran the live agent against six campaign briefs
 > — every span the BQ AA Plugin wrote is here, the SDK extracted
-> decisions and candidates from each session, and Block 1e shows
+> decisions and candidates from each session, and Block 1i shows
 > how many decisions came out of each campaign run."

 ## Step 2 — Visualize ONE session (~75s)
@@ -88,12 +96,17 @@ an interactive diagram.
 4. After the query completes, click the **Graph** tab in the
 results pane (next to **Table**, **JSON**, **Execution details**).
 5. The result renders as one fan-out per extracted decision in the
-chosen session — TechNode → DecisionPoint → CandidateNodes.
-6. Click any **CandidateNode**. The right-hand properties pane
+chosen session — CampaignRun → PlanningDecision → DecisionOption
+→ OptionOutcome.
+6. Click any **DecisionOption**. The right-hand properties pane
 shows `name`, `score`, `status`, and `rejection_rationale`.
 Click a DROPPED candidate to surface the rationale.
-7. To visualize a different campaign run instead, swap the
-`__SESSION_ID__` value in Block 2 with any other id Block 1e
+7. Optionally run the second Block 2 query to show the
+DecisionOption → DropReason fan-out. It keeps the main graph
+readable but gives reviewers a visible "why rejected" node when
+needed.
+8. To visualize a different campaign run instead, swap the
+`__SESSION_ID__` value in Block 2 with any other id Block 1i
 returned and re-run.
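
Block 2 itself is not shown in this diff. A hedged sketch of the shape it plausibly takes: match the full fan-out path for one session and return it as JSON so Studio's Graph tab can render it. The edge labels (`MADE_DECISION`, `HAS_OPTION`, `HAS_OUTCOME`) and the `session_id` property are illustrative guesses, not confirmed by the walkthrough.

```sql
-- Illustrative sketch; edge labels and the session_id property are
-- assumptions. The real Block 2 ships in bq_studio_queries.gql.
GRAPH `<YOUR_PROJECT_ID>.decision_lineage_rich_demo.rich_agent_context_graph`
MATCH p = (r:CampaignRun)-[:MADE_DECISION]->(:PlanningDecision)
          -[:HAS_OPTION]->(:DecisionOption)-[:HAS_OUTCOME]->(:OptionOutcome)
WHERE r.session_id = '__SESSION_ID__'
RETURN TO_JSON(p) AS path
```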

 > Talk track: "Each fan-out is one decision. Selected and dropped
@@ -110,7 +123,7 @@ The same GQL the SDK ships as `mgr.get_eu_audit_gql(session_id=...)`.
 3. Studio shows a tabular result: one row per (decision, candidate)
 the SDK extracted for the session, with `decision_type`,
 `candidate_name`, `candidate_score`, `candidate_status`,
-`rejection_rationale`, and the linked TechNode span info.
+`rejection_rationale`, and the linked AgentStep span info.
 4. Walk the room through the table from top to bottom. Roughly
 two-thirds of rows are DROPPED (the agent's prompt asked for
 one SELECTED + two DROPPED per decision, so each decision row
@@ -147,7 +160,7 @@ explanation, bias audit, human-oversight trigger, decision
 reproducibility, and systemic-pattern audit.

 1. Open the **Gemini / Conversational Analytics** panel in BQ
-Studio with the `decision_lineage_demo` dataset selected.
+Studio with the `decision_lineage_rich_demo` dataset selected.
 2. Open [`DEMO_QUESTIONS.md`](DEMO_QUESTIONS.md) in a side editor.
 3. For each Q1-Q5, paste the natural-language prompt into BQ CA,
 run, then paste the explicit GQL into a query tab and compare.
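
The explicit GQL for Q1-Q5 lives in `DEMO_QUESTIONS.md` and is not reproduced in this diff. As a hedged sketch of what a per-session tabular audit query in that style could look like (every label, edge name, and property below is illustrative, not the SDK's actual output from `mgr.get_eu_audit_gql`):

```sql
-- Illustrative only; mgr.get_eu_audit_gql(session_id=...) emits the
-- real audit GQL. Labels, edges, and properties are assumptions.
GRAPH `<YOUR_PROJECT_ID>.decision_lineage_rich_demo.rich_agent_context_graph`
MATCH (d:PlanningDecision)-[:HAS_OPTION]->(o:DecisionOption)
WHERE d.session_id = '__SESSION_ID__'
RETURN d.decision_type,
       o.name AS candidate_name,
       o.score AS candidate_score,
       o.status AS candidate_status,
       o.rejection_rationale
```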
@@ -158,11 +171,11 @@ reproducibility, and systemic-pattern audit.

 | Symptom | Fix |
 |---|---|
-| Block 1c (decisions) returns 0 | `AI.GENERATE` returned no decisions; re-run `./.venv/bin/python3 build_graph.py` (no need to re-run the agent) |
-| Block 1c is much smaller than 5 × session count | Some sessions had few extracted decisions; either accept it (the talk track is count-agnostic) or re-run `build_graph.py` |
+| Block 1d (decisions) returns 0 | `AI.GENERATE` returned no decisions; re-run `./.venv/bin/python3 build_graph.py` and then `./.venv/bin/python3 build_rich_graph.py` (no need to re-run the agent) |
+| Block 1d is much smaller than 5 × session count | Some sessions had few extracted decisions; either accept it (the talk track is count-agnostic) or re-run `build_graph.py` and then `build_rich_graph.py` |
 | Block 2 shows no Graph tab | Make sure the result rendered; the tab takes a second to appear after the first run |
-| Block 2 has nothing to draw | The `__SESSION_ID__` baked in by `setup.sh` may not have produced decisions — swap with a different session id from Block 1e |
-| "Property graph not found" | `setup.sh` did not finish; check that `build_graph.py` printed `property_graph_created True` |
+| Block 2 has nothing to draw | The `__SESSION_ID__` baked in by `setup.sh` may not have produced decisions — swap with a different session id from Block 1i |
+| "Property graph not found" | `setup.sh` did not finish; check that `build_graph.py` printed `property_graph_created True`, then rerun `./.venv/bin/python3 build_rich_graph.py` |
 | `run_agent.py` fails with permission errors | Missing `roles/aiplatform.user` on the running identity (the live agent calls Vertex AI) |
 | `agent_events` table is empty after `run_agent.py` | The plugin may not have flushed; re-run `./.venv/bin/python3 run_agent.py` and verify the script's "Flushing BQ AA Plugin..." line completes without warnings |