
Unboxing the AI Agent: Decision Lineage with BigQuery Context Graphs #98

@haiyuan-eng-google

Description

Demo Title: Unboxing the AI Agent: Decision Lineage with BigQuery Context Graphs
Target Duration: 3-4 minutes
Focus: BigQuery Property Graphs, GQL, and Decision Semantics for Yahoo's buyer-side agent.

Phase 1: The Graph Schema in BigQuery Studio (45 seconds)

(Matches step a: Visually showing the shape of the context graph)

  • Action: Open BigQuery Studio and display the visual representation or DDL of the 6-Pillar Property Graph.
  • Talk Track: "Today, enterprise AI agents are often black boxes. We are going to show how we track every decision a Yahoo buyer-side agent makes using a Context Graph. Here in BigQuery Studio, you can see our graph schema natively models Decision Semantics. Instead of flat event logs, the graph explicitly connects a DecisionPoint to the Candidate options evaluated, assigning a Score and SelectionOutcome. Crucially, it models the RejectionReason, ContextSnapshot, and ConstraintApplication. This forms a complete, EU-compliant audit trail."
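  • Schema Sketch: To make the shape of the graph concrete, a minimal DDL sketch covering two of the six pillars could look like the following. All table, key, and property names here are illustrative assumptions, not the demo's actual schema:

    ```sql
    -- Hypothetical sketch: table and property names are illustrative only.
    CREATE PROPERTY GRAPH `project.dataset.agent_context_graph`
      NODE TABLES (
        decision_points AS DecisionPoint
          KEY (decision_id)
          LABEL DecisionPoint PROPERTIES (decision_id, session_id, decided_at),
        candidates AS CandidateNode
          KEY (candidate_id)
          LABEL CandidateNode
          PROPERTIES (candidate_id, name, score, rejection_rationale)
      )
      EDGE TABLES (
        candidate_edges AS CandidateEdge
          KEY (decision_id, candidate_id)
          SOURCE KEY (decision_id) REFERENCES DecisionPoint (decision_id)
          DESTINATION KEY (candidate_id) REFERENCES CandidateNode (candidate_id)
          LABEL CandidateEdge PROPERTIES (edge_type)
      );
    ```

    The remaining pillars (Score, SelectionOutcome, RejectionReason, ContextSnapshot, ConstraintApplication) would be modeled the same way, as additional node labels and edge tables over their backing tables.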

Phase 2: Agent Execution & Background Ingestion (30 seconds)

(Matches step b: Running the BQAA plugin on a simulated Yahoo agent)

  • Action: Execute a simulated ADCP (Ad Context Protocol) campaign brief in the terminal or UI (e.g., "Launch a $50k brand awareness campaign targeting Gen Z").
  • Talk Track: "We are now triggering a simulated Yahoo buyer-side agent to negotiate an ad buy. As the agent plans the campaign, the BigQuery Agent Analytics (BQAA) plugin asynchronously streams the execution telemetry into BigQuery. Behind the scenes, we use AI.GENERATE with a strict JSON output_schema to automatically extract the agent's unstructured reasoning into typed decision nodes and edges, populating the context graph in real-time."
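  • Extraction Sketch: A hedged sketch of that AI.GENERATE extraction pass might look like this. The telemetry table, connection name, prompt, and schema fields are assumptions for illustration, not the plugin's actual implementation:

    ```sql
    -- Hypothetical extraction pass: table, connection, and field names are assumed.
    SELECT
      t.session_id,
      AI.GENERATE(
        ('Extract the decision described in this agent reasoning trace: ',
         t.raw_reasoning),
        connection_id => 'us.gemini_connection',  -- assumed Vertex AI connection
        output_schema => 'candidate STRING, score FLOAT64, '
                      || 'selection_outcome STRING, rejection_reason STRING'
      ) AS decision  -- typed STRUCT matching the output_schema
    FROM `project.dataset.agent_telemetry` AS t;
    ```

    The typed decision struct can then be written into the node and edge tables that back the property graph, so every extracted decision lands in the graph with a stable shape.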

Phase 3: Conversational Analytics & GQL Reasoning (90 seconds)

(Matches step c: Asking business questions and retrieving decision traces)

  • Action: Open Conversational Analytics (CA) in BigQuery or the interactive query explorer. Ask business questions to trace the decision.
  • Question 1: "Which ad candidates were considered for the Nike campaign, and why were the others rejected?"
  • Visual/Query: Show the system translating this into a native GQL traversal query:
    SELECT candidate, score, rejection_rationale
    FROM GRAPH_TABLE(
      `project.dataset.agent_context_graph`
      MATCH (dp:DecisionPoint)-[ce:CandidateEdge]->(cand:CandidateNode)
      WHERE dp.session_id = 'sess-nike-summer'
        AND ce.edge_type = 'DROPPED_CANDIDATE'
      COLUMNS (cand.name AS candidate, cand.score, cand.rejection_rationale)
    )
  • Talk Track: "Using GQL, we can traverse the graph to see exactly why the agent acted the way it did. The results show that while the agent selected 'Athletes 18-35', it explicitly dropped 'Fitness Enthusiasts' due to 'Budget constraints'. That tells us precisely why the agent bought against one audience instead of another."

Phase 4: Operations Research & Optimization (Bonus) (45 seconds)

(Matches step d: Where can we do better in this bidding process?)

  • Action: Pivot to an aggregated dashboard or analytical query summarizing RejectionReason nodes across thousands of sessions.
  • Talk Track: "Because these decisions are modeled as a graph, we can move from reactive auditing to proactive Operations Research. If we ask, 'Where can we do better in this bidding process?', we can aggregate the RejectionReason nodes across the graph. If we see a massive cluster of dropped candidates due to 'floor price exceeded', it alerts our yield operations team that our pricing may be misaligned with current market demand. This closes the loop—allowing us to continuously optimize agent actions and update our business constraints based on real-world outcomes."
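  • Aggregation Sketch: One way to sketch that rejection-reason rollup is a GQL traversal aggregated in standard SQL. Property and graph element names here are illustrative assumptions:

    ```sql
    -- Hypothetical rollup: counts dropped candidates per rejection reason
    -- across all sessions. Element and property names are assumed.
    SELECT rejection_reason, COUNT(*) AS dropped_candidates
    FROM GRAPH_TABLE(
      `project.dataset.agent_context_graph`
      MATCH (dp:DecisionPoint)-[ce:CandidateEdge]->(cand:CandidateNode)
      WHERE ce.edge_type = 'DROPPED_CANDIDATE'
      COLUMNS (cand.rejection_rationale AS rejection_reason)
    )
    GROUP BY rejection_reason
    ORDER BY dropped_candidates DESC;
    ```

    A spike in a single reason such as 'floor price exceeded' at the top of this rollup is the signal that would route to the yield operations team.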
