Commit 1185e9b
feat: Add agent improvement cycle demo (#61)
* feat: Add agent improvement cycle demo and quality report enhancements
Add --app-name and --output-json flags to quality_report.py for filtering
sessions by agent and producing structured JSON output. Add a complete
demo showing the eval -> analyze -> improve cycle with a company info
agent that starts with intentional prompt flaws and self-corrects across
multiple cycles using LLM-as-a-judge quality evaluation.
* fix: Vertex AI auth, prompt sanitization, and orchestrator resilience
Configure improver to use Vertex AI (matching the agent), sanitize
Gemini-generated summaries to prevent Python syntax errors from
multiline comments, add retry on syntax errors, and reduce quality
report limit to avoid stale session accumulation across cycles.
* Redesign v1 prompt for dramatic quality gap, rewrite README
Make v1 actively discourage tool use ("answer from knowledge above,
deflect unknowns to HR") so 7/10 eval cases reliably fail. Redesign
eval cases to cover expenses, benefits, holidays, and date handling.
Rewrite README to emphasize that the cycle learns from real production
sessions logged via BigQueryAgentAnalyticsPlugin, not just static evals,
and that new eval cases are derived from actual field failures.
* Improve README clarity, rename env vars, add quickstart examples
* Fix path resolution for standalone SDK checkout
REPO_ROOT was 3 levels up (../../..) which assumed the demo lived
inside agent-operations/src/. In a standalone SDK checkout the demo
is at examples/agent_improvement_cycle/, so 2 levels up is correct.
Also fixes .env path in agent.py and improve_agent.py and the
quality_report.py path in run_cycle.sh.
* Use local .env inside demo directory instead of SDK root
* Remove google-adk[bigquery] extra that does not exist in ADK 1.31+
* Enable required APIs in setup, document IAM roles
* Suppress pip script-location warnings in setup
* Fix .env: add PROJECT_ID and DATASET_LOCATION, source in run_cycle.sh
* Explain how eval runs locally via ADK InMemoryRunner
* Use PROJECT_ID consistently, drop GOOGLE_CLOUD_PROJECT from .env
* Add --threshold flag for configurable unhelpful rate warning
* Address PR review: add LLM prompt validation, schema checks, retry logic
- Fix --output-json stdout corruption by writing status line to stderr
- Add --output-json help note clarifying it ignores --samples cap
- Add --app-name help note about root_agent_name requirement
- Add LLM-based prompt validation via second Gemini call comparing
original vs improved prompt for content preservation and coherence
- Add eval case schema validation (required: id, question, category,
expected_tool) with skip-and-log on malformed cases
- Replace fixed sleep 5 with retry loop (up to 6 attempts with backoff)
for BigQuery write propagation
- Remove unused UUID generation in run_eval.py
- Soften reproducibility claim to acknowledge LLM non-determinism
- Add type annotations to all functions in improve_agent.py and run_eval.py
- Update README with Guardrails section and expanded Step 3 docs
- Update DEMO_SCRIPT.md with validation talking points
- Update run_cycle.sh comments to document validation steps
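The retry loop that replaced the fixed `sleep 5` can be sketched as follows. This is a minimal illustration: `report_exists` and the delay constants stand in for the real file-existence check in run_cycle.sh.

```python
import time

def wait_for_report(report_exists, max_attempts: int = 6, base_delay: float = 1.0) -> bool:
    """Poll until the BigQuery-backed report is available, backing off between tries."""
    for attempt in range(max_attempts):
        if report_exists():
            return True
        # exponential backoff (1s, 2s, 4s, ...) instead of one fixed sleep
        time.sleep(base_delay * (2 ** attempt))
    return False
```

Polling with backoff tolerates variable BigQuery write-propagation latency without always paying the worst-case wait.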
* Replace LLM validation with golden eval gate and synthetic traffic generation
Replace the LLM-based prompt sanity check with a behavioral regression
gate: candidate prompts are tested against the full golden eval set
using a throwaway agent before being accepted.
Add Gemini-powered synthetic traffic generation (generate_traffic.py)
that produces diverse user questions each cycle, distinct from the
golden set. Failed synthetic cases are extracted and added to the
golden eval set so regressions are caught in future cycles.
Update run_cycle.sh to a 4-step flow (generate → run → evaluate →
improve), and rewrite README.md and DEMO_SCRIPT.md accordingly.
* Add --golden eval mode, fix after-improvement measurement, trim golden set to 3
- Add --golden flag to run_eval.py: runs eval cases through a throwaway
agent with LLM judge (no BQ logging) for immediate pass/fail scoring
- Fix step 5 measurement: use --golden instead of BQ query to avoid
evaluating stale sessions from previous runs
- Re-run same synthetic traffic (not new) for fair before/after comparison
- Show golden eval set growth after improvement step
- Trim golden set to 3 V1-passing cases; failures discovered by cycle
- Default traffic count: 15 -> 10
- Remove response truncation in eval output
* Fix reset mechanism, add fresh-traffic measurement, tune demo output
- Replace git-checkout reset with baseline file copies (prompts_v1.py,
eval_cases_v1.json) since src/ is gitignored by the parent repo
- Fix run_eval.py --golden to exit non-zero when cases fail
- Replace circular same-traffic re-run in Step 5 with fresh synthetic
traffic generation for honest generalization testing
- Start golden eval set at 3 V1-passing cases (grows organically)
- Tune traffic generation prompt with exact tool data to produce
answerable questions
- Add pre-flight golden eval check to run_cycle.sh
- Update DEMO_SCRIPT timings from real test runs (~5 min single cycle)
* Fix pre-flight dead code, Step 5 exit-on-fail, and results box alignment
- Pre-flight: use set +e/set -e to capture exit code (set -e made
PREFLIGHT_EXIT always 0, rendering the check dead code)
- Step 5: wrap eval in set +e so failures don't kill the script
(failures here are the "after" score, not errors)
- Results box: use dynamic width and int-format rates for proper
border alignment
- Clarify --golden flag help text: it's LLM judge mode that works
with any eval cases via --eval-cases
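The `set +e`/`set -e` capture pattern described above looks like this in a script that otherwise runs with exit-on-error (a generic sketch; `false` stands in for the real golden-eval invocation):

```shell
#!/usr/bin/env bash
set -e  # the rest of the script aborts on any error

# Temporarily disable exit-on-error so a failing pre-flight eval
# doesn't kill the script; capture its real exit code instead.
set +e
false   # stand-in for the golden eval command
PREFLIGHT_EXIT=$?
set -e

echo "pre-flight exit code: $PREFLIGHT_EXIT"
```

Without the `set +e` window, `set -e` would terminate the script at the failing command, so the exit-code variable could only ever hold 0 — the dead-code bug this commit fixes.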
* Address PR review: stale files, 0-session guard, graceful degradation
Review feedback from caohy1988:
- Delete stale report JSON before retry loop so previous runs can't
satisfy the file-existence check (run_cycle.sh Steps 3 and 5)
- Guard against 0-session quality reports in improve_agent.py to
prevent prompt rewrites based on no signal
- Align setup.sh env vars with README: honor PROJECT_ID if set
(fall back to gcloud), rename BQ_LOCATION to DATASET_LOCATION
Additional PR review feedback:
- Golden gate graceful degradation: skip improvement instead of
crashing with traceback when all 3 candidates fail golden eval.
Failed cases are still extracted into the golden set.
- Add cost notes section to README documenting per-cycle Gemini
API call growth as golden set expands
- Update Guardrails docs to reflect skip-on-failure behavior
* Restructure Step 4/5, fix BQ staleness, revert app-name hack
- Step 4: extract failed cases FIRST so regression gate validates
against full golden set (original + extracted)
- Step 5: mirror Steps 1-3 (golden eval, fresh BQ traffic, quality
report from BQ) instead of LLM judge mode
- Fix BQ staleness: increase propagation wait to 30s, add session ID
guard to detect and retry when stale Step 2 sessions are returned
- Revert --app-name parameter from run_eval.py (not needed)
- Replace "throwaway agent" wording with "local agent"
- Update DEMO_SCRIPT.md and README.md to match new flow
* Fix reset.sh to use git checkout, remove redundant baseline files
reset.sh now uses git checkout to restore prompts.py and
eval_cases.json to V1 state. Removed prompts_v1.py and
eval_cases_v1.json that were unnecessary copies. Restored
committed state to V1 so git checkout works correctly.
* Parallelize eval cases, remove duplicate regression check, suppress warnings
- Run eval cases concurrently with asyncio.gather (both run_all_cases
and run_golden_eval) for ~5-8x speedup
- Remove redundant Step 5 regression check (Step 4 already validates
the candidate against all golden cases)
- Add PYTHONWARNINGS=ignore to suppress authlib deprecation warnings
- Remove per-call -W flags (env var covers all python invocations)
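Concurrent case execution with `asyncio.gather` can be sketched like this; `run_case` here is a stand-in for the real per-case work (calling the agent, then the LLM judge):

```python
import asyncio

async def run_case(case_id: str) -> dict:
    # stand-in for one eval case: call the agent, then judge the response
    await asyncio.sleep(0.01)
    return {"id": case_id, "passed": True}

async def run_all_cases(case_ids: list[str]) -> list[dict]:
    # launch every case at once; gather preserves input order in its results
    return await asyncio.gather(*(run_case(c) for c in case_ids))

results = asyncio.run(run_all_cases(["pto", "expenses", "holidays"]))
```

Since each case is dominated by network latency rather than CPU, running them concurrently yields the roughly 5-8x speedup noted above.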
* Restore Step 5 regression check (run all golden cases PASS/FAIL)
* Fix BQ retry: validate session count before accepting quality report
* Fix dotenv override, remove duplicate Step 5 eval, update demo numbers
- Fix quality_report.py dotenv override=True that overwrote demo env
vars (DATASET_ID, TABLE_ID) with repo root .env values, causing
0-session results
- Remove duplicate regression check from Step 5 (already validated
in Step 4)
- Update DEMO_SCRIPT.md with actual demo results (40% → 90%)
* Add setup step to DEMO_SCRIPT.md
* Throttle parallel eval to 3 concurrent requests to avoid Vertex AI 429
* Revert "Throttle parallel eval to 3 concurrent requests to avoid Vertex AI 429"
This reverts commit 6591599.
* Add HttpRetryOptions for 429 retries, auto-fix pre-flight failures
- Use HttpRetryOptions(attempts=3) on all genai.Client and Gemini model
instances instead of manual retry loops
- When pre-flight golden eval fails, automatically run the improver to
fix the prompt instead of just printing an error
- Add --from-eval-results flag to improve_agent.py to build a synthetic
quality report from golden eval results
* Add reusable LoopAgent-based prompt improver module
Replace the manual retry loop in improve_agent.py with an ADK
LoopAgent that wraps a single LlmAgent with six tools:
read_quality_report, read_current_prompt, generate_candidate,
test_candidate, write_prompt, and exit_loop. The LLM decides its
own workflow — when to retry, when to exit.
The agent_improvement/ module is reusable for any ADK agent via
ImprovementConfig (agent_factory, tools, prompt_adapter, eval set).
Key changes:
- agent_improvement/: new reusable module with PromptAdapter ABC,
EvalRunner, TrafficGenerator, tool introspection, LoopAgent
- run_improvement.py: demo entry point wiring company_info_agent
- run_cycle.sh: calls run_improvement.py, handles failed improvements
- README.md, DEMO_SCRIPT.md: updated architecture and Step 4 docs
Tested: full cycle V1→V2 with 20%→100% quality on fresh traffic.
* Deduplicate agent creation, remove old improver, fix domain leaks in reusable module
- Add create_agent(prompt) factory + AGENT_TOOLS to agent/agent.py as
single source of truth for agent creation
- Refactor eval/run_eval.py golden mode to use EvalRunner from
agent_improvement instead of duplicating judge prompt and eval logic
- Refactor run_improvement.py to import create_agent + AGENT_TOOLS
instead of duplicating the agent factory
- Delete improver/ directory (fully replaced by agent_improvement/)
- Fix hardcoded "lookup_company_policy" in extract_failed_cases()
- Fix domain-specific "policy question" / "defers to HR" in JUDGE_PROMPT
- Add judge_prompt field to ImprovementConfig for customization
* Replace inline Python JSON parsing with jq in run_cycle.sh
7 of 10 python3 -c calls were just reading JSON fields. jq is cleaner,
faster, and doesn't need a Python interpreter for simple field access.
The 3 remaining python3 -c calls import CURRENT_VERSION from the
agent's prompts.py module, which requires Python.
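For example, reading one field from the quality report both ways (the file path and field name here are illustrative):

```shell
# create a stand-in report (the real one is written by quality_report.py)
echo '{"total_sessions": 12, "unhelpful_rate": 0.4}' > /tmp/report_demo.json

# before: spawning a Python interpreter just to read one field
rate_py=$(python3 -c 'import json; print(json.load(open("/tmp/report_demo.json"))["unhelpful_rate"])')

# after: jq does the same with a single lightweight call
rate_jq=$(jq -r '.unhelpful_rate' /tmp/report_demo.json)

echo "python: $rate_py  jq: $rate_jq"
```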
* Remove unused TrafficGenerator ABC and GenericTrafficGenerator
Never wired up — the demo uses eval/generate_traffic.py (domain-specific)
which produces better traffic because it knows the actual policy data.
* Make improvement cycle configurable via --agent-config flag
Add per-agent improvement_config.py convention with build_config(),
get_root_agent(), get_bq_plugin(), and metadata constants. All
orchestration scripts (run_cycle.sh, run_improvement.py, run_eval.py)
accept --agent-config to point at any agent. Default falls back to
demo's company_info_agent.
Extend PythonFilePromptAdapter with prompt_variable and version_variable
params to support multi-prompt agents (e.g. knowledge_supervisor).
* Revert "Make improvement cycle configurable via --agent-config flag"
This reverts commit 1244b80.
* Make improvement cycle configurable via JSON config
Replace Python config module with declarative improve/config.json.
Shell script reads metadata with jq and version with grep (zero
Python calls for config). Python scripts use config_loader.py which
reads JSON, imports agent module, and builds ImprovementConfig.
Extend PythonFilePromptAdapter with prompt_variable and
version_variable params for multi-prompt agents.
* Add Vertex AI prompt storage, optimizer, and teacher model ground truth
- Store prompts in Vertex AI Prompt Registry instead of local files.
Agent reads prompt from VERTEX_PROMPT_ID env var on startup, falls
back to local prompts.py if not set.
- Add VertexPromptAdapter for cloud prompt read/write/versioning.
Delete + recreate on reset for clean v1 state.
- Add Vertex AI Prompt Optimizer integration (target_response mode).
Teacher agent generates synthetic ground truth for failed sessions
using the same tools, then feeds (question, bad_response, ground_truth)
triples to the optimizer.
- Fix async bug: generate_candidate and _generate_via_vertex_optimizer
are now async, using await instead of run_until_complete().
- Move config.json from improve/ to project root, remove improve/ dir.
Update config_loader agent_root from grandparent to parent.
- setup.sh now creates Vertex AI prompt automatically (step 6/6).
run_cycle.sh auto-creates if vertex_prompt_id is empty.
- Update README and DEMO_SCRIPT to reflect Vertex AI architecture,
config.json, optimizer, teacher model, and future/next steps.
* Fix Vertex AI optimizer: tool-use directive, dependency, and cleanup
- Fix dependency: vertexai>=1.148.0 -> google-cloud-aiplatform>=1.148.0
(standalone vertexai PyPI package caps at 1.71.1)
- setup.sh auto-removes conflicting standalone vertexai package
- Add tool-use directive to optimizer input and re-append after
optimizer strips it (fixes 40%->100% instead of 40%->70%)
- Save ground truth to reports/ground_truth_latest.json for inspection
- Remove redundant pre-flight re-run (already validated in test_candidate)
- Clear ImportError when google-cloud-aiplatform not installed
- Add [improvement] extra to pyproject.toml
* Mirror Vertex AI prompt updates to local prompts.py for git tracking
* Classify extracted eval cases and display prompt in cycle output
- Infer category and expected_tool for extracted eval cases using
keyword matching against known tool topics (pto, sick_leave,
remote_work, expenses, benefits, holidays, date_handling)
- Fix existing 6 extracted cases: unknown -> proper categories
- Display full prompt text at start and end of improvement cycle
- Show version, char count, and inspect command for Vertex AI prompts
* Add show_prompt.sh, use it in run_cycle.sh, reset prompts.py on reset
* Add configurable teacher_model_id and update docs for accuracy
Add teacher_model_id config field so the teacher agent can use a
stronger model (e.g. gemini-2.5-pro) for ground truth generation.
Defaults to null (same model as target agent). Update README with
teacher agent explanation, fix percentages (40%->100%), document
prompt_variable/version_variable/show_prompt.sh/local mirroring.
Fix DEMO_SCRIPT dependency reference and result percentages.
* Improve cycle UX: progress output, timeouts, and ground truth display
- Add 120s timeout per eval case so one stuck request doesn't hang the run
- Show agent vs teacher comparison after ground truth generation
- Print status before optimizer call and before golden eval test
- Fill BQ wait times with useful info (questions, golden set, ground truth count)
- step_end now prints descriptive label and elapsed time
- Display total wall time in minutes:seconds at end of run
- Use SDK native HttpRetryOptions for 429 handling on optimizer
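The per-case timeout can be implemented with `asyncio.wait_for`. A minimal sketch (timeouts shortened for illustration; `eval_case` stands in for the real agent call):

```python
import asyncio

async def eval_case(case: dict) -> dict:
    # stand-in for a real eval case; "delay" simulates a stuck request
    await asyncio.sleep(case.get("delay", 0))
    return {"id": case["id"], "passed": True}

async def eval_with_timeout(case: dict, timeout: float = 120.0) -> dict:
    # one stuck request fails its own case instead of hanging the whole run
    try:
        return await asyncio.wait_for(eval_case(case), timeout=timeout)
    except asyncio.TimeoutError:
        return {"id": case["id"], "passed": False, "error": "timeout"}

ok = asyncio.run(eval_with_timeout({"id": "fast", "delay": 0}, timeout=1.0))
stuck = asyncio.run(eval_with_timeout({"id": "slow", "delay": 10}, timeout=0.05))
```

A timed-out case is recorded as a failure rather than an error, so the run completes and the score still reflects it.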
* Fix malformed JSON retry in traffic generator, update cycle timing
- Retry up to 3 times when Gemini returns invalid JSON in traffic gen
- Update DEMO_SCRIPT timing: 5min->6min per cycle, 15min->18min for 3
- User-facing run_cycle.sh tweaks from prior session
* Fix setup hanging on stale .env: always recreate, increase timeouts
- setup.sh now always recreates .env with current project (was skipping
if file existed, leaving stale VERTEX_PROMPT_ID from other projects)
- Increase Vertex AI prompt create/delete timeout from 90s to 300s
- Add progress prints in setup_vertex.py so user sees where it's stuck
* Fix setup_vertex.py hanging: defer imports, add progress prints
Move vertexai imports from module-level to inside main() so the user
sees "Loading Vertex AI SDK..." before the slow import. On fresh
projects, the SDK initialization can take minutes.
* Address PR review: tool-use judge, traffic dedup, cost docs, timeouts
- Judge prompt now checks expected_tool: fails responses that claim
specifics without evidence the tool was called (catches hallucination)
- Traffic generator deduplicates against golden eval set and retries
if fewer than count/2 cases generated
- README Cost section documents golden eval set growth and its impact
- setup_vertex.py: defer imports, add progress prints, increase timeout
to 300s for fresh project provisioning
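The dedup-and-retry behavior reduces to a small amount of logic. A sketch, assuming the real generator calls Gemini where `generate` is used here:

```python
def dedupe_traffic(candidates: list[str], golden_cases: list[dict]) -> list[str]:
    # drop generated questions that duplicate the golden eval set (or each other)
    golden = {c["question"].strip().lower() for c in golden_cases}
    seen: set[str] = set()
    fresh: list[str] = []
    for q in candidates:
        key = q.strip().lower()
        if key not in golden and key not in seen:
            seen.add(key)
            fresh.append(q)
    return fresh

def generate_traffic(generate, golden_cases, count: int, max_tries: int = 3) -> list[str]:
    # retry generation if dedup leaves fewer than half the requested questions
    traffic: list[str] = []
    for _ in range(max_tries):
        traffic = dedupe_traffic(generate(count), golden_cases)
        if len(traffic) >= count // 2:
            break
    return traffic
```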
* feat: Add overview image and links to Vertex AI docs
* Update README: add cycle diagram, fix config defaults, improve docs
- Add ASCII visualization diagram showing the full improvement cycle flow
- Add quick-links navigation bar at the top
- Fix config table defaults (prompt_storage -> python_file, use_vertex_optimizer -> false)
- Add missing config fields (vertex_project, vertex_location)
- Document pre-flight check, per-case timeouts, traffic deduplication
- Add gcloud command for granting IAM roles
- Highlight hero moment as blockquote
* feat: Add overview image and links to Vertex AI docs (upd)
* feat: README(upd)
* Fix PR review issues: tool-call evidence, freshness guard, domain leaks
HIGH:
- eval_runner: capture actual tool-call events (function_call parts),
give judge objective data instead of asking it to guess from text
- run_cycle.sh: surface failing cases and version before auto-fixing
in pre-flight, show V_old -> V_new after fix
MEDIUM:
- run_cycle.sh: use actual traffic count (after dedup) for --limit
instead of requested count, preventing stale session pollution
- run_cycle.sh: stricter freshness guard -- zero overlap with Step 3
session IDs, not just != check
- improver_agent: remove hardcoded HR-domain keywords from
_classify_question, derive categories from tool names/docstrings;
remove hardcoded lookup_company_policy from optimizer re-append
LOW:
- Consistent vertex_location from config.json in run_cycle.sh,
show_prompt.sh (was hardcoded us-central1 or using DATASET_LOCATION)
- Document _state single-process assumption in improver_agent.py
- Add drift warning in VertexPromptAdapter.read_prompt() when local
mirror version differs from Vertex AI
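The stricter freshness guard reduces to a set-intersection check rather than a simple inequality. A sketch:

```python
def report_is_fresh(report_session_ids: list[str], previous_session_ids: list[str]) -> bool:
    # strict guard: reject the report if ANY session overlaps the earlier
    # step's sessions, not merely when the two ID lists are identical
    return not (set(report_session_ids) & set(previous_session_ids))
```

The old `!=` check could pass a report that was mostly stale as long as one session differed; zero-overlap closes that gap.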
* Remove remaining domain leaks and location hardcode from reusable module
- Teacher prompt: "contact HR" -> "defer the user elsewhere" (generic)
- Optimizer tool_use_directive: "policy information/question" -> generic
- Optimizer Client: hardcoded "us-central1" -> config.vertex_location
- Add vertex_location field to ImprovementConfig, pass through loader
* Fix README markdown links
* Suppress INFO log spam in run_cycle.sh via LOGLEVEL env var
* Improve _classify_question: word matching, plural normalization, scoring
Replace substring matching with word-boundary splitting, basic suffix
stripping (-s, -ing, -ly, -ed), stop-word filtering, and score-based
tool selection. Fixes (unknown) classification for PTO, expense,
remote-work, and date questions.
* Suppress authlib deprecation warnings in all Python entry points
Add warnings.filterwarnings('ignore') before imports in run_eval.py,
run_improvement.py, and quality_report.py. Use python3 -W ignore via
$PY variable in run_cycle.sh for belt-and-suspenders coverage.
* Add progress indicator for Vertex AI Prompt Optimizer, suppress genai warnings
The optimizer is a server-side job that takes 2-4 minutes. Previously
it blocked silently with no output. Now prints elapsed time every 15s
via asyncio.to_thread + progress task. Also suppresses genai SDK
"non-text parts" logger warnings across all entry points.
* img update
* Align README step labels with run_cycle.sh output
Drop the 6th "REPEAT" pseudo-step and match step names to what
the script actually prints (e.g. GENERATE SYNTHETIC TRAFFIC).
* Filter authlib deprecation warning from stderr in run_cycle.sh
The warning comes from google-adk's transitive authlib dependency
and bypasses Python warning filters when a newer authlib version is
installed in ~/.local/lib. Shell-level stderr filter strips the
four lines of noise.
* Address PR review: golden set tool-call conflict, session verification, traffic failures
High:
- Set expected_tool to "unknown" for 3 committed baseline cases.
V1 answers from inline knowledge without calling tools, so the
tool-call evidence judge was failing them on every pre-flight.
- Add prominent WARNING banner when pre-flight auto-improve runs.
Medium:
- Traffic-mode eval now fails if any case has empty session_id or
ERROR response, since those cases never reach BigQuery.
- Step 3/5 save expected session IDs from eval results and verify
the quality report covers the same sessions.
Low:
- Add -ies plural normalization to _word_forms (policies -> policy).
- Document all four judge prompt placeholders in config.py.
- Fix README link to quality_report.py (scripts/ -> ../../scripts/).
- Remove trailing blank line at EOF in run_cycle.sh.
* Fix README: config defaults, missing fields, teacher prompt, broken sentence
- prompt_storage default: vertex -> python_file (matches code)
- use_vertex_optimizer default: true -> false (matches code)
- Add vertex_project and vertex_location to config table
- Teacher prompt: "contact HR" -> "defer the user elsewhere" (matches code)
- Fix incomplete sentence in The Agent section
* Fix setup_vertex.py: use typed config objects for Vertex AI prompt API
prompts.delete() and prompts.create() expect DeletePromptConfig and
CreatePromptConfig objects, not plain dicts. The dict caused
"'dict' object has no attribute 'timeout'" on delete.
* Display golden eval set after starting prompt in run_cycle.sh
* Add table of contents to agent improvement cycle README
* Remove bash 3.2-incompatible stderr filter, fix classifier tie-breaking
Remove the case/esac stderr filter inside process substitution that
broke on macOS default bash. Add lexicographic tie-breaking to
_classify_question so tool-order no longer affects results.
---------
Co-authored-by: Haiyuan Cao <haiyuan@google.com>

1 parent 7c34289 · commit 1185e9b
27 files changed: 4754 additions & 9 deletions
File tree:
- examples/agent_improvement_cycle
  - agent_improvement
  - agent
  - eval
  - scripts