
Experiment/optimize zhenxin s f 65nm ptm #6

Open
phuocphn wants to merge 70 commits into master from experiment/optimize-Zhenxin_S_F_65nmPTM

Conversation

@phuocphn

@phuocphn phuocphn commented Sep 9, 2025

Add Zhenxin_S_FC with the 65nm PTM technology node as a new benchmark circuit.

This circuit is obtained from

Zhao, Zhenxin, and Lihong Zhang. "Deep reinforcement learning for analog circuit sizing." 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2020.


Summary by CodeRabbit

  • New Features

    • Added three new NGSpice-backed RL environments, including Zhenxin_S_FC with single-goal support.
    • Introduced ready-to-run training scripts for each environment.
    • Added configurations and circuit netlists to enable out-of-the-box simulation and training.
  • Refactor

    • Improved NGSpice integration with templated netlist handling, safer path management, and automatic cleanup for more reliable runs.
    • Replaced prints with structured logging and rotating log files.
  • Documentation

    • Added examples, configuration files, and experiment notes to illustrate usage and results.
  • Chores

    • Updated .gitignore to exclude generated *.out files.

phuocphn and others added 27 commits September 1, 2025 14:14
- AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run15
…M_run15

Run/optimize zhenxin s fc 65nm ptm run15
Docstrings generation was requested by @phuocphn.

* #3 (comment)

The following files were modified:

* `autockt/envs/ngspice_zhenxin_s_fc.py`
* `examples/Zhenxin_S_FC/action_normalizer.py`
* `examples/reward_calculation.py`
📝 Add docstrings to `run/optimize-Zhenxin_S_FC_65nmPTM_run15`
@coderabbitai

coderabbitai Bot commented Sep 9, 2025

Walkthrough

Adds three new Gym environments and their ngspice evaluation backends, introduces netlists, YAML configs, and a 65nm model. Updates the ngspice wrapper for templating and cleanup. Adds training scripts, evaluation registration/logging, spec generation logging, examples, and experiment artifacts. Minor .gitignore update.
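The templating-and-cleanup flow mentioned here can be sketched roughly as follows; the template text, the `render_design` helper, and the folder naming are illustrative assumptions, not the wrapper's actual API:

```python
# Sketch: render a parameterized .cir template into a fresh design folder,
# then remove the folder after the simulation results have been consumed.
import os
import shutil
import tempfile

from jinja2 import Template

# Hypothetical miniature netlist template (the real ones live under
# eval_engines/ngspice/ngspice_inputs/netlist/).
NETLIST_TEMPLATE = """\
* example sweep point
.param w_m12={{ w_m12 }}u cc={{ cc }}p
wrdata {{ design_path }}/ac.csv v(Vout)
"""

def render_design(params: dict) -> str:
    """Render the netlist into a new temporary design folder; return its path."""
    design_path = tempfile.mkdtemp(prefix="design_")
    text = Template(NETLIST_TEMPLATE).render(design_path=design_path, **params)
    with open(os.path.join(design_path, "netlist.cir"), "w") as f:
        f.write(text)
    return design_path

folder = render_design({"w_m12": 4, "cc": 10})
with open(os.path.join(folder, "netlist.cir")) as f:
    rendered = f.read()
# ... run ngspice against folder/netlist.cir, parse ac.csv/dc.csv ...
shutil.rmtree(folder)  # post-simulation cleanup of the generated design folder
```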

Changes

Cohort / File(s) Summary
VCS/Ignore
.gitignore
Adds *.out ignore under Streamlit section; retains .streamlit/secrets.toml.
Core Ngspice Wrapper
eval_engines/ngspice/ngspice_wrapper.py
Adds netlist templating (Jinja2), path validation, refined substitutions, safer wrdata paths, improved naming, and cleanup of generated design folders post-simulation.
RL Environments (Gym)
autockt/envs/ngspice_ledro_d_fc.py, autockt/envs/ngspice_ledro_d_fc45.py, autockt/envs/ngspice_zhenxin_s_fc.py
New YAML-driven, multi-goal environments integrating ngspice simulators; define action/observation spaces, normalization, reward, and update loops. Include Ordered YAML loaders; two use ActionNormalizer.
Ngspice Eval Engines (Translators)
eval_engines/ngspice/LEDRO_D_FC.py, eval_engines/ngspice/LEDRO_D_FC45.py, eval_engines/ngspice/Zhenxin_S_FC.py
New classes to parse AC/DC outputs, compute gain/UGBW/PHM/ibias, find crossings, and generate netlists/designs.
Netlists (Circuit Templates)
eval_engines/ngspice/ngspice_inputs/netlist/ledro_d_fc.cir, .../ledro_d_fc45.cir, .../Zhenxin_S_FC.cir
Adds parameterized SPICE circuit templates with control blocks to write ac.csv/dc.csv; use templating placeholders.
YAML Configs
eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml, .../ledro_d_fc45.yaml, .../zhenxin_s_fc.yaml
Introduces parameter ranges, normalization factors, target spec ranges, and paths for each circuit family.
SPICE Models
eval_engines/ngspice/ngspice_inputs/spice_models/65nm_bulk.txt
Adds PTM 65nm bulk NMOS/PMOS model definitions.
Training Scripts
autockt/train_ledro.py, autockt/train_ledro45.py, autockt/train_zhenxin_s_fc_65nm.py
New Ray RLlib PPO training entrypoints for each environment; configs, optional restore branches, checkpointing.
Evaluation Orchestration
autockt/eval.py
Registers ledro_d_fc env; replaces prints with Loguru debugging; adds rotating file logger.
Spec Generation
autockt/gen_specs.py
Logs generated spec ranges to a timestamped log file; extends main to generate specs for three new YAMLs.
Examples — LEDRO 7nm
examples/LEDRO_D_FC/ledro_d_fc.cir, examples/LEDRO_D_FC/convert.py
Adds example netlist and a Jinja2 converter script to render with sample parameters.
Examples — LEDRO 45nm
examples/LEDRO_D_FC_45/ledro_d_fc45.cir, examples/LEDRO_D_FC_45/action_normalizer.py
Adds example netlist and ActionNormalizer demo for 25-dim action space.
Examples — Zhenxin S FC
examples/Zhenxin_S_FC/Zhenxin_S_FC.cir, examples/Zhenxin_S_FC/action_normalizer.py, examples/Zhenxin_S_FC/extract_perf.py
Adds example netlist, ActionNormalizer demo, and a standalone performance extractor.
Examples — Reward
examples/reward_calculation.py
Provides standalone lookup and reward computation demo for specs.
Experiments — Configs/Notes
experiments/.../params.json, experiments/.../README.md
Adds multiple experiment configuration JSONs and brief README notes capturing runs and outcomes.
Experiments — Error Logs
experiments/.../error_*.txt
Adds Ray/RLlib traceback logs showing failures due to missing ac.csv during ngspice result parsing.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User as Trainer (RLlib/PPO)
  participant Env as Gym Env (LEDRO/Zhenxin)
  participant Eval as Eval Engine (NgspiceWrapper subclass)
  participant Spice as ngspice
  participant FS as Filesystem (ac.csv, dc.csv)

  rect rgb(245,250,255)
    User->>Env: reset()
    Env->>Eval: create_design_and_simulate(params)
    Eval->>Spice: run netlist (.cir)
    Spice-->>FS: write ac.csv, dc.csv
    Eval->>FS: read ac.csv, dc.csv
    Eval-->>Env: specs {gain, ugbw, phm, ibias}
    Env-->>User: obs0 (norm specs | goal | params)
  end

  loop each step
    User->>Env: step(action)
    Env->>Env: map action -> params (normalize/linspace)
    Env->>Eval: create_design_and_simulate(params)
    Eval->>Spice: simulate
    Spice-->>FS: ac.csv, dc.csv
    Eval->>FS: parse outputs
    Eval-->>Env: current specs
    Env->>Env: compute reward, done
    Env-->>User: next_obs, reward, done, info
  end

  note over Eval,FS: On completion, temp design folder is cleaned up.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Pre-merge checks (2 passed, 1 warning)

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 32.56%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title Check ✅ Passed — The title "Experiment/optimize zhenxin s f 65nm ptm" references the core work of adding and optimizing the Zhenxin_S_FC benchmark circuit targeting the 65 nm PTM technology node, aligning with the primary change in the pull request.
  • Description Check ✅ Passed — The description states that the pull request adds Zhenxin_S_FC with the 65 nm PTM node as a new benchmark circuit and cites the original ISCAS 2020 reference, directly relating to the changeset.

Poem

I twitch my ears at circuits’ song,
Netlists bloom where bytes belong.
YAML maps and goals align,
Spices simmer, gains refine.
Rewards hop in, logs hop out—
A rabbit debugs without a doubt.
Onward to bandwidth, squeak and shout! 🐇⚡️

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.

✨ Finishing Touches
  • 📝 Generate Docstrings

🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch experiment/optimize-Zhenxin_S_F_65nmPTM

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 47

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-06_03-50-47.txt (1)

1-53: Guard missing ac.csv in parse_output and penalize instead of crashing
– In eval_engines/ngspice/Zhenxin_S_FC.py, import os and wrap the np.genfromtxt(ac_fname) call with an os.path.exists check (raise FileNotFoundError with a clear message if missing).
– In autockt/envs/ngspice_zhenxin_s_fc.py (reset/update step), catch FileNotFoundError from the simulation and return a large negative reward with done=True (or otherwise make the transition crash-safe) rather than allowing PPO to crash.
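Taken together, the two suggestions might look roughly like this sketch; `parse_ac`, `safe_step`, and the penalty value are illustrative stand-ins, not the repo's real signatures:

```python
# Sketch: guard the ac.csv read, and let the env penalize a failed run
# instead of propagating the crash into PPO.
import os
import tempfile

import numpy as np

def parse_ac(ac_fname: str) -> np.ndarray:
    # Fail loudly with a clear message if ngspice never wrote the file.
    if not os.path.exists(ac_fname):
        raise FileNotFoundError(
            f"ngspice did not produce {ac_fname}; the simulation likely failed"
        )
    return np.genfromtxt(ac_fname)

def safe_step(ac_fname: str, crash_penalty: float = -10.0):
    """Env-side handling: return (specs, reward, done) without raising."""
    try:
        specs = parse_ac(ac_fname)
        return specs, 0.0, False
    except FileNotFoundError:
        # Large negative reward and terminal transition instead of a crash.
        return None, crash_penalty, True

# A guaranteed-missing path for demonstration purposes.
missing = os.path.join(tempfile.mkdtemp(prefix="empty_"), "ac.csv")
specs, reward, done = safe_step(missing)
```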

autockt/eval.py (1)

167-179: Fix undefined args usage inside rollout (args is not in scope).

rollout() references args.num_val_specs and args.traj_len but only receives no_render/out. This breaks evaluation.

Apply:

- def run(args, parser):
+ def run(args, parser):
@@
-    rollout(agent, args.env, num_steps, args.out, args.no_render)
+    rollout(
+        agent,
+        args.env,
+        num_steps,
+        args.out,
+        args.no_render,
+        traj_len=args.traj_len,
+        num_val_specs=args.num_val_specs,
+    )
@@
-def rollout(agent, env_name, num_steps, out="assdf", no_render=True):
+def rollout(
+    agent,
+    env_name,
+    num_steps,
+    out="assdf",
+    no_render=True,
+    traj_len=60,
+    num_val_specs=50,
+):
@@
-            "num_valid": args.num_val_specs,
+            "num_valid": num_val_specs,
@@
-    while rollout_steps < args.num_val_specs:
+    while rollout_steps < num_val_specs:
@@
-        while not done and steps < args.traj_len:
+        while not done and steps < traj_len:
@@
-    print("Num specs reached: " + str(reached_spec) + "/" + str(args.num_val_specs))
+    print("Num specs reached: " + str(reached_spec) + "/" + str(num_val_specs))
♻️ Duplicate comments (4)
experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-05_21-45-37.txt (1)

1-53: Same ac.csv-not-found failure as above; address via the same fix.

Apply the parse_output hardening and env penalty handling suggested for the 2025-09-06 log.

experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-05_01-18-02.txt (1)

1-53: Duplicate of the missing ac.csv failure.

Same fix as proposed in the other error log: add file checks, retries, and unique design folder suffixes; capture ngspice exit/logs.
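A hedged sketch of the retry-plus-unique-suffix idea; `simulate_with_retries` and its arguments are hypothetical stand-ins for the wrapper's real entry points:

```python
# Sketch: retry a flaky simulation, using a fresh uuid-suffixed design
# folder on each attempt so parallel workers never collide.
import uuid

def simulate_with_retries(run_once, base_name: str, max_retries: int = 3):
    """run_once(folder_name) returns specs or raises FileNotFoundError."""
    last_err = None
    for _ in range(max_retries):
        folder = f"{base_name}_{uuid.uuid4().hex[:8]}"  # collision-free suffix
        try:
            return run_once(folder)
        except FileNotFoundError as err:
            last_err = err  # e.g. ac.csv missing; retry in a new folder
    raise last_err

# Demo with a stand-in simulator that fails twice, then succeeds.
calls = []
def flaky(folder):
    calls.append(folder)
    if len(calls) < 3:
        raise FileNotFoundError("ac.csv missing")
    return {"gain": 801.0}

result = simulate_with_retries(flaky, "Zhenxin_S_FC")
```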

experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-06_08-13-23.txt (1)

1-53: Duplicate of the missing ac.csv failure.

Same remediation as previously suggested.

experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-05_00-32-02.txt (1)

1-53: Duplicate of the missing ac.csv failure.

Same remediation as previously suggested.

🧹 Nitpick comments (68)
.gitignore (1)

204-204: Add intent comment for global “*.out” ignore
In .gitignore at line 204, prepend *.out with a brief comment (e.g. “Temporary experiment artifacts; safe to ignore globally”).

experiments/run_params_10_990_400_horizon200_start200/README.md (1)

1-1: Clarify metric and add reproducibility context.

Add date, commit hash, seed, and env/config to make this result reproducible.

-The result looks good, ray/tune/episode_reward_max~ 440 for horizon length of 200 -> FoM=2.2
+Run: 2025-08-30, commit <abc123>, seed=0, env=LEDRO_D_FC, horizon=200
+Result: ray/tune/episode_reward_max ≈ 440 -> FoM=2.2
experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/README.md (1)

1-2: Tighten wording and formatting.

Minor grammar/formatting tweak.

-The maximum ray/tune/episode_reward_max is just for -135.7 for the horizon length of 50
--> Thus the average FoM is : -2.71
+Max ray/tune/episode_reward_max: -135.7 (horizon=50)
+Average FoM: -2.71
experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/params.json (1)

1-18: Config looks reasonable; add seeds and framework for determinism.

Consider adding framework and seed to ease reproduction; optional.

 {
   "env": "<class 'autockt.envs.ngspice_zhenxin_s_fc.Zhenxin_S_FC'>",
   "env_config": {
     "generalize": false,
     "run_valid": false
   },
   "horizon": 50,
+  "framework": "torch",
+  "seed": 0,
   "model": {
     "fcnet_hiddens": [
       128,
       128,
       128
     ]
   },
   "num_gpus": 0,
   "num_workers": 3,
   "train_batch_size": 1200
 }
examples/LEDRO_D_FC/convert.py (1)

1-38: Use safe temp dirs and file path resolution; add main guard.

Avoid hard-coded /tmp, resolve template path relative to this file, and gate execution.

-from jinja2 import Template
+from jinja2 import Template
+from pathlib import Path
+import tempfile
@@
-with open("ledro_d_fc.cir") as f:
+tpl_path = Path(__file__).with_name("ledro_d_fc.cir")
+with tpl_path.open(encoding="utf-8") as f:
     data = f.read()
@@
-template = Template(data)
+template = Template(data)
@@
-state = dict({
+def main():
+    # Example state; adjust as needed
+    state = {
         "nA1": 7.45e-08,
@@
-        "tempc": 27,
-
-    "design_path": "/tmp"
-})
-# Render the template with a variable
-output = template.render(state)
-
-print(output)
+        "tempc": 27,
+    }
+    with tempfile.TemporaryDirectory(prefix="ledro_", dir=None) as tmpdir:
+        state["design_path"] = tmpdir
+        output = template.render(state)
+        print(output)
+
+if __name__ == "__main__":
+    main()
experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/README.md (1)

1-18: Fix broken image ref and tab-indented lists.

  • Ensure image.png exists or remove the image.
  • Replace tabs with spaces; avoid inline HTML styles for portability.
-![alt text](image.png)
+<!-- Add a valid image or remove if unavailable -->
+<!-- ![Zhenxin_S_FC run14](path/to/image.png) -->
@@
-	- **hostname**: server-2
+  - hostname: server-2
@@
-	- <span style="color:rgb(255, 192, 0)">tensorboard</span>:  `PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6`
+  - tensorboard: `PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6`
eval_engines/ngspice/ngspice_inputs/netlist/ledro_d_fc45.cir (1)

81-91: Large ‘write output.log’ block may be unnecessary.

If not parsed, drop it to reduce I/O time. Keep wrdata outputs only.

-write output.log I(V0) @m0[gm] @m0[ids] @m0[vth] vgs_nm0 vds_nm0
-+ @m1[gm] @m1[ids] @m1[vth] vgs_nm1 vds_nm1
-+ @m2[gm] @m2[ids] @m2[vth] vgs_nm2 vds_nm2 
-+ @m3[gm] @m3[ids] @m3[vth] vgs_nm3 vds_nm3
-+ @m4[gm] @m4[ids] @m4[vth] vgs_nm4 vds_nm4
-+ @m5[gm] @m5[ids] @m5[vth] vgs_nm5 vds_nm5 
-+ @m6[gm] @m6[ids] @m6[vth] vgs_nm6 vds_nm6
-+ @m7[gm] @m7[ids] @m7[vth] vgs_nm7 vds_nm7
-+ @m8[gm] @m8[ids] @m8[vth] vgs_nm8 vds_nm8
-+ @m9[gm] @m9[ids] @m9[vth] vgs_nm9 vds_nm9
-+ @m10[gm] @m10[ids] @m10[vth] vgs_nm10 vds_nm10
+* write output.log I(V0) ... (removed to reduce I/O)
eval_engines/ngspice/ngspice_inputs/spice_models/65nm_bulk.txt (1)

1-4: Add upstream license/attribution header for PTM models

PTM models require attribution; include license text or a reference to third_party/PTM with license and source URL.

I can add a third_party/PTM/ folder with LICENSE and README and prepend a short attribution header here. Want me to push that?

examples/LEDRO_D_FC_45/ledro_d_fc45.cir (1)

42-47: Optional: trim saved vectors to reduce memory footprint

Add a selective .save to avoid storing all internal vectors on large sweeps.

Example:

 .control
+save v(Voutp) v(Voutn) I(V0)
 op

Adjust as needed.

eval_engines/ngspice/ngspice_inputs/netlist/ledro_d_fc.cir (2)

42-42: Optional: add a targeted .save list

Reduce memory/time on large AC sweeps.

 .control
+save v(Voutp) v(Voutn) I(V0)
 op

1-2: Nit: file header

Add brief header (tech, supply range, licensing pointer) for future maintainers.

examples/reward_calculation.py (3)

3-8: Remove duplicate specs; the second assignment silently overrides the first.

Keeps the example deterministic and avoids confusion while importing.

Apply:

-cur_specs = np.array([7.20504133e03, 8.66115710e-04, 4.56972473e01, 2.69241239e07])
-ideal_specs = np.array([8.02000000e02, 1.68455518e-03, 6.00000000e01, 1.90104525e06])
-
 cur_specs = np.array([7.20504133e03, 8.66115710e-04, 6.56972473e01, 2.69241239e07])
 ideal_specs = np.array([8.02000000e02, 1.68455518e-03, 6.00000000e01, 1.90104525e06])

25-75: Validate dimensions against specs_id and simplify penalty bookkeeping.

Prevents silent misalignment between inputs and specs_id; minor readability improvement.

Apply:

-def reward(spec, goal_spec):
+def reward(spec, goal_spec):
@@
-    norm_specs = lookup(spec, goal_spec)
+    spec = np.asarray(spec, dtype=float)
+    goal_spec = np.asarray(goal_spec, dtype=float)
+    if spec.shape != goal_spec.shape:
+        raise ValueError("spec and goal_spec must have the same shape")
+    if len(specs_id) != spec.shape[0]:
+        raise ValueError("specs_id length must match number of specs")
+    norm_specs = lookup(spec, goal_spec)
@@
-    reward = 0
+    penalty = 0.0
     for i, rel_spec in enumerate(norm_specs):
@@
-        if specs_id[i] == "ibias_max" and rel_spec > 0:
-            reward += np.abs(rel_spec)  # /10
-        elif specs_id[i] == "gain_min" and rel_spec < 0:
-            reward += 3 * np.abs(rel_spec)  # /10
-        elif specs_id[i] != "ibias_max" and rel_spec < 0:
-            reward += np.abs(rel_spec)
-    return -reward
+        if specs_id[i] == "ibias_max" and rel_spec > 0:
+            penalty += np.abs(rel_spec)
+        elif specs_id[i] == "gain_min" and rel_spec < 0:
+            penalty += 3 * np.abs(rel_spec)
+        elif specs_id[i] != "ibias_max" and rel_spec < 0:
+            penalty += np.abs(rel_spec)
+    return -penalty

Note: env autockt/envs/ngspice_zhenxin_s_fc.py saturates small penalties to 10; this example returns a pure negative penalty. Confirm if divergence is intentional.


79-79: Avoid printing on import; gate under main.

Prevents side effects when the module is imported.

Apply:

-print(reward(cur_specs, ideal_specs))
+if __name__ == "__main__":
+    print(reward(cur_specs, ideal_specs))
examples/Zhenxin_S_FC/action_normalizer.py (2)

31-31: Replace ambiguous Unicode minus with ASCII hyphen to satisfy linters.

Prevents RUF002 warning and keeps docs consistent.

Apply:

-            action (np.ndarray): Elementwise action values in (−1, 1) to be scaled.
+            action (np.ndarray): Elementwise action values in (-1, 1) to be scaled.

105-114: Gate example prints under main to avoid side effects on import.

Keeps examples import-safe.

Apply:

-action = ActionNormalizer(
-    action_space_low=action_space_low, action_space_high=action_space_high
-).action(action_space.sample())
-print("action: ", action)
-
-for idx in [2, 2 + 3, 5 + 3, 8 + 3, 11 + 3, 14 + 3, -1, -2]:
-    action[idx] = int(action[idx])
-print("action: ", action)
-print("action: ", ",".join([str(x) for x in action]))
+if __name__ == "__main__":
+    action = ActionNormalizer(
+        action_space_low=action_space_low, action_space_high=action_space_high
+    ).action(action_space.sample())
+    print("action: ", action)
+    for idx in [2, 2 + 3, 5 + 3, 8 + 3, 11 + 3, 14 + 3, -1, -2]:
+        action[idx] = int(action[idx])
+    print("action: ", action)
+    print("action: ", ",".join([str(x) for x in action]))
eval_engines/ngspice/ngspice_wrapper.py (1)

52-55: Avoid re-reading the netlist twice; reuse the already-read string.

Minor cleanup.

Apply:

-        raw_file = open(design_netlist, "r")
-        self.tmp_lines = raw_file.readlines()
-        raw_file.close()
+        self.tmp_lines = self.netlist_str.splitlines(keepends=True)
examples/Zhenxin_S_FC/extract_perf.py (4)

76-76: Remove noisy print or guard with debug

Unconditional print will spam logs.

Apply:

-        print ("phase_fun(ugbw): ", phase_fun(ugbw))
+        if debug:
+            print("phase_at_ugbw(deg):", float(phase_fun(ugbw)))

77-83: Use canonical phase margin formula and normalize

Current piecewise is awkward. PM is typically 180 + phase_at_ugbw, normalized to [0, 180].

Apply:

-        if valid:
-            if phase_fun(ugbw) > 0:
-                return -180 + phase_fun(ugbw)
-            else:
-                return 180 + phase_fun(ugbw)
+        if valid:
+            pm = 180.0 + float(phase_fun(ugbw))
+            pm = (pm + 360.0) % 360.0
+            return 360.0 - pm if pm > 180.0 else pm

Confirm your downstream expects PM in [0, 180] deg. I can adjust to your convention if different.


85-92: Misleading parameter name; make helper static

cls suggests a classmethod. It’s an instance-free helper; make it @staticmethod.

Apply:

-    def _get_best_crossing(cls, xvec, yvec, val):
+    @staticmethod
+    def _get_best_crossing(xvec, yvec, val):
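If it helps, a self-contained static crossing helper might look like the following sketch (first sign change, linearly interpolated); the repo's actual implementation may differ:

```python
# Sketch: find the first x where yvec crosses val, via linear interpolation
# between the two bracketing samples. Class/method names are illustrative.
import numpy as np

class SpecParser:
    @staticmethod
    def _get_best_crossing(xvec, yvec, val):
        diff = np.asarray(yvec, dtype=float) - val
        signs = np.sign(diff)
        idx = np.flatnonzero(signs[:-1] * signs[1:] < 0)  # sign changes
        if idx.size == 0:
            # No crossing: fall back to the closest sample, flag as invalid.
            return xvec[int(np.argmin(np.abs(diff)))], False
        i = int(idx[0])
        # Linear interpolation inside the bracketing interval.
        frac = diff[i] / (diff[i] - diff[i + 1])
        return xvec[i] + frac * (xvec[i + 1] - xvec[i]), True

# Example: y falls through 0 between x=2 and x=3.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([10.0, 6.0, 2.0, -2.0])
xc, found = SpecParser._get_best_crossing(x, y, 0.0)
```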

24-30: Clarify units in returned spec

Here gain is in dB and ugbw is in MHz. Elsewhere (LEDRO) gain is linear and ugbw is in Hz. This will bite consumers.

Options:

  • Document units clearly in the docstring and keep keys.
  • Or encode units in keys (e.g., gain_dB, ugbw_MHz, phm_deg, ibias_A).
    Want a patch for either approach?
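A sketch of the second option, carrying units in the keys so dB/linear and Hz/MHz mix-ups surface at the key level; the helper and numbers are illustrative, not the repo's actual values:

```python
# Sketch: normalize a spec dict to unit-suffixed keys at the parser boundary.
import math

def to_unit_keyed(gain_linear: float, ugbw_hz: float,
                  phm_deg: float, ibias_a: float) -> dict:
    return {
        "gain_dB": 20.0 * math.log10(gain_linear),  # linear -> dB
        "ugbw_MHz": ugbw_hz / 1e6,                  # Hz -> MHz
        "phm_deg": phm_deg,
        "ibias_A": ibias_a,
    }

spec = to_unit_keyed(gain_linear=1000.0, ugbw_hz=2.69e7,
                     phm_deg=65.7, ibias_a=8.66e-4)
```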
eval_engines/ngspice/LEDRO_D_FC.py (7)

56-58: Unit consistency: gain is linear here (not dB)

This differs from examples/Zhenxin_S_FC/extract_perf.py where gain is in dB. Align or document to avoid reward/metric mix-ups.

I can standardize both paths (linear or dB) based on your preference.


88-92: Make _get_best_crossing a static helper

Same nit as in the example script; cls is misleading.

Apply:

-    def _get_best_crossing(cls, xvec, yvec, val):
+    @staticmethod
+    def _get_best_crossing(xvec, yvec, val):

103-111: Avoid mutating caller’s state; copy before normalization

create_design alters state in place (adds design_path, rescales fields). This can leak side effects into the training loop.

Apply:

-    def create_design(self, state, new_fname):
+    def create_design(self, state, new_fname):
+        state = dict(state)  # shallow copy to avoid caller side effects

If you rely on design_path upstream, return it separately rather than injecting into state.


104-107: Directory uniquifier: prefer uuid over PRNG

random.randint is fine, but uuid4 avoids collisions without relying on PRNG seeding.

Apply:

-        design_folder = os.path.join(self.gen_dir, new_fname) + str(
-            random.randint(0, 10000)
-        )
+        import uuid
+        design_folder = os.path.join(self.gen_dir, f"{new_fname}_{uuid.uuid4().hex[:8]}")

Note: remove the random import if unused elsewhere.


111-117: Rounding B-counts: simplify or confirm types

int(round(...)) is likely redundant; round() already returns int with no ndigits. If inputs are numpy scalars, confirm behavior.

Option:

-        state["nB1"] = int(round(state["nB1"]))
+        state["nB1"] = round(state["nB1"])

Apply similarly for nB2..nB6 if inputs are Python numbers. If numpy scalars, keep int() for safety.


131-134: Redundant close() inside context manager

with open(...) closes automatically.

Apply:

-            f.write(netlist_str)
-            f.close()
+            f.write(netlist_str)

67-86: Phase margin computation: consider canonical form

Same suggestion as in the example extractor: use PM = 180 + phase(ugbw) and normalize to [0, 180].

I can send a patch mirroring the alternative if you want to standardize both implementations.

experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/params.json (1)

1-18: Avoid committing run artifacts; if kept, add reproducibility metadata.

params.json under experiments is ephemeral. Either ignore it or enrich it with seed, git SHA, and env version to make results reproducible.

If you decide to ignore these, add rules like:

.gitignore:
+experiments/**/PPO_*/*/params.json
+experiments/**/error_*.txt
experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/PPO_LEDRO_D_FC_0_2025-08-29_14-36-17yoqm6ky6/params.json (1)

1-18: Same note on committing artifacts; also consider pinning a seed.

If this file is intended for re-runs, include a fixed random seed and a code/git hash; otherwise ignore it via .gitignore.

experiments/run_params_10_990_400_horizon200_start200/PPO_LEDRO_D_FC_0_2025-08-30_16-10-10yltfhz_o/params.json (1)

1-18: Consolidate duplicate config.

Multiple nearly identical params.json files drift easily. Prefer a single checked-in YAML/JSON template and script args to derive run-specific configs.

autockt/gen_specs.py (2)

9-9: Prefer timezone-aware, ISO-8601 timestamps.

Naive localtime hurts reproducibility. Use an explicit UTC timestamp string.

Apply:

-import datetime
+from datetime import datetime

And update the log write (see below).


99-107: DRY the repeated gen_data calls.

Replace repetition with a small loop for maintainability.

Apply:

-    CIR_YAML = "eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml"
-    gen_data(CIR_YAML, "ledro_d_fc", int(args.num_specs))
-
-    CIR_YAML = "eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc45.yaml"
-    gen_data(CIR_YAML, "ledro_d_fc45", int(args.num_specs))
-
-    CIR_YAML = "eval_engines/ngspice/ngspice_inputs/yaml_files/zhenxin_s_fc.yaml"
-    gen_data(CIR_YAML, "zhenxin_s_fc", int(args.num_specs))
+    more = [
+        ("eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml", "ledro_d_fc"),
+        ("eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc45.yaml", "ledro_d_fc45"),
+        ("eval_engines/ngspice/ngspice_inputs/yaml_files/zhenxin_s_fc.yaml", "zhenxin_s_fc"),
+    ]
+    for CIR_YAML, env in more:
+        gen_data(CIR_YAML, env, int(args.num_specs))
experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-05_03-08-18.txt (1)

1-50: Commit history hygiene: drop experiment artifacts from the repo.

These Ray error logs are build/run artifacts and will bloat the repo. Add them to .gitignore and remove from the PR.

Apply to the top-level .gitignore:

+experiments/**
+**/PPO_*/**
+**/error_*.txt
+ray_results/**
experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-04_23-19-28.txt (1)

1-50: Same root cause: make parser robust and surface context in logs.

Mirror the fail-soft handling and also log ngspice stdout/stderr to design_path/ngspice.log for post-mortem.

Add logging around the ngspice invoke (in ngspice_wrapper.run or equivalent):

- subprocess.check_call(cmd, cwd=design_folder)
+ with open(os.path.join(design_folder, "ngspice.log"), "w") as lf:
+     subprocess.run(cmd, cwd=design_folder, stdout=lf, stderr=subprocess.STDOUT, check=False, timeout=self.timeout_s)

Would you like me to wire a configurable timeout and retry with relaxed tolerances on convergence errors?

eval_engines/ngspice/ngspice_inputs/yaml_files/zhenxin_s_fc.yaml (2)

18-20: Normalization tuple: fix typo and confirm order matches sorted spec keys.

Comment typo (“normlaized”) and potential order mismatch risk. Code sorts target_specs keys alphabetically (gain_min, ibias_max, phm_min, ugbw_min). Ensure normalize follows this order.

-#Every specification output will be normlaized in the order of (gain, ibias, pm, ugbw)
+# Every specification output will be normalized in the order of (gain, ibias, pm, ugbw)

To be safe, consider making normalize a mapping keyed by spec name to avoid order coupling.
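Such a mapping could be consumed order-independently, e.g. (constants illustrative, not the repo's actual config values):

```python
# Sketch: key normalization constants by spec name, then build the vector
# in the same sorted-key order the envs use, removing the order coupling.
normalize = {"gain_min": 1000.0, "ibias_max": 1e-3,
             "phm_min": 60.0, "ugbw_min": 1e7}
target_specs = {"ugbw_min": 1.9e6, "gain_min": 802.0,
                "phm_min": 60.0, "ibias_max": 1.7e-3}

spec_ids = sorted(target_specs)              # alphabetical, as in the envs
norm_vec = [normalize[k] for k in spec_ids]  # always aligned with spec_ids
```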


1-27: YAML lint and formatting cleanups.

Too many spaces after colons and a trailing-space at Line 17.

-  w_m12:  !!python/tuple [1, 100, 100]
+  w_m12: !!python/tuple [1, 100, 100]
...
-  cc:  !!python/tuple [1, 50, 100]
+  cc: !!python/tuple [1, 50, 100]
-  
+
-  gain_min: !!python/tuple [800,1000]
+  gain_min: !!python/tuple [800, 1000]
-  phm_min: !!python/tuple [60,60.0000001]
+  phm_min: !!python/tuple [60, 60.0000001]
autockt/train_ledro.py (3)

4-6: Remove unused import.

TwoStageAmp is not used.

-from autockt.envs.ngspice_vanilla_opamp import TwoStageAmp

29-41: Simplify control flow and surface Ray version constraints.

The if True branch makes restore path dead code. Also, tune.run_experiments is deprecated in newer Ray; if you upgrade, switch to Tuner.

-if True:
-    trials = tune.run_experiments(
-        {
-            "train_7nFinFET_LEDRO_D_FC": {
-                "checkpoint_freq": 10,
-                "run": "PPO",
-                "env": LEDRO_D_FC,
-                "stop": {"episode_reward_mean": -0.02},
-                "config": config_train,
-            },
-        }
-    )
-else:
+trials = tune.run_experiments(
+    {
+        "train_7nFinFET_LEDRO_D_FC": {
+            "checkpoint_freq": 10,
+            "run": "PPO",
+            "env": LEDRO_D_FC,
+            "stop": {"episode_reward_mean": -0.02},
+            "config": config_train,
+        },
+    }
+)
+# Restore block (enable when needed)
 if False:
     print("RESTORING NOW!!!!!!")
     exit()  # do not restore for now

If targeting Ray >= 2.0, I can provide a Tuner-based rewrite.


17-25: Add reproducibility knobs (seed) and resource guards.

Set framework seeds and derive num_workers from available CPUs.

 config_train = {
     "train_batch_size": 1200,
     "horizon": 200,
     "num_gpus": 0,
     "model": {"fcnet_hiddens": [128, 128, 128]},
-    "num_workers": 6,
+    "num_workers": min(6, int(ray.cluster_resources().get("CPU", 1)) - 1),
+    "seed": 1337,
     "env_config": {"generalize": True, "run_valid": False},
 }
eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc45.yaml (2)

32-40: Normalize order vs key order: same caution as Zhenxin.

Ensure normalize matches sorted keys: gain_min, ibias_max, phm_min, ugbw_min. Current tuple matches (gain, ibias, pm, ugbw), which is OK—please confirm.


5-31: YAML lint + trailing space fix.

Clean spacing and remove trailing space at Line 31.

-  mp1:  !!python/tuple [1, 100, 100]
+  mp1: !!python/tuple [1, 100, 100]
...
-  cc:  !!python/tuple [1, 50, 100]
+  cc: !!python/tuple [1, 50, 100]
-
+
eval_engines/ngspice/ngspice_inputs/netlist/Zhenxin_S_FC.cir (3)

43-43: Voltage source orientation nit.

vss “0 VSS” is unusual; prefer “VSS VSS 0 0” or drop it if unused.

-vss 0 VSS dc=0
+VSS VSS 0 DC 0

58-60: Control: use ‘set’, ensure ASCII output, and quote paths.

Inside .control, use ‘set’ for numdgt and force ASCII to keep genfromtxt happy.

- option numdgt=7
- wrdata {{design_path}}/ac.csv v(Vout)
+ set numdgt=7
+ set filetype=ascii
+ wrdata "{{design_path}}/ac.csv" v(Vout)

Same for dc.csv.
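For reference, a minimal sketch of the downstream parse this protects (column layout assumed: wrdata emits the sweep variable followed by each requested vector, whitespace-separated; the file contents below are fabricated):

```python
import io
import numpy as np

# Simulated contents of an ASCII ac.csv produced by `wrdata ac.csv v(Vout)`:
# whitespace-separated columns, sweep variable first, then the vector.
fake_ac = io.StringIO(
    "1.000e+00 4.982e+01\n"
    "1.000e+01 4.975e+01\n"
    "1.000e+02 4.713e+01\n"
)

data = np.genfromtxt(fake_ac)  # chokes on binary rawfiles, hence `set filetype=ascii`
freq, vout = data[:, 0], data[:, 1]
assert freq.shape == vout.shape == (3,)
```

If ngspice falls back to binary output, genfromtxt returns NaNs or raises, which is why forcing ASCII matters.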


52-62: Guarantee AC runs before wrdata and handle failures.

Consider moving .ac inside .control above wrdata and check for singularities; on error, still write an empty placeholder file to unblock parser.

examples/Zhenxin_S_FC/Zhenxin_S_FC.cir (2)

3-3: Parameterize model include path.

Same portability issue as engine deck.

-.include "/home/pham/code/analog-ml/AutoCkt/eval_engines/ngspice/ngspice_inputs/spice_models/65nm_bulk.txt"
+.include "{{model_dir}}/65nm_bulk.txt"

64-70: Prefer writing AC metrics in CSV alongside raw sweep for reproducibility.

Good that you measure GBW/PM. Also emit vdb(Vout) and vp(Vout) to CSV for offline verification.

-wrdata ac.csv v(Vout)
+wrdata ac.csv v(Vout) vdb(Vout) vp(Vout)
examples/LEDRO_D_FC/ledro_d_fc.cir (2)

45-45: Also parameterize OSDI path or guard it.

Make pre_osdi optional or templated; failing to find this file aborts runs.

-pre_osdi /home/pham/shared_files/ngspice/osdilibs/bsimcmg.osdi
+* pre_osdi {{osdi_lib_path}}  ; optional, enable if available

101-106: Quote wrdata paths for safety.

Be consistent with engine decks and quote design_path.

-wrdata {{design_path}}/ac.csv v(Voutp)-v(Voutn)
+wrdata "{{design_path}}/ac.csv" v(Voutp)-v(Voutn)
...
-wrdata {{design_path}}/dc.csv i(V0)
+wrdata "{{design_path}}/dc.csv" i(V0)
autockt/eval.py (1)

221-235: Reduce debug log volume per step.

Logging full action/next_state arrays each step will bloat logs and slow rollouts. Consider logging shapes or summaries, or guard behind a flag.

Example:

-logger.debug(
-    "\n action: " + str(action) + ...
-)
+logger.debug(
+    "step={} reward={} done={} action_shape={} next_state_shape={}",
+    steps, reward, done, np.shape(action), np.shape(next_state)
+)
autockt/train_zhenxin_s_fc_65nm.py (1)

35-43: Drop the first config_train block; it’s immediately overridden.

Keep a single, authoritative config to avoid confusion.

Apply:

-# configures training of the agent with associated hyperparameters
-# See Ray documentation for details on each parameter
-config_train = {
-    # "sample_batch_size": 200,
-    "train_batch_size": 2400,
-    # "sgd_minibatch_size": 1200,
-    # "num_sgd_iter": 3,
-    # "lr":1e-3,
-    # "vf_loss_coeff": 0.5,
-    "horizon": 400,
-    # "rollout_fragment_length": 1200,
-    # "rollout_fragment_length": 200,
-    "num_gpus": 0,
-    "model": {"fcnet_hiddens": [64, 64]},
-    "num_workers": 6,
-    "env_config": {"generalize": False, "run_valid": False},
-}
autockt/train_ledro45.py (2)

33-41: Remove the first config_train block; the second overrides it.

Keeps the script clear and avoids stale params.

Apply:

-config_train = {
-    # "sample_batch_size": 200,
-    "train_batch_size": 2400,
-    # "sgd_minibatch_size": 1200,
-    # "num_sgd_iter": 3,
-    # "lr":1e-3,
-    # "vf_loss_coeff": 0.5,
-    "horizon": 400,
-    # "rollout_fragment_length": 1200,
-    # "rollout_fragment_length": 200,
-    "num_gpus": 0,
-    "model": {"fcnet_hiddens": [64, 64]},
-    "num_workers": 6,
-    "env_config": {"generalize": True, "run_valid": False},
-}

44-67: Gate train vs restore on --checkpoint_dir, not a hardcoded True.

Currently the restore path is dead code.

Apply:

-if True:
+if not args.checkpoint_dir:
@@
-else:
+else:
     print("RESTORING NOW!!!!!!")
-    exit()  # do not restore for now
     tune.run_experiments(
         {
             "restore_ppo": {
                 "run": "PPO",
                 "config": config_train,
                 "env": LEDRO_D_FC45,
                 # "restore": trials[0]._checkpoint.value},
-                "restore": args.checkpoint_dir,
+                "restore": args.checkpoint_dir,
                 "checkpoint_freq": 1,
             },
         }
     )
examples/LEDRO_D_FC_45/action_normalizer.py (2)

41-41: Replace the Unicode multiplication sign.

Use "x" instead of "×" to avoid lint noise and encoding surprises.

Apply:

-# L: Rationale: start at ~2× technology minimum to reduce short-channel effects and improve matching.
+# L: Rationale: start at ~2x technology minimum to reduce short-channel effects and improve matching.

81-89: Guard example code to avoid side effects on import.

Printing and mutation shouldn’t run when this module is imported.

Apply:

-action = ActionNormalizer(action_space_low=action_space_low, action_space_high =  action_space_high).action(action_space.sample()) # convert [-1.1] range back to normal range
-action = action.astype(object)
-
-print ("action: ", action)
-
-for idx in [2, 2+3, 5+3, 8+3, 11+3, 14+3, -1, -2]:
-    action[idx] = int(action[idx])
-
-print ("action: ", action)
+if __name__ == "__main__":
+    action = ActionNormalizer(
+        action_space_low=action_space_low,
+        action_space_high=action_space_high,
+    ).action(action_space.sample())  # convert [-1,1] back to real range
+    action = action.astype(object)
+    print("action: ", action)
+    for idx in [2, 5, 8, 11, 14, 17, -1, -2]:
+        action[idx] = int(action[idx])
+    print("action: ", action)
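For context, the affine map such a normalizer typically implements can be sketched as follows (the bounds here are illustrative, not the script's actual parameter ranges):

```python
import numpy as np

# Illustrative bounds; the real ones come from the parameter ranges in the script.
low = np.array([1.0, 0.1, 1.0])
high = np.array([100.0, 5.0, 50.0])

def denormalize(a):
    """Map an agent action in [-1, 1] back to the real parameter range."""
    return low + (a + 1.0) * 0.5 * (high - low)

def normalize(x):
    """Inverse map: real range back to [-1, 1]."""
    return 2.0 * (x - low) / (high - low) - 1.0

a = np.array([-1.0, 0.0, 1.0])
# Round trip should recover the original action exactly (up to float error).
assert np.allclose(normalize(denormalize(a)), a)
```

Guarding the demo code under `__main__` keeps this mapping importable without side effects, which is the point of the suggestion above.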
eval_engines/ngspice/LEDRO_D_FC45.py (2)

88-101: Rename cls→self or make it @staticmethod for clarity.

Method isn’t classmethod; using cls is misleading.

Apply:

-    def _get_best_crossing(cls, xvec, yvec, val):
+    def _get_best_crossing(self, xvec, yvec, val):

Alternatively add @staticmethod and drop self.
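For reference, a typical linear-interpolation crossing finder of the kind this method implements (a standalone sketch, not the repo's exact code):

```python
import numpy as np

def best_crossing(xvec, yvec, val):
    """Return the x where yvec first crosses `val`, linearly interpolated.

    Used e.g. to find the unity-gain frequency (gain crossing 0 dB) or the
    phase-margin frequency from an AC sweep.
    """
    diff = yvec - val
    sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(sign_change) == 0:
        return xvec[-1]  # no crossing: report the end of the sweep
    i = sign_change[0]
    # Interpolate between samples i and i+1.
    frac = diff[i] / (diff[i] - diff[i + 1])
    return xvec[i] + frac * (xvec[i + 1] - xvec[i])

f = np.array([1.0, 10.0, 100.0])
g = np.array([40.0, 10.0, -20.0])
zero_db = best_crossing(f, g, 0.0)  # crossing lies between 10 and 100 Hz
```

Either as a plain method taking self or as a @staticmethod, the logic is the same; only the cls naming is misleading.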


1-8: Remove unused imports.

importlib and time are not used.

Apply:

-import importlib
-import time
autockt/envs/ngspice_zhenxin_s_fc.py (1)

532-577: Hard-to-read parameter list in docstring.

The docstring lists the 11 param_names without spaces between entries, which makes the expected ordering needlessly hard to verify against the implementation.

Reformat the list for readability:

     def update(self, params_idx):
         """
         Update the circuit design using the provided parameter vector, run the simulator, and return the resulting specifications.
         
         Parameters:
             params_idx (Sequence[int|float]): Sequence of 11 parameter values (in the same order as the internal
-                param_names: ["w_m12","w_m3","w_m45","w_m67","w_m89","w_m1011","vbp1","vbp2","vbn1","vbn2","cc"].
+                param_names: ["w_m12", "w_m3", "w_m45", "w_m67", "w_m89", "w_m1011", "vbp1", "vbp2", "vbn1", "vbn2", "cc"].
                 These are treated as the parameter values passed to the simulator.
         
         Returns:
             numpy.ndarray: 1-D array of simulated specification values. The specs are taken from the simulator's
             output, sorted by specification name (ascending) before conversion to the array.
         """
autockt/envs/ngspice_ledro_d_fc45.py (1)

370-374: TensorBoard writer usage without proper initialization check.

The global writer is used but there's no verification it was successfully initialized.

Consider wrapping TensorBoard logging in a try-except or checking if writer is available:

-        writer.add_scalar('gain', self.cur_specs[0], self.env_steps)
-        writer.add_scalar('ugbw', self.cur_specs[1], self.env_steps)
-        writer.add_scalar('pm', self.cur_specs[2], self.env_steps)
-        writer.add_scalar('power', self.cur_specs[3], self.env_steps)
+        if writer is not None:
+            try:
+                writer.add_scalar('gain', self.cur_specs[0], self.env_steps)
+                writer.add_scalar('ugbw', self.cur_specs[1], self.env_steps)
+                writer.add_scalar('pm', self.cur_specs[2], self.env_steps)
+                writer.add_scalar('power', self.cur_specs[3], self.env_steps)
+            except Exception as e:
+                logger.warning(f"Failed to log to TensorBoard: {e}")
autockt/envs/ngspice_ledro_d_fc.py (9)

149-151: Avoid aliasing the same Discrete instance.

Use a list comprehension instead of multiplying a list.

-        self.action_space = spaces.Tuple(
-            [spaces.Discrete(len(self.action_meaning))] * len(self.params_id)
-        )
+        self.action_space = spaces.Tuple(
+            [spaces.Discrete(len(self.action_meaning)) for _ in self.params_id]
+        )
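The aliasing concern is easy to demonstrate with plain Python objects (illustrative; gym's Discrete spaces are ordinary objects and behave the same way):

```python
# Multiplying a single-element list repeats the *same* object reference.
class Space:
    pass

aliased = [Space()] * 3           # three references to one instance
distinct = [Space() for _ in range(3)]  # three independent instances

shared = len({id(s) for s in aliased})
independent = len({id(s) for s in distinct})
```

With shared instances, any per-space mutable state (e.g. a seeded RNG) is silently shared across all action slots.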

253-259: Use logger instead of print for step diagnostics.

Keep console clean and respect configured levels/rotation.

-            print("-" * 10)
-            print("params = ", self.cur_params_idx)
-            print("specs:", self.cur_specs)
-            print("ideal specs:", self.specs_ideal)
-            print("re:", reward)
-            print("-" * 10)
+            logger.info("-" * 10)
+            logger.info(f"params = {self.cur_params_idx}")
+            logger.info(f"specs: {self.cur_specs}")
+            logger.info(f"ideal specs: {self.specs_ideal}")
+            logger.info(f"reward: {reward}")
+            logger.info("-" * 10)
@@
-        print("***cur params idx:", self.cur_params_idx, "specs: ", self.cur_specs, " reward: ", reward)
+        logger.debug(f"cur params idx: {self.cur_params_idx} specs: {self.cur_specs} reward: {reward}")

Also applies to: 264-264


271-275: Add epsilon to normalization denominator to avoid divide-by-zero.

If any spec and its goal sum to ~0, the current code produces NaN/Inf.

-        norm_spec = (spec - goal_spec) / (goal_spec + spec)
+        denom = goal_spec + spec
+        denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)
+        norm_spec = (spec - goal_spec) / denom
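A self-contained sketch of the guarded normalization (the spec/goal values are illustrative):

```python
import numpy as np

spec = np.array([400.0, -1e-13, 5.0e-5])
goal_spec = np.array([350.0, 1e-13, 1.0e-4])

# Unguarded, the second component's denominator is exactly 0 -> Inf/NaN.
denom = goal_spec + spec
denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)
norm_spec = (spec - goal_spec) / denom

assert np.all(np.isfinite(norm_spec))
```

Clamping the denominator rather than the result keeps the sign of the error term intact for the reward.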

217-217: Remove unused local reward in reset.

It’s computed and discarded.

-        reward = self.reward(self.cur_specs, self.specs_ideal)
+        # reward computed in step()

114-117: Use direct boolean checks.

Improves readability and avoids E712.

-        if self.generalize == False:
+        if not self.generalize:
@@
-        if self.generalize == True:
+        if self.generalize:
-            if self.valid == True:
+            if self.valid:
@@
-            if self.multi_goal == False:
+            if not self.multi_goal:

Also applies to: 180-181, 193-196


133-133: Slight cleaner access to first value.

-        self.num_os = len(list(self.specs.values())[0])
+        self.num_os = len(next(iter(self.specs.values())))

41-41: Ensure logs directory exists and fix log filename.

Current sink points to fc45 and may fail if “logs/” doesn’t exist.

 logger.remove()
+
+os.makedirs("logs", exist_ok=True)
@@
-    "logs/ngspice_ledro_d_fc45.log",
+    "logs/ngspice_ledro_d_fc.log",

Also applies to: 47-47


323-329: Handle ngspice failures gracefully.

Wrap simulation to surface actionable errors to the agent (large penalty) instead of crashing the episode.

Example (outside selected range):

try:
    state, specs_dict, _ = self.sim_env.create_design_and_simulate(param_val[0])
    cur_specs = OrderedDict(sorted(specs_dict.items(), key=lambda k: k[0]))
    return np.array(list(cur_specs.values()))
except Exception as e:
    logger.error(f"Simulation failed: {e}")
    # Return a large penalty vector to steer away; adjust to your spec semantics
    return np.array([np.inf if k.endswith("_max") else -np.inf for k in self.specs_id], dtype=np.float32)

10-10: Trim unused imports; move IPython import into main.

Keeps runtime deps light for library users.

-import psutil
@@
-from multiprocessing.dummy import Pool as ThreadPool
@@
-import statistics
@@
-import itertools
@@
-import IPython

Add inside main():

import IPython  # local to demo

Also applies to: 12-12, 16-16, 19-19, 18-18

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c1f3571 and 5e6fe0a.

⛔ Files ignored due to path filters (15)
  • experiments/ledro_d_fc_7nm_run3/PPO_LEDRO_D_FC_0_2025-08-28_17-19-59h7o8x_d0/image.png is excluded by !**/*.png
  • experiments/ledro_d_fc_7nm_run3/PPO_LEDRO_D_FC_0_2025-08-28_17-19-59h7o8x_d0/params.pkl is excluded by !**/*.pkl
  • experiments/ledro_d_fc_7nm_run3/PPO_LEDRO_D_FC_0_2025-08-28_17-19-59h7o8x_d0/progress.csv is excluded by !**/*.csv
  • experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/PPO_LEDRO_D_FC_0_2025-08-29_14-36-17yoqm6ky6/params.pkl is excluded by !**/*.pkl
  • experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/PPO_LEDRO_D_FC_0_2025-08-29_14-36-17yoqm6ky6/progress.csv is excluded by !**/*.csv
  • experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/image.png is excluded by !**/*.png
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/params.pkl is excluded by !**/*.pkl
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/progress.csv is excluded by !**/*.csv
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/image.png is excluded by !**/*.png
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/params.pkl is excluded by !**/*.pkl
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/progress.csv is excluded by !**/*.csv
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/image.png is excluded by !**/*.png
  • experiments/run_params_10_990_400_horizon200_start200/PPO_LEDRO_D_FC_0_2025-08-30_16-10-10yltfhz_o/params.pkl is excluded by !**/*.pkl
  • experiments/run_params_10_990_400_horizon200_start200/PPO_LEDRO_D_FC_0_2025-08-30_16-10-10yltfhz_o/progress.csv is excluded by !**/*.csv
  • experiments/run_params_10_990_400_horizon200_start200/image.png is excluded by !**/*.png
📒 Files selected for processing (44)
  • .gitignore (1 hunks)
  • autockt/envs/ngspice_ledro_d_fc.py (1 hunks)
  • autockt/envs/ngspice_ledro_d_fc45.py (1 hunks)
  • autockt/envs/ngspice_zhenxin_s_fc.py (1 hunks)
  • autockt/eval.py (4 hunks)
  • autockt/gen_specs.py (3 hunks)
  • autockt/train_ledro.py (1 hunks)
  • autockt/train_ledro45.py (1 hunks)
  • autockt/train_zhenxin_s_fc_65nm.py (1 hunks)
  • eval_engines/ngspice/LEDRO_D_FC.py (1 hunks)
  • eval_engines/ngspice/LEDRO_D_FC45.py (1 hunks)
  • eval_engines/ngspice/Zhenxin_S_FC.py (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/netlist/Zhenxin_S_FC.cir (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/netlist/ledro_d_fc.cir (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/netlist/ledro_d_fc45.cir (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/spice_models/65nm_bulk.txt (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc45.yaml (1 hunks)
  • eval_engines/ngspice/ngspice_inputs/yaml_files/zhenxin_s_fc.yaml (1 hunks)
  • eval_engines/ngspice/ngspice_wrapper.py (5 hunks)
  • examples/LEDRO_D_FC/convert.py (1 hunks)
  • examples/LEDRO_D_FC/ledro_d_fc.cir (1 hunks)
  • examples/LEDRO_D_FC_45/action_normalizer.py (1 hunks)
  • examples/LEDRO_D_FC_45/ledro_d_fc45.cir (1 hunks)
  • examples/Zhenxin_S_FC/Zhenxin_S_FC.cir (1 hunks)
  • examples/Zhenxin_S_FC/action_normalizer.py (1 hunks)
  • examples/Zhenxin_S_FC/extract_perf.py (1 hunks)
  • examples/reward_calculation.py (1 hunks)
  • experiments/ledro_d_fc_7nm_run3/PPO_LEDRO_D_FC_0_2025-08-28_17-19-59h7o8x_d0/params.json (1 hunks)
  • experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/PPO_LEDRO_D_FC_0_2025-08-29_14-36-17yoqm6ky6/params.json (1 hunks)
  • experiments/ledro_d_fc_7nm_run4_horizon_50_range_10_400_400_start_33/README.md (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-04_23-19-28.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-05_00-32-02.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-05_01-18-02.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/error_2025-09-05_03-08-18.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/PPO_Zhenxin_S_FC_0_2025-09-04_16-10-519x116nc6/params.json (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/README.md (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-05_21-45-37.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-06_03-50-47.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-06_06-30-25.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-06_08-13-23.txt (1 hunks)
  • experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/params.json (1 hunks)
  • experiments/run_params_10_990_400_horizon200_start200/PPO_LEDRO_D_FC_0_2025-08-30_16-10-10yltfhz_o/params.json (1 hunks)
  • experiments/run_params_10_990_400_horizon200_start200/README.md (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (12)
autockt/train_ledro45.py (2)
autockt/envs/ngspice_ledro_d_fc.py (1)
  • LEDRO_D_FC (92-331)
autockt/envs/ngspice_ledro_d_fc45.py (1)
  • LEDRO_D_FC45 (132-462)
autockt/train_ledro.py (1)
autockt/envs/ngspice_ledro_d_fc.py (1)
  • LEDRO_D_FC (92-331)
autockt/train_zhenxin_s_fc_65nm.py (2)
autockt/envs/ngspice_ledro_d_fc45.py (1)
  • LEDRO_D_FC45 (132-462)
autockt/envs/ngspice_zhenxin_s_fc.py (1)
  • Zhenxin_S_FC (145-577)
examples/reward_calculation.py (1)
autockt/envs/ngspice_zhenxin_s_fc.py (2)
  • lookup (475-478)
  • reward (480-530)
eval_engines/ngspice/LEDRO_D_FC45.py (1)
eval_engines/ngspice/ngspice_wrapper.py (1)
  • NgSpiceWrapper (21-165)
eval_engines/ngspice/LEDRO_D_FC.py (1)
eval_engines/ngspice/ngspice_wrapper.py (1)
  • NgSpiceWrapper (21-165)
examples/Zhenxin_S_FC/action_normalizer.py (1)
autockt/envs/ngspice_zhenxin_s_fc.py (3)
  • ActionNormalizer (72-104)
  • action (80-91)
  • reverse_action (93-104)
autockt/envs/ngspice_zhenxin_s_fc.py (3)
eval_engines/ngspice/Zhenxin_S_FC.py (1)
  • Zhenxin_S_FC_Class (18-117)
examples/reward_calculation.py (2)
  • lookup (10-19)
  • reward (25-76)
eval_engines/ngspice/ngspice_wrapper.py (1)
  • create_design_and_simulate (121-135)
autockt/eval.py (1)
autockt/envs/ngspice_ledro_d_fc.py (1)
  • LEDRO_D_FC (92-331)
eval_engines/ngspice/Zhenxin_S_FC.py (1)
eval_engines/ngspice/ngspice_wrapper.py (1)
  • NgSpiceWrapper (21-165)
autockt/envs/ngspice_ledro_d_fc.py (3)
eval_engines/ngspice/TwoStageClass.py (1)
  • TwoStageClass (14-97)
eval_engines/ngspice/LEDRO_D_FC.py (1)
  • LEDRO_D_FC_Class (18-136)
eval_engines/ngspice/ngspice_wrapper.py (1)
  • create_design_and_simulate (121-135)
autockt/envs/ngspice_ledro_d_fc45.py (3)
eval_engines/ngspice/TwoStageClass.py (1)
  • TwoStageClass (14-97)
eval_engines/ngspice/LEDRO_D_FC45.py (1)
  • LEDRO_D_FC45_Class (18-117)
eval_engines/ngspice/ngspice_wrapper.py (1)
  • create_design_and_simulate (121-135)
🪛 Ruff (0.12.2)
examples/LEDRO_D_FC/convert.py

33-33: Probable insecure usage of temporary file or directory: "/tmp"

(S108)

eval_engines/ngspice/LEDRO_D_FC45.py

105-105: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

eval_engines/ngspice/LEDRO_D_FC.py

105-105: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


111-111: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


112-112: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


113-113: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


114-114: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


115-115: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)


116-116: Value being cast to int is already an integer

Remove unnecessary int call

(RUF046)

eval_engines/ngspice/ngspice_wrapper.py

32-32: Probable use of unsafe yaml.load. Allows instantiation of arbitrary objects. Consider yaml.safe_load.

(S506)
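One way to resolve S506 while keeping the !!python/tuple tags these configs rely on is to register only that tag on a SafeLoader subclass (a sketch; the repo currently uses yaml.load with a custom loader):

```python
import yaml

class TupleSafeLoader(yaml.SafeLoader):
    """SafeLoader that additionally understands !!python/tuple sequences."""

def construct_python_tuple(loader, node):
    return tuple(loader.construct_sequence(node))

TupleSafeLoader.add_constructor(
    "tag:yaml.org,2002:python/tuple", construct_python_tuple
)

doc = "mp1: !!python/tuple [1, 100, 100]"
params = yaml.load(doc, Loader=TupleSafeLoader)
```

This parses the tuple tags without allowing arbitrary object construction, unlike the full (unsafe) Loader.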


67-67: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


111-111: Starting a process with a shell, possible injection detected

(S605)

examples/LEDRO_D_FC_45/action_normalizer.py

41-41: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF003)

examples/Zhenxin_S_FC/action_normalizer.py

31-31: Docstring contains ambiguous (MINUS SIGN). Did you mean - (HYPHEN-MINUS)?

(RUF002)

autockt/envs/ngspice_zhenxin_s_fc.py

20-20: from eval_engines.util.core import * used; unable to detect undefined names

(F403)


22-22: Redefinition of unused os from line 17

Remove definition: os

(F811)


24-24: from eval_engines.ngspice.TwoStageClass import * used; unable to detect undefined names

(F403)


30-30: from eval_engines.ngspice.Zhenxin_S_FC import * used; unable to detect undefined names

(F403)


38-38: Redefinition of unused np from line 8

Remove definition: np

(F811)


146-146: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


189-189: Probable use of unsafe loader OrderedDictYAMLLoader with yaml.load. Allows instantiation of arbitrary objects. Consider yaml.safe_load.

(S506)


192-192: Avoid equality comparisons to False; use not self.generalize: for false checks

Replace with not self.generalize

(E712)


200-200: pickle and modules that wrap it can be unsafe when used to deserialize untrusted data, possible security issue

(S301)


205-205: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


212-212: Prefer next(iter(self.specs.values())) over single element slice

Replace with next(iter(self.specs.values()))

(RUF015)


225-225: Zhenxin_S_FC_Class may be undefined, or defined from star imports

(F405)


236-236: Local variable action_space is assigned to but never used

Remove assignment to unused variable action_space

(F841)


327-327: Avoid equality comparisons to True; use self.generalize: for truth checks

Replace with self.generalize

(E712)


328-328: Avoid equality comparisons to True; use self.valid: for truth checks

Replace with self.valid

(E712)


334-334: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


340-340: Avoid equality comparisons to False; use not self.multi_goal: for false checks

Replace with not self.multi_goal

(E712)


343-343: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


388-388: Local variable reward is assigned to but never used

Remove assignment to unused variable reward

(F841)

eval_engines/ngspice/Zhenxin_S_FC.py

105-105: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

autockt/envs/ngspice_ledro_d_fc.py

20-20: from eval_engines.util.core import * used; unable to detect undefined names

(F403)


22-22: Redefinition of unused os from line 17

Remove definition: os

(F811)


24-24: from eval_engines.ngspice.TwoStageClass import * used; unable to detect undefined names

(F403)


25-25: from eval_engines.ngspice.LEDRO_D_FC import * used; unable to detect undefined names

(F403)


93-93: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


111-111: Probable use of unsafe loader OrderedDictYAMLLoader with yaml.load. Allows instantiation of arbitrary objects. Consider yaml.safe_load.

(S506)


114-114: Avoid equality comparisons to False; use not self.generalize: for false checks

Replace with not self.generalize

(E712)


121-121: pickle and modules that wrap it can be unsafe when used to deserialize untrusted data, possible security issue

(S301)


126-126: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


133-133: Prefer next(iter(self.specs.values())) over single element slice

Replace with next(iter(self.specs.values()))

(RUF015)


145-145: LEDRO_D_FC_Class may be undefined, or defined from star imports

(F405)


180-180: Avoid equality comparisons to True; use self.generalize: for truth checks

Replace with self.generalize

(E712)


181-181: Avoid equality comparisons to True; use self.valid: for truth checks

Replace with self.valid

(E712)


187-187: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


193-193: Avoid equality comparisons to False; use not self.multi_goal: for false checks

Replace with not self.multi_goal

(E712)


196-196: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


210-212: Consider iterable unpacking instead of concatenation

Replace with iterable unpacking

(RUF005)


217-217: Local variable reward is assigned to but never used

Remove assignment to unused variable reward

(F841)

autockt/envs/ngspice_ledro_d_fc45.py

20-20: from eval_engines.util.core import * used; unable to detect undefined names

(F403)


22-22: Redefinition of unused os from line 17

Remove definition: os

(F811)


24-24: from eval_engines.ngspice.TwoStageClass import * used; unable to detect undefined names

(F403)


25-25: from eval_engines.ngspice.LEDRO_D_FC import * used; unable to detect undefined names

(F403)


26-26: from eval_engines.ngspice.LEDRO_D_FC45 import * used; unable to detect undefined names

(F403)


33-33: Redefinition of unused np from line 8

Remove definition: np

(F811)


133-133: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)


153-153: Probable use of unsafe loader OrderedDictYAMLLoader with yaml.load. Allows instantiation of arbitrary objects. Consider yaml.safe_load.

(S506)


156-156: Avoid equality comparisons to False; use not self.generalize: for false checks

Replace with not self.generalize

(E712)


163-163: pickle and modules that wrap it can be unsafe when used to deserialize untrusted data, possible security issue

(S301)


168-168: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


175-175: Prefer next(iter(self.specs.values())) over single element slice

Replace with next(iter(self.specs.values()))

(RUF015)


187-187: LEDRO_D_FC45_Class may be undefined, or defined from star imports

(F405)


197-197: Comment contains ambiguous × (MULTIPLICATION SIGN). Did you mean x (LATIN SMALL LETTER X)?

(RUF003)


268-268: Avoid equality comparisons to True; use self.generalize: for truth checks

Replace with self.generalize

(E712)


269-269: Avoid equality comparisons to True; use self.valid: for truth checks

Replace with self.valid

(E712)


275-275: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


281-281: Avoid equality comparisons to False; use not self.multi_goal: for false checks

Replace with not self.multi_goal

(E712)


284-284: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


310-310: Local variable reward is assigned to but never used

Remove assignment to unused variable reward

(F841)

🪛 YAMLlint (1.37.1)
eval_engines/ngspice/ngspice_inputs/yaml_files/zhenxin_s_fc.yaml

[warning] 6-6: too many spaces after colon

(colons)


[warning] 7-7: too many spaces after colon

(colons)


[warning] 8-8: too many spaces after colon

(colons)


[warning] 9-9: too many spaces after colon

(colons)


[warning] 10-10: too many spaces after colon

(colons)


[warning] 11-11: too many spaces after colon

(colons)


[warning] 16-16: too many spaces after colon

(colons)


[error] 17-17: trailing spaces

(trailing-spaces)


[warning] 23-23: too few spaces after comma

(commas)


[warning] 25-25: too few spaces after comma

(commas)

eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc45.yaml

[warning] 6-6: too many spaces after colon

(colons)


[warning] 7-7: too many spaces after colon

(colons)


[warning] 8-8: too many spaces after colon

(colons)


[warning] 9-9: too many spaces after colon

(colons)


[warning] 10-10: too many spaces after colon

(colons)


[warning] 11-11: too many spaces after colon

(colons)


[warning] 12-12: too many spaces after colon

(colons)


[warning] 13-13: too many spaces after colon

(colons)


[warning] 14-14: too many spaces after colon

(colons)


[warning] 15-15: too many spaces after colon

(colons)


[warning] 16-16: too many spaces after colon

(colons)


[warning] 17-17: too many spaces after colon

(colons)


[warning] 18-18: too many spaces after colon

(colons)


[warning] 19-19: too many spaces after colon

(colons)


[warning] 20-20: too many spaces after colon

(colons)


[warning] 21-21: too many spaces after colon

(colons)


[warning] 22-22: too many spaces after colon

(colons)


[warning] 23-23: too many spaces after colon

(colons)


[warning] 29-29: too many spaces after colon

(colons)


[warning] 30-30: too many spaces after colon

(colons)


[error] 31-31: trailing spaces

(trailing-spaces)


[warning] 37-37: too few spaces after comma

(commas)


[warning] 39-39: too few spaces after comma

(commas)

🪛 markdownlint-cli2 (0.17.2)
experiments/optimize-Zhenxin_S_FC_65nmPTM-run14/README.md

4-4: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


4-4: Hard tabs
Column: 1

(MD010, no-hard-tabs)


5-5: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


5-5: Hard tabs
Column: 1

(MD010, no-hard-tabs)


6-6: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


6-6: Hard tabs
Column: 1

(MD010, no-hard-tabs)


7-7: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


7-7: Hard tabs
Column: 1

(MD010, no-hard-tabs)


8-8: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


8-8: Hard tabs
Column: 1

(MD010, no-hard-tabs)


9-9: Unordered list indentation
Expected: 4; Actual: 2

(MD007, ul-indent)


9-9: Hard tabs
Column: 1

(MD010, no-hard-tabs)


10-10: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


10-10: Hard tabs
Column: 1

(MD010, no-hard-tabs)


11-11: Unordered list indentation
Expected: 4; Actual: 2

(MD007, ul-indent)


11-11: Hard tabs
Column: 1

(MD010, no-hard-tabs)


12-12: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


12-12: Hard tabs
Column: 1

(MD010, no-hard-tabs)


13-13: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


13-13: Hard tabs
Column: 1

(MD010, no-hard-tabs)


14-14: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


14-14: Hard tabs
Column: 1

(MD010, no-hard-tabs)


15-15: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


15-15: Hard tabs
Column: 1

(MD010, no-hard-tabs)


16-16: Unordered list indentation
Expected: 2; Actual: 1

(MD007, ul-indent)


16-16: Hard tabs
Column: 1

(MD010, no-hard-tabs)


17-17: Unordered list indentation
Expected: 4; Actual: 2

(MD007, ul-indent)


17-17: Hard tabs
Column: 1

(MD010, no-hard-tabs)


18-18: Unordered list indentation
Expected: 4; Actual: 2

(MD007, ul-indent)


18-18: Hard tabs
Column: 1

(MD010, no-hard-tabs)

🔇 Additional comments (8)
.gitignore (1)

203-203: Approve .streamlit/secrets.toml ignore entry; no commit history found.

eval_engines/ngspice/ngspice_inputs/netlist/ledro_d_fc45.cir (1)

94-104: Ensure AC/DC data are reliably written for downstream parsers.

You already have wrdata; also keep set wr_vecnames (present) and ascii. Consider adding an explicit check or echo on success.

experiments/ledro_d_fc_7nm_run3/PPO_LEDRO_D_FC_0_2025-08-28_17-19-59h7o8x_d0/params.json (1)

1-18: Config looks fine

Parameters are reasonable for PPO and align with the new LEDRO_D_FC environment.

experiments/optimize-Zhenxin_S_FC_65nmPTM-run15/PPO_Zhenxin_S_FC_0_2025-09-05_08-42-54nto41l8r/error_2025-09-06_06-30-25.txt (1)

1-53: Discard the proposed diffs: the patch targets a non-existent signature in NgSpiceWrapper.create_design_and_simulate and doesn’t align with the actual wrapper structure.

Likely an incorrect or invalid review comment.

autockt/eval.py (1)

64-64: Env registration looks good.

New "ledro_d_fc" registration aligns with the imported class.

autockt/train_zhenxin_s_fc_65nm.py (1)

68-77: Verify restore env matches the checkpoint’s env.

Training uses Zhenxin_S_FC, but restore uses LEDRO_D_FC45. Unless you intend cross-env restoration (likely incompatible policy shapes), switch to Zhenxin_S_FC or confirm the checkpoint env.

Suggested change:

-                "env": LEDRO_D_FC45,
+                "env": Zhenxin_S_FC,
eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml (1)

1-3: Confirm case-sensitive path correctness for database_dir.

"Autockt/..." vs project’s "autockt/..." may create a new folder or fail on case-sensitive systems.

If needed:

-database_dir: "Autockt/checkpoint/db/ledro_d_fc_ngspice"
+database_dir: "autockt/checkpoint/db/ledro_d_fc_ngspice"
eval_engines/ngspice/Zhenxin_S_FC.py (1)

103-117: Ignore create_design override—Jinja2 templating is already used in parent
The parent (ngspice_wrapper.py) imports and uses jinja2.Template to render netlists, so this override aligns with the existing workflow.

Likely an incorrect or invalid review comment.
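Since the wrapper relies on jinja2.Template, netlist rendering reduces to substituting sizing parameters into a template string. A minimal sketch of that pattern (the netlist fragment and parameter names here are hypothetical, not taken from the repo's templates):

```python
from jinja2 import Template

# Hypothetical netlist fragment; the real templates live under
# eval_engines/ngspice/ngspice_inputs/netlist/.
netlist_tpl = Template(
    "M1 out in tail 0 nmos w={{ w1 }}u l={{ l1 }}u\n"
    ".param ibias={{ ibias }}\n"
)

# Render one candidate design point into a concrete netlist string.
rendered = netlist_tpl.render(w1=2.5, l1=0.065, ibias=1e-5)
```

The rendered string can then be written to the per-design folder before invoking ngspice.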

Comment thread autockt/envs/ngspice_ledro_d_fc45.py
Comment on lines +97 to +129
"""
A YAML loader that loads mappings into ordered dictionaries.
"""

def __init__(self, *args, **kwargs):
yaml.Loader.__init__(self, *args, **kwargs)

self.add_constructor("tag:yaml.org,2002:map", type(self).construct_yaml_map)
self.add_constructor("tag:yaml.org,2002:omap", type(self).construct_yaml_map)

def construct_yaml_map(self, node):
data = OrderedDict()
yield data
value = self.construct_mapping(node)
data.update(value)

def construct_mapping(self, node, deep=False):
if isinstance(node, yaml.MappingNode):
self.flatten_mapping(node)
else:
raise yaml.constructor.ConstructorError(
None,
None,
"expected a mapping node, but found %s" % node.id,
node.start_mark,
)

mapping = OrderedDict()
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
value = self.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping

🛠️ Refactor suggestion

Code duplication: OrderedDictYAMLLoader class.

This class is duplicated in both environment files. Consider extracting it to a common utilities module.

Would you like me to help create a common utilities module to eliminate this duplication? This would improve maintainability and reduce the risk of inconsistencies.

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc45.py around lines 97 to 129 the
OrderedDictYAMLLoader class is duplicated across environment files; extract this
loader into a shared utility module (e.g., autockt/utils/yaml.py or
autockt/common/yaml_loader.py), move the class there, update both environment
files to import the loader from the new module, and remove the duplicate class
definitions; ensure the new module exposes the same class name and behavior and
run tests/linters to confirm imports and references are correct.
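If the loader is extracted, the shared module could look like this (the module path and the SafeLoader base are suggestions, not the current code):

```python
# Hypothetical shared module, e.g. autockt/utils/yaml_loader.py
from collections import OrderedDict

import yaml


class OrderedDictYAMLLoader(yaml.SafeLoader):
    """YAML loader that returns mappings as OrderedDict, built on SafeLoader."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.add_constructor("tag:yaml.org,2002:map", type(self).construct_yaml_map)
        self.add_constructor("tag:yaml.org,2002:omap", type(self).construct_yaml_map)

    def construct_yaml_map(self, node):
        # Two-phase construction so self-referencing mappings resolve correctly.
        data = OrderedDict()
        yield data
        data.update(self.construct_mapping(node))

    def construct_mapping(self, node, deep=False):
        if not isinstance(node, yaml.MappingNode):
            raise yaml.constructor.ConstructorError(
                None, None, "expected a mapping node, but found %s" % node.id,
                node.start_mark,
            )
        self.flatten_mapping(node)
        return OrderedDict(
            (self.construct_object(k, deep=deep), self.construct_object(v, deep=deep))
            for k, v in node.value
        )


config = yaml.load("b: 1\na: 2\nc:\n  y: 1\n  x: 2\n", OrderedDictYAMLLoader)
```

Both environment files would then import OrderedDictYAMLLoader from this module instead of defining their own copy.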


        self.env_steps = 0
        with open(LEDRO_D_FC45.CIR_YAML, "r") as f:
            yaml_data = yaml.load(f, OrderedDictYAMLLoader)

⚠️ Potential issue

Security concern: Using unsafe YAML loader.

Similar to the previous file, this uses an unsafe YAML loader that could execute arbitrary code.

Update OrderedDictYAMLLoader to extend SafeLoader:

-class OrderedDictYAMLLoader(yaml.Loader):
+class OrderedDictYAMLLoader(yaml.SafeLoader):
     """
     A YAML loader that loads mappings into ordered dictionaries.
     """
 
     def __init__(self, *args, **kwargs):
-        yaml.Loader.__init__(self, *args, **kwargs)
+        yaml.SafeLoader.__init__(self, *args, **kwargs)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Before:

            yaml_data = yaml.load(f, OrderedDictYAMLLoader)

After:

class OrderedDictYAMLLoader(yaml.SafeLoader):
    """
    A YAML loader that loads mappings into ordered dictionaries.
    """

    def __init__(self, *args, **kwargs):
        yaml.SafeLoader.__init__(self, *args, **kwargs)
🧰 Tools
🪛 Ruff (0.12.2)

153-153: Probable use of unsafe loader OrderedDictYAMLLoader with yaml.load. Allows instantiation of arbitrary objects. Consider yaml.safe_load.

(S506)

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc45.py around line 153, the code calls
yaml.load with OrderedDictYAMLLoader which is an unsafe loader; update the
OrderedDictYAMLLoader class to inherit from yaml.SafeLoader (or yaml.Loader
alias for safe) and ensure the yaml.load call explicitly passes
Loader=OrderedDictYAMLLoader (i.e., use yaml.load(f,
Loader=OrderedDictYAMLLoader)) so the loader runs in safe mode and prevents
arbitrary code execution.

Comment on lines +342 to +344
        for idx in [2, 2+3, 5+3, 8+3, 11+3, 14+3, -1, -2]:
            action[idx] = int(action[idx])


⚠️ Potential issue

Potential index out of bounds error.

The code assumes specific indices exist in the action array but doesn't validate the array length.

Add validation before accessing indices:

+        if len(action) < 25:
+            raise ValueError(f"Expected action array of length 25, got {len(action)}")
         for idx in [2, 2+3, 5+3, 8+3, 11+3, 14+3, -1, -2]:
             action[idx] = int(action[idx])

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc45.py around lines 342 to 344, the loop
directly indexes action at hard-coded positions which can raise IndexError if
action is shorter; before converting, validate the array length and only operate
on indices that exist (e.g., compute the index list first, filter it with 0 <=
idx < len(action) for positive/negative handling or normalize negative indices
to positive equivalents), or raise a clear ValueError if required indices are
missing; then perform int conversion on the validated indices (or wrap accesses
in try/except to surface a descriptive error).
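The validation could be factored into a small helper; the function name is illustrative, the index list mirrors the loop above, and the length-25 expectation comes from the suggested check:

```python
def cast_int_indices(action, int_indices=(2, 5, 8, 11, 14, 17, -1, -2)):
    """Cast the selected entries of `action` to int after validating bounds."""
    n = len(action)
    # Normalize negative indices to positive positions for a uniform range check.
    positions = [i if i >= 0 else n + i for i in int_indices]
    missing = [i for i in positions if not 0 <= i < n]
    if missing:
        raise ValueError(
            "action of length %d is missing required indices %s" % (n, missing)
        )
    for i in positions:
        action[i] = int(action[i])
    return action
```

A too-short action then fails with a descriptive ValueError instead of an opaque IndexError mid-rollout.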

Comment on lines +24 to +30
from eval_engines.ngspice.TwoStageClass import *

# ADD_CIRCUIT
# tip: comment un-used classes to quickly grasp errors
# from eval_engines.ngspice.LEDRO_D_FC45 import *
# from eval_engines.ngspice.LEDRO_D_FC import *
from eval_engines.ngspice.Zhenxin_S_FC import *

🛠️ Refactor suggestion

Avoid wildcard imports that may cause namespace pollution.

Multiple wildcard imports make it difficult to track dependencies and can lead to name conflicts.

Replace wildcard imports with explicit imports:

-from eval_engines.ngspice.TwoStageClass import *
-# from eval_engines.ngspice.LEDRO_D_FC45 import *
-# from eval_engines.ngspice.LEDRO_D_FC import *
-from eval_engines.ngspice.Zhenxin_S_FC import *
+from eval_engines.ngspice.TwoStageClass import TwoStageClass
+from eval_engines.ngspice.Zhenxin_S_FC import Zhenxin_S_FC_Class

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.12.2)

24-24: from eval_engines.ngspice.TwoStageClass import * used; unable to detect undefined names

(F403)


30-30: from eval_engines.ngspice.Zhenxin_S_FC import * used; unable to detect undefined names

(F403)

🤖 Prompt for AI Agents
In autockt/envs/ngspice_zhenxin_s_fc.py lines 24 to 30, the file uses several
wildcard imports (from ... import *) which pollute the module namespace and risk
name collisions; replace each wildcard import with explicit imports of the exact
symbols you need (or import the module as an alias and reference symbols via
module.name) by opening eval_engines/ngspice/TwoStageClass.py and
Zhenxin_S_FC.py, identify the classes/functions used in this file, and change
the import statements to either "from <module> import ClassA, func_b, CONST_C"
for only those names or "import <module> as <alias>" and update references
accordingly so the namespace is explicit and unambiguous.

Comment on lines +10 to +19
def lookup(spec, goal_spec):
    """
    Compute per-dimension normalized deviation between current and goal specifications.

    The function converts goal_spec to floats and returns (spec - goal_spec) / (goal_spec + spec) elementwise.
    Positive values indicate spec > goal_spec, negative values indicate spec < goal_spec. Inputs must be numeric arrays or array-like objects with compatible shapes; the result is a NumPy array of the same shape as the elementwise broadcast of the inputs.
    """
    goal_spec = [float(e) for e in goal_spec]
    norm_spec = (spec - goal_spec) / (goal_spec + spec)
    return norm_spec

🛠️ Refactor suggestion

Guard against division by zero and enforce shape compatibility in lookup.

Current formulation may divide by zero when spec == -goal per element; also lacks shape checks.

Apply:

-def lookup(spec, goal_spec):
+def lookup(spec, goal_spec):
@@
-    goal_spec = [float(e) for e in goal_spec]
-    norm_spec = (spec - goal_spec) / (goal_spec + spec)
-    return norm_spec
+    spec = np.asarray(spec, dtype=float)
+    goal_spec = np.asarray([float(e) for e in goal_spec], dtype=float)
+    if spec.shape != goal_spec.shape:
+        raise ValueError("spec and goal_spec must have the same shape")
+    denom = spec + goal_spec
+    eps = np.finfo(float).eps
+    norm_spec = np.divide(
+        spec - goal_spec,
+        denom,
+        out=np.zeros_like(denom, dtype=float),
+        where=np.abs(denom) > eps,
+    )
+    return norm_spec
📝 Committable suggestion


Suggested change
Before:

def lookup(spec, goal_spec):
    """
    Compute per-dimension normalized deviation between current and goal specifications.

    The function converts goal_spec to floats and returns (spec - goal_spec) / (goal_spec + spec) elementwise.
    Positive values indicate spec > goal_spec, negative values indicate spec < goal_spec. Inputs must be numeric arrays or array-like objects with compatible shapes; the result is a NumPy array of the same shape as the elementwise broadcast of the inputs.
    """
    goal_spec = [float(e) for e in goal_spec]
    norm_spec = (spec - goal_spec) / (goal_spec + spec)
    return norm_spec

After:

def lookup(spec, goal_spec):
    """
    Compute per-dimension normalized deviation between current and goal specifications.

    The function converts goal_spec to floats and returns (spec - goal_spec) / (goal_spec + spec) elementwise.
    Positive values indicate spec > goal_spec, negative values indicate spec < goal_spec. Inputs must be numeric arrays or array-like objects with compatible shapes; the result is a NumPy array of the same shape as the elementwise broadcast of the inputs.
    """
    spec = np.asarray(spec, dtype=float)
    goal_spec = np.asarray([float(e) for e in goal_spec], dtype=float)
    if spec.shape != goal_spec.shape:
        raise ValueError("spec and goal_spec must have the same shape")
    denom = spec + goal_spec
    eps = np.finfo(float).eps
    norm_spec = np.divide(
        spec - goal_spec,
        denom,
        out=np.zeros_like(denom, dtype=float),
        where=np.abs(denom) > eps,
    )
    return norm_spec
🤖 Prompt for AI Agents
In examples/reward_calculation.py around lines 10 to 19, the lookup function can
divide by zero when spec == -goal_spec and does not enforce compatible shapes;
convert both inputs to NumPy arrays, validate that they have
broadcast-compatible shapes (or the same shape) and raise a clear ValueError if
not, compute the denominator as (goal_spec + spec) but guard against zeros by
replacing zeros (or abs(denominator) < tiny_eps) with a small epsilon or by
using np.where to set a safe denominator and optionally log/raise if any
replacements occurred, then return the elementwise normalized deviation as
before.
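Put together, a guarded version of lookup behaves like this (a sketch of the suggestion above, relying on NumPy's `where`-masked divide):

```python
import numpy as np

def lookup(spec, goal_spec, eps=np.finfo(float).eps):
    """(spec - goal) / (spec + goal) elementwise, with a zero-denominator guard."""
    spec = np.asarray(spec, dtype=float)
    goal = np.asarray(goal_spec, dtype=float)
    if spec.shape != goal.shape:
        raise ValueError("spec and goal_spec must have the same shape")
    denom = spec + goal
    # Entries where |denom| <= eps stay at 0.0 instead of dividing by zero.
    return np.divide(
        spec - goal, denom, out=np.zeros_like(denom), where=np.abs(denom) > eps
    )

result = lookup([2.0, 1.0, 1.0], [1.0, 1.0, -1.0])
```

The third element hits the spec == -goal case and is returned as 0.0 rather than raising or producing inf.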

Comment on lines +19 to +21
        self.action_space_low = action_space_low
        self.action_space_high = action_space_high


🛠️ Refactor suggestion

Add shape/type normalization and guard against zero-width bounds.

Prevents divide-by-zero when any high == low and ensures inputs are arrays with matching shapes.

Apply:

     def __init__(self, action_space_low, action_space_high):
@@
-        self.action_space_low = action_space_low
-        self.action_space_high = action_space_high
+        self.action_space_low = np.asarray(action_space_low, dtype=float)
+        self.action_space_high = np.asarray(action_space_high, dtype=float)
+        if self.action_space_low.shape != self.action_space_high.shape:
+            raise ValueError("action_space_low/high must have the same shape")
+        if np.any(self.action_space_high <= self.action_space_low):
+            raise ValueError("Each high must be strictly greater than low")
@@
-        action = action * scale_factor + reloc_factor
+        action = np.asarray(action, dtype=float)
+        if action.shape != low.shape:
+            raise ValueError("action shape must match normalizer bounds")
+        action = action * scale_factor + reloc_factor
         action = np.clip(action, low, high)
@@
-        action = (action - reloc_factor) / scale_factor
+        action = np.asarray(action, dtype=float)
+        if action.shape != low.shape:
+            raise ValueError("action shape must match normalizer bounds")
+        action = (action - reloc_factor) / scale_factor

Optional readability tweak:

-        reloc_factor = high - scale_factor
+        reloc_factor = (high + low) / 2.0

Also applies to: 36-45, 55-65

🤖 Prompt for AI Agents
In examples/Zhenxin_S_FC/action_normalizer.py around lines 19 to 21, the class
stores action_space_low/high directly which can cause divide-by-zero when any
high == low and fails if inputs aren't arrays or shapes mismatch; convert
action_space_low/high to numpy arrays, validate matching shapes, and compute a
safe denominator by replacing zeros (where high==low) with 1.0 (or a tiny
epsilon) so normalization uses (high - low_safe); apply the same array
conversion/shape checks and zero-width-guard logic to the other affected ranges
(lines ~36-45 and ~55-65) where scaling/denormalizing is performed.
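The checks above amount to a normalizer along these lines (a sketch, not the repo's ActionNormalizer; the [-1, 1] input convention is an assumption):

```python
import numpy as np

class BoundedActionNormalizer:
    """Map actions in [-1, 1] to [low, high] and back, with validation."""

    def __init__(self, low, high):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        if self.low.shape != self.high.shape:
            raise ValueError("low and high must have the same shape")
        if np.any(self.high <= self.low):
            raise ValueError("each high must be strictly greater than low")
        # Strict inequality above guarantees a nonzero scale, so no divide-by-zero.
        self.scale = (self.high - self.low) / 2.0
        self.center = (self.high + self.low) / 2.0

    def denormalize(self, action):
        action = np.asarray(action, dtype=float)
        return np.clip(action * self.scale + self.center, self.low, self.high)

    def normalize(self, action):
        action = np.asarray(action, dtype=float)
        return (action - self.center) / self.scale

norm = BoundedActionNormalizer([0.0, -1.0], [10.0, 1.0])
denormed = norm.denormalize([0.0, 1.0])
```

Rejecting zero-width bounds in the constructor keeps both directions of the mapping well-defined.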

Comment on lines +37 to +41
        if not os.path.isfile(ac_fname) or not os.path.isfile(dc_fname):
            print("ac/dc file doesn't exist: %s" % output_path)

        ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
        dc_raw_outputs = np.genfromtxt(dc_fname, skip_header=1)

⚠️ Potential issue

Fail-fast on missing ac.csv/dc.csv instead of printing

Printing and continuing will crash on the next lines. Raise explicit errors to surface the root cause early.

Apply:

-        if not os.path.isfile(ac_fname) or not os.path.isfile(dc_fname):
-            print("ac/dc file doesn't exist: %s" % output_path)
+        if not os.path.isfile(ac_fname):
+            raise FileNotFoundError(f"Missing AC results file: {ac_fname}")
+        if not os.path.isfile(dc_fname):
+            raise FileNotFoundError(f"Missing DC results file: {dc_fname}")
📝 Committable suggestion


Suggested change
Before:

        if not os.path.isfile(ac_fname) or not os.path.isfile(dc_fname):
            print("ac/dc file doesn't exist: %s" % output_path)
        ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
        dc_raw_outputs = np.genfromtxt(dc_fname, skip_header=1)

After:

        if not os.path.isfile(ac_fname):
            raise FileNotFoundError(f"Missing AC results file: {ac_fname}")
        if not os.path.isfile(dc_fname):
            raise FileNotFoundError(f"Missing DC results file: {dc_fname}")
        ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
        dc_raw_outputs = np.genfromtxt(dc_fname, skip_header=1)
🤖 Prompt for AI Agents
In examples/Zhenxin_S_FC/extract_perf.py around lines 37 to 41, the code only
prints when ac.csv or dc.csv is missing but then continues and will crash on the
subsequent np.genfromtxt calls; change this to fail-fast by checking both file
paths and raising a clear exception (e.g., FileNotFoundError or RuntimeError)
that includes the missing filename(s) and output_path so the caller sees the
root cause immediately instead of proceeding.

        vout_real = ac_raw_outputs[:, 1]
        vout_imag = ac_raw_outputs[:, 2]
        vout = vout_real + 1j * vout_imag
        ibias = -dc_raw_outputs[1]

💡 Verification agent

🧩 Analysis chain

Make ibias extraction robust to 1D/2D shapes

Indexing dc_raw_outputs[1] is fragile. Flatten and pick a stable element.

Apply:

-        ibias = -dc_raw_outputs[1]
+        dc_vals = np.atleast_1d(dc_raw_outputs).astype(float).ravel()
+        if dc_vals.size < 2:
+            raise ValueError(f"Unexpected DC data shape from {dc_fname}: {dc_raw_outputs.shape}")
+        ibias = -float(dc_vals[1])

If you prefer column-based access, I can align this with the exact dc.csv column order from your netlist. Want me to scan the netlists for wrdata dc.csv to confirm?


🏁 Script executed:

#!/bin/bash
rg -n "dc_raw_outputs" -C5 examples/Zhenxin_S_FC/extract_perf.py

Length of output: 611


Make ibias extraction shape-agnostic
Indexing dc_raw_outputs[1] returns an entire row when the CSV has multiple rows. Flatten the data and pick the second element to ensure ibias is always a scalar:

-        ibias = -dc_raw_outputs[1]
+        dc_vals = np.atleast_1d(dc_raw_outputs).astype(float).ravel()
+        if dc_vals.size < 2:
+            raise ValueError(f"Unexpected DC data shape from {dc_fname}: {dc_raw_outputs.shape}")
+        ibias = -float(dc_vals[1])
📝 Committable suggestion


Suggested change
Before:

        ibias = -dc_raw_outputs[1]

After:

        # Ensure dc_raw_outputs is a flat array of floats, then extract the second element
        dc_vals = np.atleast_1d(dc_raw_outputs).astype(float).ravel()
        if dc_vals.size < 2:
            raise ValueError(f"Unexpected DC data shape from {dc_fname}: {dc_raw_outputs.shape}")
        ibias = -float(dc_vals[1])
🤖 Prompt for AI Agents
In examples/Zhenxin_S_FC/extract_perf.py around line 46, the assignment ibias =
-dc_raw_outputs[1] assumes dc_raw_outputs has a single row; when the CSV has
multiple rows this returns a whole row instead of a scalar. Replace this with a
shape-agnostic extraction: flatten or ravel dc_raw_outputs (or convert to a flat
list) and then take the second element (index 1) and negate it so ibias is
always a scalar; also add a simple length check to raise a clear error if there
are fewer than 2 elements.
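The flatten-and-index approach can be isolated in a helper; this sketch assumes, as the existing code does, that the second value written to dc.csv is the bias current:

```python
import numpy as np

def extract_ibias(dc_raw_outputs, dc_fname="dc.csv"):
    """Return the negated second DC value, regardless of 1D/2D input shape."""
    vals = np.atleast_1d(np.asarray(dc_raw_outputs, dtype=float)).ravel()
    if vals.size < 2:
        raise ValueError(
            "Unexpected DC data in %s: need >= 2 values, got %d" % (dc_fname, vals.size)
        )
    return -float(vals[1])
```

With ravel(), a single-row file and a multi-row file both yield the same scalar for the first bias entry.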

Comment on lines +1 to +50
Traceback (most recent call last):
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/tune/trial_runner.py", line 378, in _process_events
result = self.trial_executor.fetch_result(trial)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/tune/ray_trial_executor.py", line 228, in fetch_result
result = ray.get(trial_future[0])
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/worker.py", line 2132, in get
raise value
ray.worker.RayTaskError: ray_worker (pid=3651949, host=cda-server-2)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/agents/agent.py", line 284, in train
result = Trainable.train(self)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/tune/trainable.py", line 151, in train
result = self._train()
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/agents/ppo/ppo.py", line 103, in _train
fetches = self.optimizer.step()
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/optimizers/multi_gpu_optimizer.py", line 125, in step
self.num_envs_per_worker, self.train_batch_size)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/optimizers/rollout.py", line 28, in collect_samples
next_sample = ray.get(fut_sample)
ray.worker.RayTaskError: ray_worker (pid=3651952, host=cda-server-2)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/evaluation/policy_evaluator.py", line 368, in sample
batches = [self.input_reader.next()]
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/offline/input_reader.py", line 31, in next
batches = [self.sampler.get_data()]
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/evaluation/sampler.py", line 65, in get_data
item = next(self.rollout_provider)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/evaluation/sampler.py", line 267, in _env_runner
preprocessors, obs_filters, unroll_length, pack, callbacks)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/evaluation/sampler.py", line 403, in _process_observations
resetted_obs = base_env.try_reset(env_id)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/env/base_env.py", line 257, in try_reset
return {_DUMMY_AGENT_ID: self.vector_env.reset_at(env_id)}
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/ray/rllib/env/vector_env.py", line 88, in reset_at
return self.envs[index].reset()
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/autockt/envs/ngspice_zhenxin_s_fc.py", line 344, in reset
self.cur_specs = self.update(self.cur_params_idx)
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/autockt/envs/ngspice_zhenxin_s_fc.py", line 496, in update
self.sim_env.create_design_and_simulate(param_val[0])[1].items(),
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/eval_engines/ngspice/ngspice_wrapper.py", line 133, in create_design_and_simulate
specs = self.translate_result(design_folder)
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/eval_engines/ngspice/Zhenxin_S_FC.py", line 29, in translate_result
freq, vout, ibias = self.parse_output(output_path)
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/eval_engines/ngspice/Zhenxin_S_FC.py", line 46, in parse_output
ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/numpy/lib/npyio.py", line 1744, in genfromtxt
fhd = iter(np.lib._datasource.open(fname, 'rt', encoding=encoding))
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/numpy/lib/_datasource.py", line 266, in open
return ds.open(path, mode, encoding=encoding, newline=newline)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/numpy/lib/_datasource.py", line 624, in open
raise IOError("%s not found." % path)
OSError: /tmp/ckt_da/designs_Zhenxin_S_FC/Zhenxin_S_FC_242.843_133.808_292.648_198.925_139.809_131.131_0.657_0.178_0.686_0.305_1.02397/ac.csv not found.

🛠️ Refactor suggestion

⚠️ Potential issue

Don’t crash training on missing ac.csv; fail-soft and return penalized specs.

The translate_result path raises FileNotFoundError, halting PPO training. Handle missing or invalid NGSpice outputs by returning a worst-case spec dict (or NaNs handled upstream) so the rollout can continue and the agent learns to avoid unstable regions.

Apply in ngspice_wrapper.create_design_and_simulate and Zhenxin_S_FC.parse_output:

--- a/eval_engines/ngspice/ngspice_wrapper.py
+++ b/eval_engines/ngspice/ngspice_wrapper.py
@@
-        specs = self.translate_result(design_folder)
+        try:
+            specs = self.translate_result(design_folder)
+        except Exception as e:
+            # Fail-soft: penalize this design but keep RL loop alive
+            spec_keys = list(self.yaml_cfg["target_specs"].keys())
+            worst = { 
+                "gain_min": 0.0, "ugbw_min": 0.0, "phm_min": 0.0, "ibias_max": 1e9
+            }
+            specs = {k: worst.get(k, 0.0) for k in spec_keys}
--- a/eval_engines/ngspice/Zhenxin_S_FC.py
+++ b/eval_engines/ngspice/Zhenxin_S_FC.py
@@
-        ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
+        if not os.path.exists(ac_fname):
+            raise FileNotFoundError(ac_fname)
+        ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
@@
-        dc_raw_outputs = np.genfromtxt(dc_fname, skip_header=1)
+        if not os.path.exists(dc_fname):
+            raise FileNotFoundError(dc_fname)
+        dc_raw_outputs = np.genfromtxt(dc_fname, skip_header=1)
📝 Committable suggestion


Suggested change
# In eval_engines/ngspice/ngspice_wrapper.py, around the line assigning `specs`:
try:
    specs = self.translate_result(design_folder)
except Exception as e:
    # Fail-soft: penalize this design but keep the RL loop alive
    spec_keys = list(self.yaml_cfg["target_specs"].keys())
    worst = {
        "gain_min": 0.0,
        "ugbw_min": 0.0,
        "phm_min": 0.0,
        "ibias_max": 1e9,
    }
    specs = {k: worst.get(k, 0.0) for k in spec_keys}
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/eval_engines/ngspice/ngspice_wrapper.py", line 133, in create_design_and_simulate
specs = self.translate_result(design_folder)
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/eval_engines/ngspice/Zhenxin_S_FC.py", line 29, in translate_result
freq, vout, ibias = self.parse_output(output_path)
File "/home/pham/code/analog-ml/AutoCkt-optimize-Zhenxin_S_FC_65nmPTM-run14/eval_engines/ngspice/Zhenxin_S_FC.py", line 46, in parse_output
ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/numpy/lib/npyio.py", line 1744, in genfromtxt
fhd = iter(np.lib._datasource.open(fname, 'rt', encoding=encoding))
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/numpy/lib/_datasource.py", line 266, in open
return ds.open(path, mode, encoding=encoding, newline=newline)
File "/home/pham/anaconda3/envs/autockt/lib/python3.5/site-packages/numpy/lib/_datasource.py", line 624, in open
raise IOError("%s not found." % path)
OSError: /tmp/ckt_da/designs_Zhenxin_S_FC/Zhenxin_S_FC_242.843_133.808_292.648_198.925_139.809_131.131_0.657_0.178_0.686_0.305_1.02397/ac.csv not found.
# In eval_engines/ngspice/Zhenxin_S_FC.py, inside `parse_output` before loading each CSV:
if not os.path.exists(ac_fname):
raise FileNotFoundError(ac_fname)
ac_raw_outputs = np.genfromtxt(ac_fname, skip_header=1)
if not os.path.exists(dc_fname):
raise FileNotFoundError(dc_fname)
dc_raw_outputs = np.genfromtxt(dc_fname, skip_header=1)


@coderabbitai coderabbitai Bot left a comment


Review continued from previous batch...

Comment on lines +20 to +25
from eval_engines.util.core import *
import pickle
import os

from eval_engines.ngspice.TwoStageClass import *
from eval_engines.ngspice.LEDRO_D_FC import *


🛠️ Refactor suggestion

Remove star imports and duplicate import; import only what you use.

This avoids the F403/F405 warnings and the shadowing of the local LEDRO_D_FC class name.

-from eval_engines.util.core import *
+# from eval_engines.util.core import debug  # if needed
@@
-from eval_engines.ngspice.TwoStageClass import *
-from eval_engines.ngspice.LEDRO_D_FC import *
+from eval_engines.ngspice.LEDRO_D_FC import LEDRO_D_FC_Class

Also drop the duplicate import os on Line 22.

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.12.2)

20-20: from eval_engines.util.core import * used; unable to detect undefined names

(F403)


22-22: Redefinition of unused os from line 17

Remove definition: os

(F811)


24-24: from eval_engines.ngspice.TwoStageClass import * used; unable to detect undefined names

(F403)


25-25: from eval_engines.ngspice.LEDRO_D_FC import * used; unable to detect undefined names

(F403)

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc.py around lines 20 to 25, replace the
wildcard/star imports and the duplicate os import with explicit imports of only
the symbols used in this module, remove the repeated "import os" on line 22, and
avoid shadowing the local class name LEDRO_D_FC by either importing the class
under a clear alias (e.g., from eval_engines.ngspice.LEDRO_D_FC import
LEDRO_D_FC as LEDRO_D_FC_Class) or import the module and reference the class via
the module (e.g., import eval_engines.ngspice.LEDRO_D_FC as ledro_module) so
there are no F403/F405 collisions and the namespace is explicit.



# way of ordering the way a yaml file is read
class OrderedDictYAMLLoader(yaml.Loader):


⚠️ Potential issue

Use SafeLoader for YAML; avoid arbitrary object construction.

yaml.load with the default yaml.Loader permits arbitrary object construction and is unsafe. Subclass yaml.SafeLoader instead, and pass the loader explicitly via the named Loader= keyword.

-class OrderedDictYAMLLoader(yaml.Loader):
+class OrderedDictYAMLLoader(yaml.SafeLoader):
@@
-        yaml.Loader.__init__(self, *args, **kwargs)
+        yaml.SafeLoader.__init__(self, *args, **kwargs)
@@
-            yaml_data = yaml.load(f, OrderedDictYAMLLoader)
+            yaml_data = yaml.load(f, Loader=OrderedDictYAMLLoader)

Also applies to: 62-62, 111-111

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc.py around line 56 (and also apply the same
change at lines 62 and 111), the YAML loader subclasses yaml.Loader which allows
arbitrary object construction; change these to subclass yaml.SafeLoader and pass
the SafeLoader (named Loader=...) when calling yaml.load so that parsing uses
safe loading. Update any custom constructor registrations to register with
SafeLoader instead of yaml.Loader and replace yaml.load(..., Loader=...)
accordingly to use yaml.SafeLoader to avoid unsafe deserialization.
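As a runnable sketch of the safe, order-preserving loader this comment asks for (the class and constructor names below are illustrative, not the repo's):

```python
# Illustrative safe YAML loader that preserves key order (requires PyYAML).
# On Python 3.7+ plain dicts already preserve insertion order, but an
# explicit OrderedDict keeps the behavior obvious on older interpreters.
from collections import OrderedDict
import yaml


class OrderedDictSafeLoader(yaml.SafeLoader):
    """SafeLoader subclass: no arbitrary object construction."""


def _construct_ordered_mapping(loader, node):
    loader.flatten_mapping(node)
    return OrderedDict(loader.construct_pairs(node))


OrderedDictSafeLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
    _construct_ordered_mapping,
)

doc = yaml.load("b: 1\na: 2\nc: 3\n", Loader=OrderedDictSafeLoader)
```

Because only SafeLoader's constructors are registered, tags like `!!python/object` are rejected instead of instantiating arbitrary classes.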

Comment on lines +98 to +101
# obtains yaml file
path = os.getcwd()
CIR_YAML = path + "/eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml"



🛠️ Refactor suggestion

Make CIR_YAML path robust (don’t rely on CWD).

Use path relative to this file to work across runners and notebooks.

-    path = os.getcwd()
-    CIR_YAML = path + "/eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml"
+    path = os.path.dirname(os.path.abspath(__file__))
+    CIR_YAML = os.path.abspath(
+        os.path.join(path, "../../eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml")
+    )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# obtains yaml file
path = os.getcwd()
CIR_YAML = path + "/eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml"
# obtains yaml file
path = os.path.dirname(os.path.abspath(__file__))
CIR_YAML = os.path.abspath(
os.path.join(path, "../../eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml")
)
🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc.py around lines 98 to 101, the CIR_YAML path
is built from os.getcwd(), which is fragile; instead compute the YAML path
relative to this module file so it works across runners/notebooks. Replace use
of os.getcwd() with the directory of this file (e.g.
os.path.dirname(os.path.abspath(__file__)) or
pathlib.Path(__file__).resolve().parent) and construct CIR_YAML via os.path.join
(or Path / operator) to point to
"eval_engines/ngspice/ngspice_inputs/yaml_files/ledro_d_fc.yaml".
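A pathlib variant of the same fix; the two `".."` components assume the module lives at `autockt/envs/` as in the tracebacks above, and the argument below is a stand-in for `__file__`:

```python
# Resolve the circuit YAML relative to this module, not the process CWD,
# so imports from notebooks or other runners still find the config.
from pathlib import Path


def config_path(module_file):
    here = Path(module_file).resolve().parent
    return (here / ".." / ".." / "eval_engines" / "ngspice"
            / "ngspice_inputs" / "yaml_files" / "ledro_d_fc.yaml").resolve()


# stand-in for __file__ inside autockt/envs/ngspice_ledro_d_fc.py
p = config_path("/repo/autockt/envs/ngspice_ledro_d_fc.py")
```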

Comment on lines +117 to +122
load_specs_path = (
LEDRO_D_FC.path + "/autockt/gen_specs/ngspice_specs_gen_ledro_d_fc"
)
with open(load_specs_path, "rb") as f:
specs = pickle.load(f)



⚠️ Potential issue

Gate and validate the pickle load.

Deserializing untrusted pickle data is unsafe (S301), and the path may not exist. Gate the load behind an explicit flag and validate that the file exists first.

-            with open(load_specs_path, "rb") as f:
-                specs = pickle.load(f)
+            assert env_config.get("trust_pickle", False), \
+                "Refuse to load specs from pickle unless env_config['trust_pickle']=True"
+            if not os.path.isfile(load_specs_path):
+                raise FileNotFoundError(f"Missing specs pickle at {load_specs_path}")
+            with open(load_specs_path, "rb") as f:
+                specs = pickle.load(f)
📝 Committable suggestion


Suggested change
load_specs_path = (
LEDRO_D_FC.path + "/autockt/gen_specs/ngspice_specs_gen_ledro_d_fc"
)
with open(load_specs_path, "rb") as f:
specs = pickle.load(f)
load_specs_path = (
LEDRO_D_FC.path + "/autockt/gen_specs/ngspice_specs_gen_ledro_d_fc"
)
assert env_config.get("trust_pickle", False), \
"Refuse to load specs from pickle unless env_config['trust_pickle']=True"
if not os.path.isfile(load_specs_path):
raise FileNotFoundError(f"Missing specs pickle at {load_specs_path}")
with open(load_specs_path, "rb") as f:
specs = pickle.load(f)
🧰 Tools
🪛 Ruff (0.12.2)

121-121: pickle and modules that wrap it can be unsafe when used to deserialize untrusted data, possible security issue

(S301)

Comment on lines +153 to +162
self.observation_space = spaces.Box(
low=np.array(
[LEDRO_D_FC.PERF_LOW] * 2 * len(self.specs_id)
+ len(self.params_id) * [1]
),
high=np.array(
[LEDRO_D_FC.PERF_HIGH] * 2 * len(self.specs_id)
+ len(self.params_id) * [1]
),
)


⚠️ Potential issue

Fix the observation_space bounds and normalize the parameter indices (the current observation violates the Box).

Spec deltas can lie in [-1, 1], but the high bound is set to 0. Parameter indices are emitted raw (0..N-1) while the Box limits them to [0, 1]. This breaks Gym's space checks and hurts learning stability.

Apply these diffs:

@@
-        self.observation_space = spaces.Box(
-            low=np.array(
-                [LEDRO_D_FC.PERF_LOW] * 2 * len(self.specs_id)
-                + len(self.params_id) * [1]
-            ),
-            high=np.array(
-                [LEDRO_D_FC.PERF_HIGH] * 2 * len(self.specs_id)
-                + len(self.params_id) * [1]
-            ),
-        )
+        self.observation_space = spaces.Box(
+            low=np.concatenate(
+                [np.full(2 * len(self.specs_id), LEDRO_D_FC.PERF_LOW, dtype=np.float32),
+                 np.zeros(len(self.params_id), dtype=np.float32)]
+            ),
+            high=np.concatenate(
+                [np.full(2 * len(self.specs_id), 1.0, dtype=np.float32),
+                 np.ones(len(self.params_id), dtype=np.float32)]
+            ),
+            dtype=np.float32,
+        )
@@
-        self.ob = np.concatenate(
-            [cur_spec_norm, self.specs_ideal_norm, self.cur_params_idx]
-        )
+        self.ob = np.concatenate(
+            [cur_spec_norm, self.specs_ideal_norm, self.cur_params_idx / self.param_max_idx]
+        )
@@
-        self.ob = np.concatenate(
-            [cur_spec_norm, self.specs_ideal_norm, self.cur_params_idx]
-        )
+        self.ob = np.concatenate(
+            [cur_spec_norm, self.specs_ideal_norm, self.cur_params_idx / self.param_max_idx]
+        )

Add this once after params are built (outside selected range):

# after building self.params
self.param_max_idx = np.array([len(p) - 1 for p in self.params], dtype=np.float32)

Also applies to: 220-222, 260-262

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc.py around lines 153-162 (and also update the
analogous blocks at 220-222 and 260-262), the observation_space bounds are
incorrect: spec deltas must span [-1, 1] and parameter indices must be
represented in [0, 1] (not raw 0..N-1). After building self.params add:
self.param_max_idx = np.array([len(p) - 1 for p in self.params],
dtype=np.float32). Then construct observation_space lows and highs so the
spec-delta entries use [-1.0] and [1.0] (repeat for 2 * len(self.specs_id)), and
the param index entries use 0.0 and 1.0 (repeat len(self.params_id)); replace
the current arrays at each location accordingly so Gym Box bounds match
normalized parameter indices and spec delta ranges.
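The bounds and normalization described above can be sketched with plain NumPy (the spec count and grid sizes below are made up; in the env, `low`/`high` would feed `spaces.Box`):

```python
import numpy as np

# stand-ins for the env's spec count and discrete parameter grids
n_specs = 4
params = [list(range(30)), list(range(100)), list(range(12))]

# spec slots (current + ideal) span [-1, 1]; param slots span [0, 1]
low = np.concatenate([np.full(2 * n_specs, -1.0, dtype=np.float32),
                      np.zeros(len(params), dtype=np.float32)])
high = np.concatenate([np.full(2 * n_specs, 1.0, dtype=np.float32),
                       np.ones(len(params), dtype=np.float32)])

# divide raw indices by the max index of each grid to land in [0, 1]
param_max_idx = np.array([len(p) - 1 for p in params], dtype=np.float32)
cur_params_idx = np.array([29, 50, 0], dtype=np.float32)
norm_idx = cur_params_idx / param_max_idx
```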

Comment on lines +207 to +213
self.cur_params_idx = np.array([2] * 17)
self.cur_params_idx = np.array(
# [2, 2, 2, 2, 2, 2] + [200, 200, 200, 200, 200, 200] + [10, 10, 10, 10, 10]
[2, 2, 2, 2, 2, 2]
+ [200, 200, 200, 200, 200, 200]
+ [10, 10, 10, 10, 10]
)


⚠️ Potential issue

Remove the hard-coded 17-dim index array; compute it from the YAML config to avoid shape mismatches.

Hard-coding breaks as soon as param counts change.

-        self.cur_params_idx = np.array([2] * 17)
-        self.cur_params_idx = np.array(
-            # [2, 2, 2, 2, 2, 2] + [200, 200, 200, 200, 200, 200] + [10, 10, 10, 10, 10]
-            [2, 2, 2, 2, 2, 2]
-            + [200, 200, 200, 200, 200, 200]
-            + [10, 10, 10, 10, 10]
-        )
+        # start from midpoints of each discrete grid
+        self.cur_params_idx = np.array([len(p) // 2 for p in self.params], dtype=np.int32)
@@
-    env.step([2] * 17)
+    env.step([1] * len(env.params_id))

Also applies to: 359-359

🧰 Tools
🪛 Ruff (0.12.2)

210-212: Consider iterable unpacking instead of concatenation

Replace with iterable unpacking

(RUF005)

🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc.py around lines 207 to 213 (and also at line
359), the code hard-codes a 17-length index array which will break when
parameter counts change; replace the literal array construction with a computed
array based on the parameter counts loaded from the YAML (e.g., read lengths of
each param group or sum of group sizes) and build the concatenated numpy array
programmatically (using list multiplication or numpy.repeat with those computed
counts) so the resulting shape is derived from the YAML config instead of fixed
constants.
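With stand-in grid sizes of 5, 401, and 21 (chosen so the midpoints reproduce the hard-coded 2/200/10 starting values above), the computed version looks like:

```python
import numpy as np

# stand-in grids; in the env these come from the YAML parameter ranges
param_grids = ([list(range(5))] * 6
               + [list(range(401))] * 6
               + [list(range(21))] * 5)

# start every parameter at the midpoint of its grid
cur_params_idx = np.array([len(g) // 2 for g in param_grids], dtype=np.int32)
```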

Comment on lines +250 to +259
# incentivize reaching goal state
if reward >= 10:
done = True
print("-" * 10)
print("params = ", self.cur_params_idx)
print("specs:", self.cur_specs)
print("ideal specs:", self.specs_ideal)
print("re:", reward)
print("-" * 10)



⚠️ Potential issue

The episode never terminates under the current reward sign convention.

reward() returns a negative penalty, but done only triggers on reward >= 10, which is unreachable.

-        if reward >= 10:
+        # done when total penalty magnitude is within tolerance
+        if -reward <= self.success_tolerance:
             done = True
-            print("-" * 10)
-            print("params = ", self.cur_params_idx)
-            print("specs:", self.cur_specs)
-            print("ideal specs:", self.specs_ideal)
-            print("re:", reward)
-            print("-" * 10)
+            logger.info("-" * 10)
+            logger.info(f"params = {self.cur_params_idx}")
+            logger.info(f"specs: {self.cur_specs}")
+            logger.info(f"ideal specs: {self.specs_ideal}")
+            logger.info(f"reward: {reward}")
+            logger.info("-" * 10)

Also define a tolerance in init (outside selected range):

self.success_tolerance = float(env_config.get("success_tolerance", 0.02))
🤖 Prompt for AI Agents
In autockt/envs/ngspice_ledro_d_fc.py around lines 250-259, the episode never
terminates because reward() returns a negative penalty but the code tests reward
>= 10 (unreachable); change the termination check to match the negative-sign
convention (for example use reward <= -10 or equivalently -reward >= 10) and
keep the same logging, and add a success tolerance attribute in __init__
(outside the selected range) by setting self.success_tolerance =
float(env_config.get("success_tolerance", 0.02)) so the tolerance is
configurable.
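A one-line sketch of the corrected check under the negative-penalty convention (the helper name is illustrative):

```python
# done when the penalty magnitude is within the configured tolerance;
# reward is <= 0 by convention in this env
def is_done(reward, success_tolerance=0.02):
    return -reward <= success_tolerance
```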

