|
624 | 624 | it does not (or only changes via ``specialize``).""", |
625 | 625 | }, |
626 | 626 |
|
| 627 | + "concept.multi_fidelity": { |
| 628 | + "category": "Concept", |
| 629 | + "content": """\ |
| 630 | +Multi-fidelity optimization — cheap approximations guide expensive evaluations. |
| 631 | +
|
| 632 | +When a cheap low-fidelity model (e.g. a coarse mesh) correlates with the
| 633 | +expensive truth (fine-mesh CFD), multi-fidelity BO spends most of its budget
| 634 | +on cheap queries and reserves expensive evaluations for the most promising
| 635 | +candidates. The GP learns the bias between fidelity levels, and the
| 636 | +acquisition function trades off information gain against evaluation cost.
| 637 | +
|
| 638 | +**Setup in foamBO:** |
| 639 | +
|
| 640 | +1. Add a fidelity parameter with ``is_fidelity: true`` and ``target_value``: |
| 641 | +```yaml |
| 642 | +experiment: |
| 643 | + parameters: |
| 644 | + - name: fidelity |
| 645 | + parameter_type: str |
| 646 | + values: ["coarse", "fine"] |
| 647 | + is_fidelity: true |
| 648 | + target_value: "fine" |
| 649 | +``` |
| 650 | +
|
| 651 | +2. Mark one metric as the cost signal with ``is_cost: true`` (a sketch of ``scripts/metric.sh`` follows the list):
| 652 | +```yaml |
| 653 | +optimization: |
| 654 | + metrics: |
| 655 | + - name: executionTime |
| 656 | + command: ["scripts/metric.sh", "executionTime"] |
| 657 | + is_cost: true |
| 658 | +``` |
| 659 | +
|
| 660 | +3. Use ``method: fast`` (or any generation strategy — MF is auto-wired): |
| 661 | +```yaml |
| 662 | +trial_generation: |
| 663 | + method: fast |
| 664 | +``` |
| 665 | +
|
| 666 | +That's it. No custom generation nodes required for MF. |
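| | +
| | +For reference, the ``scripts/metric.sh`` referenced in step 2 could be as
| | +small as the sketch below. It assumes the command runs inside the trial's
| | +case directory, that it should print a single number to stdout, and that
| | +the solver log is named ``log.simpleFoam``; adjust these assumptions to
| | +your setup:
| | +```bash
| | +#!/usr/bin/env bash
| | +# Hypothetical scripts/metric.sh: print one number for the requested metric.
| | +# Assumes it is invoked from the trial case directory (adjust paths otherwise).
| | +set -euo pipefail
| | +
| | +metric="$1"
| | +
| | +case "$metric" in
| | +    executionTime)
| | +        # Last "ExecutionTime = <seconds> s ..." line written by the solver.
| | +        grep "^ExecutionTime" log.simpleFoam | tail -n 1 | awk '{print $3}'
| | +        ;;
| | +    *)
| | +        echo "unknown metric: $metric" >&2
| | +        exit 1
| | +        ;;
| | +esac
| | +```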
| 667 | +
|
| 668 | +**Auto-wiring:** when ``is_fidelity`` is detected, foamBO automatically: |
| 669 | +- Selects ``qMultiFidelityHypervolumeKnowledgeGradient`` as the acqf. |
| 670 | +- Sets ``SingleTaskMultiFidelityGP`` as the surrogate. |
| 671 | +- Extracts ``target_fidelities`` from the search space (Ax built-in). |
| 672 | +- Learns ``cost_intercept`` and ``fidelity_weights`` from observed |
| 673 | + ``is_cost`` metric data each callback cycle. |
| 674 | +
|
| 675 | +To override the acqf (e.g. use MOMF instead), use a custom generation node: |
| 676 | +```yaml |
| 677 | +trial_generation: |
| 678 | + method: custom |
| 679 | + generation_nodes: |
| 680 | + - node_name: MF |
| 681 | + generator_specs: |
| 682 | + - generator_enum: BOTORCH_MODULAR |
| 683 | + model_kwargs: |
| 684 | + botorch_acqf_class: "MOMF" |
| 685 | +``` |
| 686 | +
|
| 687 | +**Two acquisition functions are supported:** |
| 688 | +
|
| 689 | +``qMultiFidelityHypervolumeKnowledgeGradient`` (qMF-HVKG): |
| 690 | +- Multi-objective, cost-aware, one-step lookahead. |
| 691 | +- Maximizes expected hypervolume improvement at target fidelity. |
| 692 | +- Better sample efficiency; slower candidate generation. |
| 693 | +- Takes ``cost_intercept`` and ``fidelity_weights`` via its input constructor;
| 694 | +  foamBO auto-derives these from the ``is_cost`` metric.
| 695 | +- **Recommended for most use cases** (when trial cost dominates generation time).
| 696 | +
|
| 697 | +``MOMF`` (Multi-Objective Multi-Fidelity): |
| 698 | +- Adds fidelity as a pseudo-objective (a trust objective that rewards higher fidelity).
| 699 | +- Faster candidate generation; less sample efficient. |
| 700 | +- Takes ``cost_call`` (a callable) directly. |
| 701 | +- foamBO does NOT auto-wire cost for MOMF; pass ``cost_call`` manually |
| 702 | + via ``botorch_acqf_options`` if needed. |
| 703 | +
|
| 704 | +**Runner dispatch** via ``file_substitution`` (recommended): |
| 705 | +Use a string ``ChoiceParameter`` for fidelity and foamBO's built-in file |
| 706 | +substitution to swap the runner script (or a portion of it) per fidelity level. Place |
| 707 | +``Allrun.coarse`` and ``Allrun.fine`` in the template case: |
| 708 | +```yaml |
| 709 | +optimization: |
| 710 | + case_runner: |
| 711 | + file_substitution: |
| 712 | + - parameter: fidelity |
| 713 | + file_path: /Allrun |
| 714 | +``` |
| 715 | +When ``fidelity=coarse``, the runner copies ``Allrun.coarse`` →
| 716 | +``Allrun`` before execution; when ``fidelity=fine``, it copies
| 717 | +``Allrun.fine`` → ``Allrun``. No if/else branching is needed in the scripts.
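| | +
| | +A minimal sketch of what ``Allrun.coarse`` might contain, assuming a
| | +``blockMesh`` + ``simpleFoam`` case and a hypothetical pair of mesh
| | +dictionaries ``system/blockMeshDict.coarse`` / ``.fine`` (``Allrun.fine``
| | +would be identical apart from the dictionary it selects):
| | +```bash
| | +#!/bin/sh
| | +# Hypothetical Allrun.coarse: same workflow as Allrun.fine, cheaper mesh.
| | +cd "${0%/*}" || exit 1
| | +. "${WM_PROJECT_DIR:?}/bin/tools/RunFunctions"
| | +
| | +# Select the coarse mesh definition for this fidelity level (sketch convention).
| | +cp system/blockMeshDict.coarse system/blockMeshDict
| | +
| | +runApplication blockMesh
| | +runApplication simpleFoam
| | +```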
| 718 | +
|
| 719 | +**Alternative** (continuous fidelity via env var): |
| 720 | +```bash |
| 721 | +if [ "$FIDELITY" = "0" ] || [ "$FIDELITY" = "0.0" ]; then |
| 722 | + ./Allrun.coarse |
| 723 | +else |
| 724 | + ./Allrun.fine |
| 725 | +fi |
| 726 | +``` |
| 727 | +
|
| 728 | +**Cost model evolution:** before any trials complete, uniform cost is |
| 729 | +assumed (cost ratio = 1). As ``is_cost`` data arrives, foamBO recomputes |
| 730 | +per-fidelity mean cost and updates ``cost_intercept`` / ``fidelity_weights`` |
| 731 | +each callback — no restart needed. |
| 732 | +
|
| 733 | +**Cost scaling warning:** the ``is_cost`` metric should emit *scaled* |
| 734 | +costs, not raw wall-clock seconds. With extreme cost ratios (e.g. coarse |
| 735 | +1s vs fine 3600s = 1:3600), the acquisition function may defer expensive |
| 736 | +evaluations indefinitely — the info-gain-per-cost ratio always favors |
| 737 | +cheap queries when the denominator is 3600× larger. |
| 738 | +
|
| 739 | +**Recommendation:** cap the effective ratio to 1:50–1:100 by emitting |
| 740 | +a normalized cost. Use the baseline trial's execution time as the |
| 741 | +reference scale: |
| 742 | +
|
| 743 | +- Coarse fidelity metric script: ``echo 1`` |
| 744 | +- Fine fidelity metric script: ``echo 50`` (not the raw wall time) |
| 745 | +
|
| 746 | +This tells the optimizer "fine is 50× more expensive" — enough to prefer |
| 747 | +coarse for exploration, but not so extreme that fine is never selected. |
| 748 | +Tune the ratio based on how many fine-fidelity evaluations you can afford |
| 749 | +in your budget. A ratio of 1:N means roughly 1 fine trial per N coarse |
| 750 | +trials. |
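| | +
| | +The ``echo 1`` / ``echo 50`` scripts above can be implemented as a single
| | +cost metric. How the script learns the trial's fidelity is an assumption of
| | +this sketch (a ``.fidelity`` marker written by each runner variant), not a
| | +foamBO feature; any mechanism that identifies the fidelity level works:
| | +```bash
| | +#!/usr/bin/env bash
| | +# Hypothetical scaled-cost metric: emit a normalized cost instead of raw
| | +# wall-clock seconds. Assumes each Allrun variant writes its fidelity name
| | +# into a .fidelity marker file (a convention of this sketch only).
| | +set -euo pipefail
| | +
| | +case "$(cat .fidelity 2>/dev/null || echo fine)" in
| | +    coarse) echo 1  ;;
| | +    *)      echo 50 ;;
| | +esac
| | +```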
| 751 | +
|
| 752 | +Also consider seeding 3–5 initial trials at target fidelity (via SOBOL |
| 753 | +init phase with ``fixed_features``) so the GP has fine-fidelity signal |
| 754 | +from the start to estimate the coarse→fine bias. |
| 755 | +
|
| 756 | +**Composition with robust mode:** fully automatic. When both |
| 757 | +``is_fidelity`` and ``robust_optimization`` are present, foamBO composes |
| 758 | +them into a single acquisition loop: |
| 759 | +
|
| 760 | +- **Surrogate**: ``SingleTaskMultiFidelityGP`` (handles fidelity kernel) |
| 761 | + with ``SubstituteContextFeatures`` (handles context fan-out) as chained |
| 762 | + input transforms on the same model. |
| 763 | +- **Acquisition**: ``qMultiFidelityHypervolumeKnowledgeGradient`` with a |
| 764 | + ``RobustMCObjective`` (MARS or CVaR) passed as the ``objective``. |
| 765 | +- **Per-candidate evaluation**: qMF-HVKG proposes ``(design, fidelity)``,
| 766 | +  the GP posterior fans out to K context points via SubstituteContextFeatures,
| 767 | +  the risk-measure objective reduces those K contexts to risk-adjusted values,
| 768 | +  and qMF-HVKG computes cost-aware HV improvement at the target fidelity.
| 769 | +
|
| 770 | +No extra YAML is needed — setting ``is_fidelity`` on a parameter alongside |
| 771 | +``robust_optimization`` triggers composition automatically. The cost model |
| 772 | +(``is_cost`` metric) works identically in the composed path. |
| 773 | +
|
| 774 | +**Staged fallback** is still available for simpler workflows: run MF BO at
| 775 | +the nominal context first, then verify robustness via ``bootstrap``.
| 776 | +
|
| 777 | +See also: ``concept.robust_optimization``, ``concept.bootstrap_and_specialize``.""", |
| 778 | + }, |
| 779 | + |
627 | 780 | } |