# Minimal Action Method using Optimal Control
```@meta
Draft = false
```

The Minimal Action Method (MAM) is a numerical technique for finding the most probable transition pathway between stable states in stochastic dynamical systems. It achieves this by minimizing an action functional that represents the path's deviation from the deterministic dynamics, effectively identifying the path of least resistance through the system's landscape.

This tutorial demonstrates how to implement MAM as an optimal control problem, using the classical Maier-Stein model as a benchmark example.
## Required Packages
```@example main-mam
using OptimalControl
using NLPModelsIpopt
using Plots, Printf
```
## Problem Statement
We aim to find the most probable transition path between two stable states of a stochastic dynamical system. For a system with deterministic dynamics $f(x)$ and small noise, the transition path minimizes the Freidlin-Wentzell action functional:

```math
S_T[x] = \frac{1}{2} \int_0^T \lVert \dot{x}(t) - f(x(t)) \rVert^2 \, dt,
\qquad x(0) = x_0, \quad x(T) = x_f,
```

where $x_0$ and $x_f$ are the initial and final states, and $T$ is the transition time.
!!! note "Physical interpretation"
    The action $S$ measures the "cost" of deviating from the deterministic flow $f(x)$. Paths with smaller action are exponentially more likely in the small-noise limit.
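To make the action concrete, here is a small standalone sketch (the name `discrete_action` and the toy drift are illustrative, not part of the tutorial's code) that evaluates a discretized version of $S_T$ for a path sampled on a uniform time grid:

```julia
# Discretized Freidlin-Wentzell action:
#   S ≈ (1/2) Σ_k ‖(x_{k+1} - x_k)/Δt - f(x_k)‖² Δt
function discrete_action(f, xs::Vector{<:AbstractVector}, T)
    N  = length(xs) - 1
    Δt = T / N
    S  = 0.0
    for k in 1:N
        v = (xs[k + 1] - xs[k]) / Δt        # finite-difference velocity
        S += 0.5 * sum(abs2, v - f(xs[k])) * Δt
    end
    return S
end

# Toy check: a path that follows the flow of f(x) = -x has (near-)zero action,
# while a path that fights the flow pays a positive cost.
f(x) = -x
T, N = 1.0, 1000
ts   = range(0, T, N + 1)
flow = [[exp(-t)] for t in ts]              # solution of ẋ = -x, x(0) = 1
line = [[1.0 - t] for t in ts]              # straight path, not a flow line
println(discrete_action(f, flow, T))        # ≈ 0 (up to discretization error)
println(discrete_action(f, line, T))        # positive
```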
## Problem Setup
We consider a 2D system with a double-well flow, the Maier-Stein model. It is a well-known benchmark problem because it exhibits non-gradient dynamics with two stable equilibrium points at $(-1,0)$ and $(1,0)$, connected by a non-trivial transition path.
The system's deterministic dynamics are given by:
```@example main-mam
# Maier-Stein deterministic drift: a non-gradient double-well flow
f(x) = [x[1] - x[1]^3 - 10x[1] * x[2]^2, -(1 + x[1]^2) * x[2]]
nothing # hide
```
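Minimizing the action with the path velocity taken as the control gives an optimal control formulation. The sketch below, written with OptimalControl.jl's `@def` syntax, shows one possible way to define an `ocp(T)` of this kind; it is an assumed formulation, and the drift is repeated so the snippet is self-contained:

```julia
using OptimalControl

# Maier-Stein drift (as above)
f(x) = [x[1] - x[1]^3 - 10x[1] * x[2]^2, -(1 + x[1]^2) * x[2]]

# One possible MAM formulation: the control u plays the role of the path
# velocity ẋ, so the running cost is the Freidlin-Wentzell action density.
ocp(T) = @def begin
    t ∈ [0, T], time
    x ∈ R², state
    u ∈ R², control
    x(0) == [-1, 0]
    x(T) == [1, 0]
    ẋ(t) == u(t)
    0.5∫( sum((u(t) - f(x(t))).^2) ) → min
end
```

With this choice, a path that exactly follows the deterministic flow (so that $u = f(x)$) has zero cost, matching the physical interpretation of the action above.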
## Initial Guess
We provide an initial guess for the path using a simple interpolation:
```@example main-mam
# Time horizon
T = 50

# Helper functions for the initial state guess
L(t) = -(1 - t/T) + t/T        # Linear interpolation from -1 to 1
P(t) = 0.3 * (1 - L(t)^2)      # Parabolic arc for x₂

x(t) = [L(t), P(t)]            # State guess
u(t) = f(x(t))                 # Control guess: follow the deterministic flow

init = (state=x, control=u)
nothing # hide
```
The initial guess uses a simple geometric path: linear interpolation in $x_1$ and a parabolic arc in $x_2$. This provides a reasonable starting point that avoids the unstable saddle point at the origin. The control is initialized to follow the deterministic flow along this path.
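A quick standalone check (plain Julia, with the helper definitions repeated so the snippet runs on its own) confirms that the guess connects the two equilibria and clears the origin:

```julia
T = 50
L(t) = -(1 - t/T) + t/T        # linear interpolation from -1 to 1
P(t) = 0.3 * (1 - L(t)^2)      # parabolic arc for x₂
x(t) = [L(t), P(t)]

println(x(0))      # [-1.0, 0.0] : left equilibrium
println(x(T))      # [1.0, 0.0]  : right equilibrium
println(x(T/2))    # [0.0, 0.3]  : passes above the saddle at the origin
```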
## Solving the Problem
We solve the problem in two steps for better accuracy:
!!! note "Two-step resolution"
    Starting with a coarse grid (50 points) allows for faster initial convergence. Refining with a fine grid (1000 points) then improves the accuracy of the solution.
```@example main-mam
# First solve with a coarse grid
sol = solve(ocp(T); init=init, grid_size=50)

# Refine on a fine grid, warm-started from the coarse solution
sol = solve(ocp(T); init=sol, grid_size=1000)
```

The resulting path shows the most likely transition between the two stable states.
To find the maximum likelihood path, we also need to optimize the transition time `T`. Hence, we perform a discrete continuation on the parameter `T`, solving the optimal control problem for an increasing sequence of final times and using each solution to initialize the next problem.
```@example main-mam
objectives = []
Ts = range(1, 100, 100)
sol = solve(ocp(Ts[1]); display=false, init=init, grid_size=200)
println("  Time    Objective    Iterations")
for T in Ts
    global sol = solve(ocp(T); display=false, init=sol, grid_size=1000, tol=1e-8)
    push!(objectives, objective(sol))
    @printf("%6.1f    %9.6f    %6d\n", T, objective(sol), iterations(sol))
end
```
The optimal transition time $T^*$ balances two competing effects: shorter times require larger deviations from the deterministic flow (higher action), while longer times allow the system to follow the flow more closely. The minimum represents the most probable transition time in the small noise limit.
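Once the sweep completes, the most probable transition time is simply the minimizer of the recorded objectives. A standalone sketch (the quadratic `objs` profile below is dummy illustrative data, not computed results):

```julia
Ts   = range(1, 100, 100)                # same grid of final times as above
objs = @. (Ts - 40)^2 / 1000 + 0.1       # dummy action profile with one minimum

i     = argmin(objs)                     # index of the smallest objective
Tstar = Ts[i]
println("T* ≈ ", Tstar)                  # 40.0, by construction of the dummy data
```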