Commit e9c6fcb

Improve LQR tutorial: add theoretical context, display optimal costs, fix matrix form note with backend option, correct API usage for objective and state
1 parent 29eee0d · commit e9c6fcb

2 files changed: 56 additions & 43 deletions

docs/src/tutorial-goddard.md
Lines changed: 0 additions & 4 deletions

````diff
@@ -1,9 +1,5 @@
 # [Direct and indirect methods for the Goddard problem](@id tutorial-goddard)
 
-```@meta
-Draft = false
-```
-
 ## Introduction
 
 ```@raw html
````

docs/src/tutorial-lqr.md
Lines changed: 56 additions & 39 deletions

````diff
@@ -1,5 +1,9 @@
 # A simple Linear–quadratic regulator example
 
+```@meta
+Draft = false
+```
+
 ## Problem statement
 
 We consider the following Linear Quadratic Regulator (LQR) problem, which consists in minimizing
@@ -11,7 +15,7 @@ We consider the following Linear Quadratic Regulator (LQR) problem, which consis
 subject to the dynamics
 
 ```math
-\dot x_1(t) = x_2(t), \quad \dot x_2(t) = -x_1(t) + u(t), \quad u(t) \in \R
+\dot x_1(t) = x_2(t), \quad \dot x_2(t) = -x_1(t) + u(t), \quad u(t) \in \mathbb{R}
 ```
 
 and the initial condition
````
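For context on the corrected dynamics: this is a controlled harmonic oscillator, so with $u \equiv 0$ and the tutorial's initial state $x(0) = (0, 1)$ the trajectory is $x_1(t) = \sin t$, $x_2(t) = \cos t$. A quick numerical cross-check (an editorial sketch in Python, not part of the Julia tutorial):

```python
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, x, u=0.0):
    # The tutorial's LQR dynamics: x1' = x2, x2' = -x1 + u
    x1, x2 = x
    return [x2, -x1 + u]

# Uncontrolled trajectory from x(0) = (0, 1); analytically (sin t, cos t)
sol = solve_ivp(dynamics, (0.0, 3.0), [0.0, 1.0],
                rtol=1e-10, atol=1e-10, dense_output=True)
t = 1.234
x1, x2 = sol.sol(t)
print(abs(x1 - np.sin(t)), abs(x2 - np.cos(t)))  # both close to zero
```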
````diff
@@ -57,6 +61,46 @@ end
 nothing # hide
 ```
 
+!!! note "Matrix form alternative"
+
+    ```@raw html
+    <details><summary>Click to unfold and see the matrix form.</summary>
+    ```
+
+    The problem can also be written using matrix notation with the `backend=:default` option:
+
+    ```@example main-lqr
+    x0 = [ 0
+           1 ]
+    A = [ 0 1
+         -1 0 ]
+    B = [ 0
+          1 ]
+    Q = [ 1 0
+          0 1 ]
+    R = 1
+    tf = 3
+
+    ocp = @def begin
+        t ∈ [0, tf], time
+        x ∈ R², state
+        u ∈ R, control
+        x(0) == x0
+        ẋ(t) == A * x(t) + B * u(t)
+        0.5∫( x(t)' * Q * x(t) + u(t)' * R * u(t) ) → min
+    end
+
+    solve(ocp; backend=:default, display=false)
+    ```
+
+    !!! warning "Known issue"
+
+        Not using `backend=:default` with the ADNLPModels modeler (the default one) for the matrix form will lead to an error. This is a [known issue](@extref OptimalControl manual-abstract-known-issues).
+
+    ```@raw html
+    </details>
+    ```
+
 ## Solving the problem for different final times
 
 We solve the problem for $t_f \in \{3, 5, 30\}$.
````
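The matrix form added in the note above must reproduce the scalar dynamics $\dot x_1 = x_2$, $\dot x_2 = -x_1 + u$. A small sanity check, written here in Python as an editorial sketch (the `A` and `B` values are copied from the hunk):

```python
import numpy as np

# A, B as in the matrix-form note
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    u = rng.standard_normal()
    matrix_form = A @ x + (B * u).ravel()      # ẋ = A x + B u
    scalar_form = np.array([x[1], -x[0] + u])  # ẋ1 = x2, ẋ2 = -x1 + u
    assert np.allclose(matrix_form, scalar_form)
print("matrix and scalar dynamics agree")
```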
````diff
@@ -69,6 +113,13 @@ for tf ∈ tfs
     solution = solve(lqr(tf), display=false)
     push!(solutions, solution)
 end
+
+# Display costs and final states
+for i ∈ eachindex(solutions)
+    x_func = state(solutions[i])
+    obj = objective(solutions[i])
+    println("tf = $(tfs[i]): cost = ", obj, ", x(tf) = ", x_func(tfs[i]))
+end
 nothing # hide
 ```
 
````
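The new loop above displays `objective(...)` for each solution. As an independent reference for those numbers, the finite-horizon optimal cost equals $\frac{1}{2} x_0^\top P(0)\, x_0$, where $P$ solves the Riccati ODE $-\dot P = A^\top P + PA - PBR^{-1}B^\top P + Q$ backward from $P(t_f) = 0$. A hedged Python sketch for $t_f = 3$ (with $Q = I$, $R = 1$ as in the matrix-form note); since $u \equiv 0$ is feasible here and costs exactly $1.5$ (the uncontrolled state stays on the unit circle), the optimal cost must lie strictly between $0$ and $1.5$:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
Rinv = np.array([[1.0]])   # R = 1, so R^{-1} = 1
x0 = np.array([0.0, 1.0])
tf = 3.0

def riccati(t, p):
    # dP/dt = -(A'P + PA - P B R^{-1} B' P + Q), terminal condition P(tf) = 0
    P = p.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)
    return dP.ravel()

# Integrate backward in time, from tf down to 0
sol = solve_ivp(riccati, (tf, 0.0), np.zeros(4), rtol=1e-10, atol=1e-10)
P0 = sol.y[:, -1].reshape(2, 2)
cost = 0.5 * x0 @ P0 @ x0
print(f"optimal cost for tf = {tf}: {cost:.6f}")  # should lie in (0, 1.5)
```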
````diff
@@ -77,9 +128,9 @@ nothing # hide
 We plot the state and control variables using normalized time $s = (t - t_0)/(t_f - t_0)$:
 
 ```@example main-lqr
-plt = plot(solutions[1], :state, :control; time=:normalize, label="tf = $(tfs[1])")
-for (tf, sol) ∈ zip(tfs[2:end], solutions[2:end])
-    plot!(plt, sol, :state, :control; time=:normalize, label="tf = $tf")
+plt = plot()
+for i ∈ eachindex(solutions)
+    plot!(plt, solutions[i], :state, :control; time=:normalize, label="tf = $(tfs[i])")
 end
 
 px1 = plot(plt[1], legend=false, xlabel="s", ylabel="x₁")
````
````diff
@@ -90,38 +141,4 @@ plot(px1, px2, pu, layout=(1, 3), size=(800, 300), leftmargin=5mm, bottommargin=
 
 !!! note "Nota bene"
 
-    We can observe that $x(t_f)$ converges to the origin as $t_f$ increases.
-
-## Known issues
-
-The following definition will lead to an error when solving the problem. This is a [known issue](@extref OptimalControl manual-abstract-known-issues).
-
-```@repl main-lqr
-
-x0 = [ 0
-       1 ]
-
-A = [ 0 1
-     -1 0 ]
-
-B = [ 0
-      1 ]
-
-Q = [ 1 0
-      0 1 ]
-
-R = 1
-
-tf = 3
-
-ocp = @def begin
-    t ∈ [0, tf], time
-    x ∈ R², state
-    u ∈ R, control
-    x(0) == x0
-    ẋ(t) == A * x(t) + B * u(t)
-    0.5∫( x(t)' * Q * x(t) + u(t)' * R * u(t) ) → min
-end
-
-solve(ocp)
-```
+    We can observe that $x(t_f)$ converges to the origin as $t_f$ increases. This illustrates a fundamental property of the LQR problem: as the horizon extends, the optimal solution approaches the steady-state infinite-horizon LQR regulator, which drives the state to the origin with minimal cost.
````
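The infinite-horizon limit invoked in the rewritten note can be verified numerically: the stationary regulator solves the continuous algebraic Riccati equation, and the feedback $u = -Kx$ makes the closed loop Hurwitz, which is why $x(t_f)$ approaches the origin for large $t_f$. An editorial Python sketch using SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Stationary (infinite-horizon) Riccati solution and LQR gain K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# Closed-loop matrix A - BK must be Hurwitz:
# all eigenvalues in the open left half-plane
eigs = np.linalg.eigvals(A - B @ K)
print(eigs.real)
```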
