# [NLP and DOCP manipulations](@id tutorial-nlp)

```@meta
CurrentModule = OptimalControl
```

We describe here some more advanced operations related to the discretized optimal control problem.
When calling `solve(ocp)`, three steps are performed internally:

- first, the OCP is discretized into a DOCP (a nonlinear optimization problem);
- then, the DOCP is solved with a nonlinear programming (NLP) solver, which returns a solution of the discretized problem;
- finally, a functional solution of the OCP is rebuilt from the solution of the discretized problem.

These steps can also be performed separately, for instance if you want to use your own NLP solver.

Let us load the packages:

```@example main-nlp
using OptimalControl
using Plots
```

We define a test problem:

```@example main-nlp
ocp = @def begin

    t ∈ [0, 1], time
    x ∈ R², state
    u ∈ R, control

    x(0) == [ -1, 0 ]
    x(1) == [ 0, 0 ]

    ẋ(t) == [ x₂(t), u(t) ]

    ∫( 0.5u(t)^2 ) → min

end
nothing # hide
```

## Discretization and NLP problem

We discretize the problem with [`direct_transcription`](@extref CTDirect.direct_transcription):

```@example main-nlp
docp = direct_transcription(ocp)
nothing # hide
```

and get the NLP model with [`nlp_model`](@extref CTDirect.nlp_model):

```@example main-nlp
nlp = nlp_model(docp)
nothing # hide
```

The DOCP contains information related to the transcription, including a copy of the original OCP, while the NLP is the resulting discretized nonlinear programming problem, in our case an `ADNLPModel`.

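Since the NLP is a standard model from the NLPModels.jl ecosystem, its dimensions and objective can be inspected with the generic NLPModels accessors. A quick sketch (the calls below are generic NLPModels functions, evaluated here at the default starting point stored in `nlp.meta.x0`):

```@example main-nlp
using NLPModels
n = nlp.meta.nvar        # number of variables of the discretized problem
m = nlp.meta.ncon        # number of constraints
obj(nlp, nlp.meta.x0)    # objective value at the default starting point
```
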
We can now use the solver of our choice to solve it.

## Resolution of the NLP problem

As a first example, we use the `ipopt` solver from the [NLPModelsIpopt.jl](https://jso.dev/NLPModelsIpopt.jl) package to solve the NLP problem:

```@example main-nlp
using NLPModelsIpopt
nlp_sol = ipopt(nlp; print_level=5, mu_strategy="adaptive", tol=1e-8, sb="yes")
nothing # hide
```

Then, we can build an optimal control problem solution with [`build_OCP_solution`](@extref CTDirect.build_OCP_solution-Tuple{Any}) and plot it. Note that the multipliers are optional, but the OCP costate cannot be retrieved if the multipliers are not provided.

```@example main-nlp
sol = build_OCP_solution(docp, nlp_sol)
plot(sol)
```

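When the costate is needed, the primal and dual NLP solutions can be passed explicitly. A sketch, assuming the keyword form of `build_OCP_solution` with `primal` and `dual` arguments, and the `solution` and `multipliers` fields of the solver statistics:

```@example main-nlp
sol_dual = build_OCP_solution(docp; primal=nlp_sol.solution, dual=nlp_sol.multipliers)
plot(sol_dual)
```
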
## Change the NLP solver

Alternatively, we can use [MadNLP.jl](https://madnlp.github.io/MadNLP.jl) to solve the NLP problem again:

```@example main-nlp
using MadNLP
nlp_sol = madnlp(nlp; print_level=MadNLP.ERROR)
```

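The statistics returned by MadNLP can then be turned into an OCP solution in the same way as before, assuming `build_OCP_solution` accepts them in the same positional form as the Ipopt output:

```@example main-nlp
sol_madnlp = build_OCP_solution(docp, nlp_sol)
plot(sol_madnlp)
```
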
## Initial guess

An initial guess, including a warm start, can be passed to [`direct_transcription`](@extref CTDirect.direct_transcription) in the same way as for `solve`:

```@example main-nlp
docp = direct_transcription(ocp; init=sol)
nothing # hide
```

It can also be changed after the transcription is done, with [`set_initial_guess`](@extref CTDirect.set_initial_guess):

```@example main-nlp
set_initial_guess(docp, sol)
nothing # hide
```
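
Once the initial guess is set, the warm-started problem can be solved again. A sketch reusing the Ipopt call from above on the updated model:

```@example main-nlp
nlp = nlp_model(docp)
nlp_sol = ipopt(nlp; print_level=5, mu_strategy="adaptive", tol=1e-8, sb="yes")
nothing # hide
```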