Commit b7e2927

Update ode.md
Method descriptions for the solvers added.
1 parent 99d06ef commit b7e2927

docs/src/optimization_packages/ode.md

Lines changed: 12 additions & 5 deletions
@@ -37,14 +37,21 @@ sol = solve(prob_manual, opt; maxiters=50_000)

## Available Optimizers

-* `ODEGradientDescent(dt=...)` — uses the explicit Euler method.
-* `RKChebyshevDescent()` — uses the ROCK2 method.
-* `RKAccelerated()` — uses the Tsit5 Runge-Kutta method.
-* `HighOrderDescent()` — uses the Vern7 high-order Runge-Kutta method.
+All provided optimizers are **gradient-based local optimizers** that solve optimization problems by integrating gradient-based ODEs to convergence:
+
+* `ODEGradientDescent(dt=...)` — performs basic gradient descent using the explicit Euler method. This is a simple and efficient method suitable for small-scale or well-conditioned problems.
+
+* `RKChebyshevDescent()` — uses the ROCK2 solver, a stabilized explicit Runge-Kutta method suitable for stiff problems. It allows larger step sizes while maintaining stability.
+
+* `RKAccelerated()` — leverages the Tsit5 method, a 5th-order Runge-Kutta solver that achieves faster convergence for smooth problems by improving integration accuracy.
+
+* `HighOrderDescent()` — applies Vern7, a high-order (7th-order) explicit Runge-Kutta method for even more accurate integration. This can be beneficial for problems requiring high precision.
+
+You can also define a custom optimizer using the generic `ODEOptimizer(solver; dt=nothing)` constructor by supplying any ODE solver supported by [OrdinaryDiffEq.jl](https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/).

## Interface Details

-All optimizers require gradient information (either via automatic differentiation or manually provided `grad!`).
+All optimizers require gradient information (either via automatic differentiation or manually provided `grad!`). The optimization is performed by integrating the ODE defined by the negative gradient until a steady state is reached.

### Keyword Arguments
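
For context, the snippet below sketches how the optimizers described in the diff above slot into the standard Optimization.jl workflow. It is a minimal sketch, not taken from the commit: the `OptimizationODE` package name, the `using ForwardDiff` line, and the Rosenbrock test problem are assumptions for illustration, while the optimizer constructors and the `maxiters` keyword come from the documentation text itself.

```julia
using Optimization, OptimizationODE  # OptimizationODE is the assumed home of these optimizers
using ForwardDiff                    # backend for automatic differentiation of the objective

# Illustrative objective: the classic Rosenbrock function.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0 = zeros(2)
p = [1.0, 100.0]

# The ODE-based optimizers need gradients; here they come from automatic differentiation.
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p)

# Integrate the negative-gradient ODE with the Tsit5-based optimizer toward a steady state.
sol = solve(prob, RKAccelerated(); maxiters = 50_000)
```

Swapping in `HighOrderDescent()` or `ODEGradientDescent(dt = 0.01)` (the step size here is purely illustrative) follows the same pattern.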
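
Likewise, a hedged sketch of the generic `ODEOptimizer(solver; dt=nothing)` constructor mentioned in the diff, continuing from the `prob` defined in the previous snippet; the choice of `Vern9` is purely illustrative.

```julia
using OrdinaryDiffEq  # supplies the ODE solvers that can be wrapped

# Wrap an arbitrary OrdinaryDiffEq solver as an optimizer (Vern9 chosen only for illustration).
custom_opt = ODEOptimizer(Vern9())
sol_custom = solve(prob, custom_opt; maxiters = 50_000)
```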
