Changed: store conversion matrices as SparseMatrixCSC #212

Merged: 8 commits merged on Jun 11, 2025
4 changes: 3 additions & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "ModelPredictiveControl"
uuid = "61f9bdb8-6ae4-484a-811f-bbf86720c31c"
authors = ["Francis Gagnon"]
version = "1.8.0"
version = "1.8.1"

[deps]
ControlSystemsBase = "aaaaaaaa-a6ca-5380-bf3e-84a91bcd477e"
@@ -16,6 +16,7 @@ PrecompileTools = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
ProgressLogging = "33c8b6b6-d38a-422a-b730-caa89a2f386c"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
RecipesBase = "3cdcf5f2-1ef4-517c-9805-6587b60abb01"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
SparseConnectivityTracer = "9f842d2f-2579-4b1d-911e-f412cf18a3f5"
SparseMatrixColorings = "0a514795-09f3-496d-8182-132a7b665d35"

@@ -36,6 +37,7 @@ PrecompileTools = "1"
ProgressLogging = "0.1"
Random = "1.10"
RecipesBase = "1"
SparseArrays = "1.10"
SparseConnectivityTracer = "0.6.13"
SparseMatrixColorings = "0.4.14"
TestItemRunner = "1"
2 changes: 1 addition & 1 deletion src/ModelPredictiveControl.jl
@@ -1,7 +1,7 @@
module ModelPredictiveControl

using PrecompileTools
using LinearAlgebra
using LinearAlgebra, SparseArrays
using Random: randn

using RecipesBase
18 changes: 9 additions & 9 deletions src/controller/construct.jl
@@ -122,10 +122,10 @@ struct ControllerConstraint{NT<:Real, GCfunc<:Union{Nothing, Function}}
x̂0min ::Vector{NT}
x̂0max ::Vector{NT}
# A matrices for the linear inequality constraints:
A_Umin ::Matrix{NT}
A_Umax ::Matrix{NT}
A_ΔŨmin ::Matrix{NT}
A_ΔŨmax ::Matrix{NT}
A_Umin ::SparseMatrixCSC{NT, Int}
A_Umax ::SparseMatrixCSC{NT, Int}
A_ΔŨmin ::SparseMatrixCSC{NT, Int}
A_ΔŨmax ::SparseMatrixCSC{NT, Int}
A_Ymin ::Matrix{NT}
A_Ymax ::Matrix{NT}
A_x̂min ::Matrix{NT}
@@ -675,7 +675,7 @@ constraints:
in which the ``\mathbf{U_{min}}`` and ``\mathbf{U_{max}}`` vectors respectively contain
``\mathbf{u_{min}}`` and ``\mathbf{u_{max}}`` repeated ``H_p`` times.
"""
function relaxU(Pu::Matrix{NT}, C_umin, C_umax, nϵ) where NT<:Real
function relaxU(Pu::AbstractMatrix{NT}, C_umin, C_umax, nϵ) where NT<:Real
if nϵ == 1 # Z̃ = [Z; ϵ]
# ϵ impacts Z → U conversion for constraint calculations:
A_Umin, A_Umax = -[Pu C_umin], [Pu -C_umax]
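Incidentally, Julia's `hcat` promotes a mixed sparse/dense concatenation to sparse, which is why the augmented `A` matrices stay `SparseMatrixCSC` after appending the slack column. A minimal standalone sketch of the same concatenation pattern (toy dimensions, not the package's actual call sites):

```julia
using SparseArrays, LinearAlgebra

# Toy dimensions: nu*Hp = 4 constraint rows, nu*Hc = 3 decision variables.
Pu     = sparse(tril(ones(4, 3)))  # stand-in for the sparse Z → U conversion matrix
C_umin = ones(4)                   # softening columns for the slack ϵ
C_umax = ones(4)

# Same pattern as the nϵ == 1 branch above; results are sparse with one ϵ column:
A_Umin, A_Umax = -[Pu C_umin], [Pu -C_umax]
```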
@@ -717,7 +717,7 @@ bound, which is more precise than a linear inequality constraint. However, it is
convenient to treat it as a linear inequality constraint since the optimizer `OSQP.jl` does
not support pure bounds on the decision variables.
"""
function relaxΔU(PΔu::Matrix{NT}, C_Δumin, C_Δumax, ΔUmin, ΔUmax, nϵ) where NT<:Real
function relaxΔU(PΔu::AbstractMatrix{NT}, C_Δumin, C_Δumax, ΔUmin, ΔUmax, nϵ) where NT<:Real
nZ = size(PΔu, 2)
if nϵ == 1 # Z̃ = [Z; ϵ]
ΔŨmin, ΔŨmax = [ΔUmin; NT[0.0]], [ΔUmax; NT[Inf]] # 0 ≤ ϵ ≤ ∞
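The bound augmentation in the `nϵ == 1` branch can be illustrated standalone (toy vectors, with `NT` fixed to `Float64`; the values are made up for illustration):

```julia
NT = Float64
# Toy input-increment bounds for a 2-input system over one move:
ΔUmin = NT[-1.0, -0.5]
ΔUmax = NT[ 1.0,  0.5]

# Same augmentation as above: append the slack bounds 0 ≤ ϵ ≤ ∞
ΔŨmin, ΔŨmax = [ΔUmin; NT[0.0]], [ΔUmax; NT[Inf]]
```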
@@ -754,7 +754,7 @@ Denoting the decision variables augmented with the slack variable
in which the ``\mathbf{Y_{min}, Y_{max}}`` and ``\mathbf{Y_{op}}`` vectors respectively contain
``\mathbf{y_{min}, y_{max}}`` and ``\mathbf{y_{op}}`` repeated ``H_p`` times.
"""
function relaxŶ(E::Matrix{NT}, C_ymin, C_ymax, nϵ) where NT<:Real
function relaxŶ(E::AbstractMatrix{NT}, C_ymin, C_ymax, nϵ) where NT<:Real
if nϵ == 1 # Z̃ = [Z; ϵ]
if iszero(size(E, 1))
# model is not a LinModel, thus Ŷ constraints are not linear:
@@ -792,7 +792,7 @@ the inequality constraints:
\end{bmatrix}
```
"""
function relaxterminal(ex̂::Matrix{NT}, c_x̂min, c_x̂max, nϵ) where {NT<:Real}
function relaxterminal(ex̂::AbstractMatrix{NT}, c_x̂min, c_x̂max, nϵ) where {NT<:Real}
if nϵ == 1 # Z̃ = [Z; ϵ]
if iszero(size(ex̂, 1))
# model is not a LinModel and transcription is a SingleShooting, thus terminal
@@ -821,7 +821,7 @@ It returns the ``\mathbf{Ẽŝ}`` matrix that appears in the defect equation
\mathbf{A_ŝ Z̃} = - \mathbf{F_ŝ}
```
"""
function augmentdefect(Eŝ::Matrix{NT}, nϵ) where NT<:Real
function augmentdefect(Eŝ::AbstractMatrix{NT}, nϵ) where NT<:Real
if nϵ == 1 # Z̃ = [Z; ϵ]
Ẽŝ = [Eŝ zeros(NT, size(Eŝ, 1), 1)]
else # Z̃ = Z (only hard constraints)
6 changes: 3 additions & 3 deletions src/controller/explicitmpc.jl
@@ -15,9 +15,9 @@ struct ExplicitMPC{
R̂u::Vector{NT}
R̂y::Vector{NT}
lastu0::Vector{NT}
P̃Δu::Matrix{NT}
P̃u ::Matrix{NT}
Tu ::Matrix{NT}
P̃Δu::SparseMatrixCSC{NT, Int}
P̃u ::SparseMatrixCSC{NT, Int}
Tu ::SparseMatrixCSC{NT, Int}
Tu_lastu0::Vector{NT}
Ẽ::Matrix{NT}
F::Vector{NT}
6 changes: 3 additions & 3 deletions src/controller/linmpc.jl
@@ -24,9 +24,9 @@ struct LinMPC{
R̂u::Vector{NT}
R̂y::Vector{NT}
lastu0::Vector{NT}
P̃Δu::Matrix{NT}
P̃u ::Matrix{NT}
Tu ::Matrix{NT}
P̃Δu::SparseMatrixCSC{NT, Int}
P̃u ::SparseMatrixCSC{NT, Int}
Tu ::SparseMatrixCSC{NT, Int}
Tu_lastu0::Vector{NT}
Ẽ::Matrix{NT}
F::Vector{NT}
Expand Down
8 changes: 4 additions & 4 deletions src/controller/nonlinmpc.jl
@@ -40,9 +40,9 @@ struct NonLinMPC{
R̂u::Vector{NT}
R̂y::Vector{NT}
lastu0::Vector{NT}
P̃Δu::Matrix{NT}
P̃u ::Matrix{NT}
Tu ::Matrix{NT}
P̃Δu::SparseMatrixCSC{NT, Int}
P̃u ::SparseMatrixCSC{NT, Int}
Tu ::SparseMatrixCSC{NT, Int}
Tu_lastu0::Vector{NT}
Ẽ::Matrix{NT}
F::Vector{NT}
@@ -550,7 +550,7 @@ This method is really intricate and I'm not proud of it. That's because of 3 ele
and as efficient as possible. All the function outputs and derivatives are cached and
updated in-place if required to use the efficient [`value_and_jacobian!`](@extref DifferentiationInterface DifferentiationInterface.value_and_jacobian!).
- The `JuMP` NLP syntax forces splatting for the decision variable, which implies use
of `Vararg{T,N}` (see the [performance tip][@extref Julia Be-aware-of-when-Julia-avoids-specializing]
of `Vararg{T,N}` (see the [performance tip](@extref Julia Be-aware-of-when-Julia-avoids-specializing)
) and memoization to avoid redundant computations. This is already complex, but it's even
worse knowing that most automatic differentiation tools do not support splatting.
- The signature of gradient and hessian functions is not the same for univariate (`nZ̃ == 1`)
24 changes: 14 additions & 10 deletions src/controller/transcription.jl
@@ -81,21 +81,22 @@ in which ``\mathbf{P_{Δu}}`` is defined in the Extended Help section.
- ``\mathbf{P_{Δu}} = \mathbf{I}`` if `transcription` is a [`SingleShooting`](@ref)
- ``\mathbf{P_{Δu}} = [\begin{smallmatrix}\mathbf{I} & \mathbf{0} \end{smallmatrix}]``
if `transcription` is a [`MultipleShooting`](@ref)
The matrix is stored as a `SparseMatrixCSC` to support both cases efficiently.
"""
function init_ZtoΔU end

function init_ZtoΔU(
estim::StateEstimator{NT}, ::SingleShooting, _ , Hc
) where {NT<:Real}
PΔu = Matrix{NT}(I, estim.model.nu*Hc, estim.model.nu*Hc)
PΔu = sparse(Matrix{NT}(I, estim.model.nu*Hc, estim.model.nu*Hc))
return PΔu
end

function init_ZtoΔU(
estim::StateEstimator{NT}, ::MultipleShooting, Hp, Hc
) where {NT<:Real}
I_nu_Hc = Matrix{NT}(I, estim.model.nu*Hc, estim.model.nu*Hc)
PΔu = [I_nu_Hc zeros(NT, estim.model.nu*Hc, estim.nx̂*Hp)]
I_nu_Hc = sparse(Matrix{NT}(I, estim.model.nu*Hc, estim.model.nu*Hc))
PΔu = [I_nu_Hc spzeros(NT, estim.model.nu*Hc, estim.nx̂*Hp)]
return PΔu
end
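The two `PΔu` forms can be sketched side by side (illustrative dimensions only; `nu`, `Hc`, `Hp`, and `nx̂` here are made-up values, not the estimator's actual fields):

```julia
using SparseArrays, LinearAlgebra

nu, Hc, Hp, nx̂ = 2, 3, 5, 4

# SingleShooting: Z = ΔU, so PΔu is a sparse identity:
PΔu_ss = sparse(Matrix{Float64}(I, nu*Hc, nu*Hc))

# MultipleShooting: Z also stacks the predicted states, so pad with sparse zeros:
I_nu_Hc = sparse(Matrix{Float64}(I, nu*Hc, nu*Hc))
PΔu_ms  = [I_nu_Hc spzeros(Float64, nu*Hc, nx̂*Hp)]
```

Both cases share the same `SparseMatrixCSC` type, so downstream code needs no branching on the transcription method.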

@@ -144,30 +145,33 @@ The ``\mathbf{P_u}`` and ``\mathbf{T_u}`` matrices are defined in the Extended H
- ``\mathbf{P_u} = \mathbf{P_u^†}`` if `transcription` is a [`SingleShooting`](@ref)
- ``\mathbf{P_u} = [\begin{smallmatrix}\mathbf{P_u^†} & \mathbf{0} \end{smallmatrix}]``
if `transcription` is a [`MultipleShooting`](@ref)
The conversion matrices are stored as `SparseMatrixCSC` since benchmarks showed this is
generally faster than dense matrices, even for small `nu`, `Hp`, and `Hc` values. Using a
`Bool` element type or a `BitMatrix` is also slower.
"""
function init_ZtoU(
estim::StateEstimator{NT}, transcription::TranscriptionMethod, Hp, Hc, nb
) where {NT<:Real}
nu = estim.model.nu
# Pu and Tu are `Matrix{NT}`, conversion is faster than `Matrix{Bool}` or `BitMatrix`
I_nu = Matrix{NT}(I, nu, nu)
I_nu = sparse(Matrix{NT}(I, nu, nu))
PuDagger = Matrix{NT}(undef, nu*Hp, nu*Hc)
for i=1:Hc
ni = nb[i]
Q_ni = repeat(I_nu, ni, 1)
iRows = (1:nu*ni) .+ @views nu*sum(nb[1:i-1])
PuDagger[iRows, :] = [repeat(Q_ni, 1, i) zeros(nu*ni, nu*(Hc-i))]
PuDagger[iRows, :] = [repeat(Q_ni, 1, i) spzeros(nu*ni, nu*(Hc-i))]
end
PuDagger = sparse(PuDagger)
Pu = init_PUmat(estim, transcription, Hp, Hc, PuDagger)
Tu = repeat(I_nu, Hp)
return Pu, Tu
end
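For intuition, here is a simplified sketch of the `Pu†` accumulation structure, assuming a single input and the move-blocking vector `nb = [1, 1, 2]` (last move held over the remaining horizon). This is an illustration of the pattern only, not the package's general `nb` handling:

```julia
using SparseArrays, LinearAlgebra

nu, Hp, Hc = 1, 4, 3

I_nu = sparse(Matrix{Float64}(I, nu, nu))
PuDagger = Matrix{Float64}(undef, nu*Hp, nu*Hc)
for i = 1:Hc
    ni = (i < Hc) ? 1 : Hp - Hc + 1          # last block repeated until Hp
    Q_ni = repeat(Matrix(I_nu), ni, 1)
    iRows = (1:nu*ni) .+ nu*(i - 1)
    # Row block i sums the first i input increments:
    PuDagger[iRows, :] = [repeat(Q_ni, 1, i) zeros(nu*ni, nu*(Hc - i))]
end
PuDagger = sparse(PuDagger)                  # lower block-triangular, mostly zeros
Tu = repeat(I_nu, Hp)                        # maps lastu0 to all Hp instants
```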



init_PUmat( _ , ::SingleShooting, _ , _ , PuDagger) = PuDagger
function init_PUmat(estim, ::MultipleShooting, Hp, _ , PuDagger)
return [PuDagger zeros(eltype(PuDagger), estim.model.nu*Hp, estim.nx̂*Hp)]
function init_PUmat(
estim, ::MultipleShooting, Hp, _ , PuDagger::AbstractMatrix{NT}
) where NT<:Real
return [PuDagger spzeros(NT, estim.model.nu*Hp, estim.nx̂*Hp)]
end

@doc raw"""
6 changes: 6 additions & 0 deletions src/estimator/kalman.jl
@@ -158,6 +158,12 @@ SteadyKalmanFilter estimator with a sample time Ts = 0.5 s, LinModel and:
!!! tip
Increasing `σQint_u` and `σQint_ym` values increases the integral action "gain".

A custom stochastic model for the unmeasured disturbances (different from integrated white
Gaussian noise) can be specified by constructing a [`LinModel`](@ref) object with the
augmented state-space matrices directly, and by setting `nint_u=0` and `nint_ym=0`. See
[Disturbance-gallery](@extref LowLevelParticleFilters) for examples of other
disturbance models.

The constructor pre-computes the steady-state Kalman gain `K̂` with the [`kalman`](@extref ControlSystemsBase.kalman)
function. It can sometimes fail, for example when `model` matrices are ill-conditioned.
In such a case, you can try the alternative time-varying [`KalmanFilter`](@ref).
2 changes: 1 addition & 1 deletion src/estimator/manual.jl
@@ -130,7 +130,7 @@ ManualEstimator estimator with a sample time Ts = 0.5 s, LinModel and:
A custom stochastic model for the unmeasured disturbances (other than integrated white
noise) can be specified by constructing a [`SimModel`](@ref) object with the augmented
state-space matrices/functions directly, and by setting `nint_u=0` and `nint_ym=0`. See
[`Disturbance-gallery`](@extref LowLevelParticleFilters) for examples of other
[Disturbance-gallery](@extref LowLevelParticleFilters) for examples of other
disturbance models.
"""
function ManualEstimator(