
Changed: store conversion matrices as SparseMatrixCSC #212


Merged
merged 8 commits into from
Jun 11, 2025

Conversation

franckgaga
Member

@franckgaga franckgaga commented Jun 11, 2025

Following discussion at adrhill/SparseConnectivityTracer.jl#248, the conversion matrices from the decision variable $\mathbf{Z}$ to the inputs $\mathbf{U}$ and input increments $\mathbf{\Delta U}$ are now hard-coded as SparseMatrixCSC{NT, Int}, since the benchmarks indicated that this is not meaningfully slower than dense matrices on small plant models. It also ensures that sparsity detection for the differentiation of NonLinMPC with MultipleShooting is not overly conservative. Side bonus: it is more memory-efficient.

I reproduce the benchmark here for posterity:

using LinearAlgebra, SparseArrays, BenchmarkTools
# Build the dense nu*Hp × nu*Hc input conversion matrix Pu (block lower-triangular):
function get_Pu(nu, Hp, Hc)
    println("--- Constructing Pu with nu=$nu, Hp=$Hp and Hc=$Hc ---")
    nb = fill(1, Hc)
    nb[end] = Hp - Hc + 1
    I_nu = Matrix{Float64}(I, nu, nu)
    Pu = Matrix{Float64}(undef, nu*Hp, nu*Hc)
    for i=1:Hc
        ni    = nb[i]
        Q_ni  = repeat(I_nu, ni, 1)
        iRows = (1:nu*ni) .+ @views nu*sum(nb[1:i-1])
        Pu[iRows, :] = [repeat(Q_ni, 1, i) zeros(nu*ni, nu*(Hc-i))]
    end
    println("nonzero percentage:     $(100*count(!iszero, Pu) / length(Pu))%")
    return Pu
end
# Compare dense vs. sparse matrix-vector products and check the results agree:
function bench(Pu, x)
    Pu_sparse = sparse(Pu)
    y        = zeros(size(Pu, 1))
    y_sparse = zeros(size(Pu_sparse, 1))
    print("dense multiplication: ");  @btime mul!($y, $Pu, $x)
    print("sparse multiplication:"); @btime mul!($y_sparse, $Pu_sparse, $x)
    mul!(y, Pu, x)
    mul!(y_sparse, Pu_sparse, x)
    @show y ≈ y_sparse
end
Pu = get_Pu(2, 5, 2); x = rand(size(Pu, 2))
bench(Pu, x)
Pu = get_Pu(2, 50, 50); x = rand(size(Pu, 2))
bench(Pu, x)
Pu = get_Pu(5, 10, 5); x = rand(size(Pu, 2))
bench(Pu, x)
Pu = get_Pu(10, 50, 50); x = rand(size(Pu, 2))
bench(Pu, x)
nothing

resulting in:

--- Constructing Pu with nu=2, Hp=5 and Hc=2 ---
nonzero percentage:     45.0%
dense multiplication:   27.871 ns (0 allocations: 0 bytes)
sparse multiplication:  12.666 ns (0 allocations: 0 bytes)
y ≈ y_sparse = true
--- Constructing Pu with nu=2, Hp=50 and Hc=50 ---
nonzero percentage:     25.5%
dense multiplication:   517.589 ns (0 allocations: 0 bytes)
sparse multiplication:  797.740 ns (0 allocations: 0 bytes)
y ≈ y_sparse = true
--- Constructing Pu with nu=5, Hp=10 and Hc=5 ---
nonzero percentage:     16.0%
dense multiplication:   105.058 ns (0 allocations: 0 bytes)
sparse multiplication:  75.424 ns (0 allocations: 0 bytes)
y ≈ y_sparse = true
--- Constructing Pu with nu=10, Hp=50 and Hc=50 ---
nonzero percentage:     5.1%
dense multiplication:   16.160 μs (0 allocations: 0 bytes)
sparse multiplication:  4.457 μs (0 allocations: 0 bytes)
y ≈ y_sparse = true

The only case in which sparse multiplication is slightly slower is nu=2, Hp=50 and Hc=50, which I would argue is a bad tuning anyway (the control period Ts should be increased to reduce the horizons and the computational burden).

Note that this benchmark invalidates my assumption in #202:

It is very unlikely that any special sparse type will outperform computations with a dense matrix (a matrix product).

Moral: from now on I will rely more on sparse matrices when they contain a fair number of zeros.
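As a minimal sketch of what the change buys (the block pattern below is illustrative only, not the exact Pu structure), storing such a matrix as SparseMatrixCSC keeps only the nonzero coefficients, so memory scales with nnz rather than with the full matrix length:

```julia
using LinearAlgebra, SparseArrays

# Toy block lower-triangular pattern, similar in spirit to Pu (illustrative only):
pattern   = [1 0; 1 0; 1 1; 1 1; 1 1]
Pu_dense  = kron(pattern, Matrix{Float64}(I, 2, 2))  # 10×4 dense matrix
Pu_sparse = sparse(Pu_dense)                         # SparseMatrixCSC{Float64, Int}

# CSC storage holds only the stored (nonzero) entries:
@show nnz(Pu_sparse) length(Pu_dense)
Pu_sparse * ones(4) ≈ Pu_dense * ones(4)             # products agree
```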

@franckgaga franckgaga changed the title Changed: store conversion matrices as `SparseMatrixCSC Changed: store conversion matrices as SparseMatrixCSC Jun 11, 2025
@codecov-commenter

codecov-commenter commented Jun 11, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.92%. Comparing base (537591d) to head (760ecbd).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #212   +/-   ##
=======================================
  Coverage   98.92%   98.92%           
=======================================
  Files          26       26           
  Lines        4285     4286    +1     
=======================================
+ Hits         4239     4240    +1     
  Misses         46       46           


@franckgaga
Member Author

franckgaga commented Jun 11, 2025

I also stored A_Umin, A_Umax, A_ΔŨmin and A_ΔŨmax as SparseMatrixCSC{NT, Int}, since they are almost identical to the conversion matrices.

@franckgaga franckgaga changed the title Changed: store conversion matrices as SparseMatrixCSC Changed: store conversion matrices and defect matrices as SparseMatrixCSC Jun 11, 2025
@franckgaga
Member Author

franckgaga commented Jun 11, 2025

On this subject, the prediction matrices and defect matrices for LinModel are quite sparse, and it is highly probable that computations with SparseMatrixCSC would be faster.

It is still a bad idea to store them as SparseMatrixCSC, since setmodel! allows online modification of a LinModel inside an MPC object. If e.g. model.A contains some "accidental" zero coefficients (which is common), it would be impossible to change these values later through setmodel!, since sparsity detection would fix them as "structural" zeros.
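A minimal sketch of the pitfall (hypothetical model.A values, and assuming the online update writes through the stored sparsity pattern in place, as a pattern-preserving update would):

```julia
using SparseArrays

# Hypothetical state matrix with an "accidental" zero coefficient at (1, 2):
A = [0.5 0.0; 0.2 0.9]
A_sparse = sparse(A)   # the (1, 2) zero is dropped and becomes a structural zero

# An in-place update that preserves the sparsity pattern can only touch the
# stored entries, so the (1, 2) coefficient can never be changed afterwards:
nonzeros(A_sparse) .= [0.6, 0.3, 0.8]   # 3 stored entries, not 4
A_sparse[1, 2]                          # still 0.0, frozen as a structural zero
```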

That is why I reverted my last commit.

@franckgaga franckgaga changed the title Changed: store conversion matrices and defect matrices as SparseMatrixCSC Changed: store conversion matrices as SparseMatrixCSC Jun 11, 2025
@franckgaga franckgaga merged commit 20a5ed0 into main Jun 11, 2025
4 checks passed
@franckgaga franckgaga deleted the sparse_conv_mat branch June 11, 2025 22:09