[WIP] MatrixAlgebraKit decompositions #230

Draft · wants to merge 14 commits into master

Conversation

@lkdvos (Collaborator) commented Mar 16, 2025

This is some preliminary code to start getting a feeling for how we could implement the MatrixAlgebraKit functions for tensors.

Looping over blocks

The first thing I did was to try and generalize the "looping over blocks" concept, with an eye towards parallelizing it in the near future. I can come up with several different designs, which are all more or less equivalent but look slightly different. The one I implemented here is centered around foreach, with a wink towards OhMyThreads.tforeach so that schedulers can be included later:

foreachblock(f, t, ts...)
# equivalent to
for (c, b) in blocks(t)
    f(c, (b, block.(ts, Ref(c))...))
end
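
For concreteness, something like the following minimal serial sketch is what I have in mind; none of this is final API, and the `scheduler` keyword is only a placeholder for where a threaded backend such as OhMyThreads.tforeach would hook in:

```julia
# minimal serial sketch of the proposed `foreachblock`; `scheduler` is only a
# placeholder for where a threaded backend (e.g. OhMyThreads.tforeach) could plug in
function foreachblock(f, t, ts...; scheduler = nothing)
    for (c, b) in blocks(t)
        # gather the block of every additional tensor for the same sector `c`
        f(c, (b, map(x -> block(x, c), ts)...))
    end
    return nothing
end
```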

There are several design questions here:

  1. Is it fair to loop over the block charges of the first tensor? This is often fine if that is an "output" tensor, but it might not cover all cases: maybe you want to loop over the union of all sectors, or the intersection? (See the sketch after this list.)
  2. Do we want f(c, bs...) or f(c, bs)?
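
To illustrate the first question, a variant that visits the union of all block sectors could look like the sketch below; `blocksectors` and `block` are the existing TensorKit accessors, but the variant's name and its behaviour for sectors missing from some arguments are assumptions on my part:

```julia
# hypothetical variant that visits every sector appearing in any of the arguments;
# this assumes `block(x, c)` returns a (possibly empty) block when `c` does not
# occur in `x`
function foreachblock_union(f, t, ts...)
    for c in union(blocksectors(t), blocksectors.(ts)...)
        f(c, (block(t, c), map(x -> block(x, c), ts)...))
    end
    return nothing
end
```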

An alternative I can come up with is to generalize blocks(t) to blocks(t1, t2, ...) -> (c => (b1, b2, ...))..., but that does not encapsulate the threading options.
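
For comparison, that alternative would essentially be a joint iterator, sketched here as a standalone helper rather than an actual new method of `blocks`:

```julia
# sketch of the joint-iterator alternative, written as a separate helper so as not
# to commit to extending `blocks` itself; iterates over the sectors of `t1`
zipblocks(t1, ts...) = (c => (b, map(x -> block(x, c), ts)...) for (c, b) in blocks(t1))
```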

Should we also include a mapblocks!(f, t, ts...) and/or mapreduceblocks(f, op, t, ts...)?
If we also want mapblocks(f, t, ts...), should the output be based on all arguments, or just the first?
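
To make that concrete, mapblocks! could be little more than a thin layer over foreachblock; again just a sketch, reusing the hypothetical foreachblock from above:

```julia
# sketch: `mapblocks!` as a thin wrapper over the hypothetical `foreachblock`,
# writing the result of `f` applied to the source blocks into the blocks of `tdst`
function mapblocks!(f, tdst, ts...)
    foreachblock(tdst, ts...) do c, bs
        # bs[1] is the destination block, the remaining entries are the source blocks
        copyto!(bs[1], f(Base.tail(bs)...))
    end
    return tdst
end
```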

Implementing decompositions

Then, in order to define the decompositions, there are again some choices to be made.
The first is whether we want to define the MatrixAlgebraKit functions themselves for AbstractTensorMap, or only our own wrappers that dispatch to these implementations. Here, I chose to go ahead and implement the functions directly.

Secondly, the current implementation preallocates the entire output first, and then applies the relevant functions blockwise. This is particularly useful for multithreading and for maximally reusing memory, but it might become a bit more involved in the cases where we do not know the correct output sizes a priori.
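
As a rough illustration of that pattern, everything named here is hypothetical: `allocate_svd_output` stands in for whatever computes the correct output spaces, and the block-level call assumes MatrixAlgebraKit's in-place svd_compact! with an `(A, (U, S, Vᴴ))`-style signature:

```julia
# sketch of the preallocate-then-fill pattern for a blockwise SVD;
# `allocate_svd_output` is a hypothetical helper, and the `svd_compact!` call
# assumes MatrixAlgebraKit's in-place form acting on a plain matrix block
function blockwise_svd!(t)
    U, S, Vᴴ = allocate_svd_output(t)      # hypothetical: builds tensors with the right spaces
    foreachblock(t, U, S, Vᴴ) do c, (bt, bU, bS, bVᴴ)
        svd_compact!(bt, (bU, bS, bVᴴ))    # assumed MatrixAlgebraKit signature
    end
    return U, S, Vᴴ
end
```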

As a side note, we are currently not very consistent with the arrows of the factorizations. For example, leftorth(t) -> Q, R might have a different duality for the connecting space depending on the number of indices of t, which can be a bit annoying when working with fermions.

Thirdly, it would be convenient to have the default_eig_alg function also work in the type domain, so that it can be defined for a tensor without having to instantiate a block.
Similarly or additionally, a DefaultAlgorithm type to indicate that the selection is deferred until later could be useful.
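
One possible shape for that, purely illustrative and assuming the type-domain default_eig_alg proposed above:

```julia
# purely illustrative: a placeholder that defers algorithm selection until a
# concrete block (or block eltype) is known
struct DefaultAlgorithm end

# hypothetical hook: resolve the placeholder once the block type is available,
# assuming a type-domain method of `default_eig_alg` as proposed above
resolve_alg(::DefaultAlgorithm, ::Type{A}) where {A<:AbstractMatrix} = default_eig_alg(A)
resolve_alg(alg, ::Type) = alg   # anything else is passed through unchanged
```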


Let me know what you think about some of this; I'll try to add some more functionality this week if we start converging on some of these design questions :)

@lkdvos lkdvos requested a review from Jutho March 16, 2025 20:19
@lkdvos lkdvos marked this pull request as draft March 16, 2025 20:19

codecov bot commented Mar 16, 2025

Codecov Report

Attention: Patch coverage is 64.34109% with 46 lines in your changes missing coverage. Please review.

Project coverage is 73.28%. Comparing base (daf2f53) to head (40d12d8).

Files with missing lines           Patch %   Missing lines
src/tensors/matrixalgebrakit.jl    67.61%    34 ⚠️
src/tensors/backends.jl            25.00%     6 ⚠️
src/tensors/diagonal.jl             0.00%     6 ⚠️

❗ There is a different number of reports uploaded between BASE (daf2f53) and HEAD (40d12d8): HEAD has 4 uploads fewer than BASE (2 vs 6).
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #230      +/-   ##
==========================================
- Coverage   82.63%   73.28%   -9.35%     
==========================================
  Files          43       45       +2     
  Lines        5557     5649      +92     
==========================================
- Hits         4592     4140     -452     
- Misses        965     1509     +544     

