[MRG] Bures-Wasserstein Gradient Descent for Bures-Wasserstein Barycenters #680

Merged on Mar 12, 2025 (37 commits)

Commits
- `7eb14d2` bw barycenter with batched sqrtm (clbonet, Oct 17, 2024)
- `869955c` BWGD for barycenters (clbonet, Oct 19, 2024)
- `be985d1` sbwgd for barycenters (clbonet, Oct 19, 2024)
- `9a43369` Test fixed_point vs gradient_descent (clbonet, Oct 19, 2024)
- `016704b` Merge branch 'master' into bwgd_barycenter (cedricvincentcuaz, Oct 23, 2024)
- `8afc00b` fix test bwgd (clbonet, Oct 25, 2024)
- `b2b0bca` nx exp_bures (clbonet, Oct 25, 2024)
- `d287a2a` update doc (clbonet, Oct 25, 2024)
- `9377405` Merge branch 'master' into bwgd_barycenter (clbonet, Oct 31, 2024)
- `4f648bb` fix merge (clbonet, Oct 31, 2024)
- `b821ee8` doc exp bw (clbonet, Oct 31, 2024)
- `d22028b` First tests stochastic + exp (clbonet, Nov 5, 2024)
- `dffa0cf` exp_bures with einsum (clbonet, Nov 5, 2024)
- `f3e911a` type Id test (clbonet, Nov 6, 2024)
- `97f2261` up test stochastic (clbonet, Nov 7, 2024)
- `7594393` test weights (clbonet, Nov 7, 2024)
- `6c48b3c` Add BW distance with batchs (clbonet, Nov 7, 2024)
- `ba806ff` step size SGD BW Barycenter (clbonet, Nov 11, 2024)
- `7ab365a` Merge branch 'master' into bwgd_barycenter (rflamary, Nov 12, 2024)
- `447a1a6` Merge branch 'master' into bwgd_barycenter (rflamary, Nov 19, 2024)
- `d4045f1` batchable BW distance (clbonet, Nov 19, 2024)
- `f669a8e` Merge branch 'master' into bwgd_barycenter (cedricvincentcuaz, Dec 1, 2024)
- `6c0a2a0` Merge branch 'master' into bwgd_barycenter (rflamary, Dec 17, 2024)
- `5da317f` Merge branch 'master' into bwgd_barycenter (cedricvincentcuaz, Jan 13, 2025)
- `2b317e2` Merge branch 'master' into bwgd_barycenter (clbonet, Jan 27, 2025)
- `50994ed` RELEASES.md (clbonet, Feb 12, 2025)
- `bad385f` precommit (clbonet, Feb 12, 2025)
- `0b20759` Add ot.gaussian.bures (clbonet, Feb 12, 2025)
- `fe3d9db` Add arg backend (clbonet, Feb 13, 2025)
- `506a524` up stop criteria sgd Gaussian barycenter (clbonet, Feb 16, 2025)
- `c640ecb` Fix release (clbonet, Feb 16, 2025)
- `41ebffc` fix doc (clbonet, Feb 17, 2025)
- `3a7af81` Merge branch 'master' into bwgd_barycenter (rflamary, Mar 11, 2025)
- `3a7effc` change API bw (clbonet, Mar 11, 2025)
- `f41b093` up test bures_wasserstein_distance (clbonet, Mar 11, 2025)
- `0a9d499` up test bures_wasserstein_distance (clbonet, Mar 11, 2025)
- `1444648` up test bures_wasserstein_distance (clbonet, Mar 11, 2025)
4 changes: 4 additions & 0 deletions README.md
@@ -390,3 +390,7 @@ Artificial Intelligence.
[72] Thibault Séjourné, François-Xavier Vialard, and Gabriel Peyré (2021). [The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation](https://proceedings.neurips.cc/paper/2021/file/4990974d150d0de5e6e15a1454fe6b0f-Paper.pdf). Neural Information Processing Systems (NeurIPS).

[73] Séjourné, T., Vialard, F. X., & Peyré, G. (2022). [Faster Unbalanced Optimal Transport: Translation Invariant Sinkhorn and 1-D Frank-Wolfe](https://proceedings.mlr.press/v151/sejourne22a.html). In International Conference on Artificial Intelligence and Statistics (pp. 4995-5021). PMLR.

[74] Chewi, S., Maunu, T., Rigollet, P., & Stromme, A. J. (2020). [Gradient descent algorithms for Bures-Wasserstein barycenters](https://proceedings.mlr.press/v125/chewi20a.html). In Conference on Learning Theory (pp. 1276-1304). PMLR.

[75] Altschuler, J., Chewi, S., Gerber, P. R., & Stromme, A. (2021). [Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent](https://papers.neurips.cc/paper_files/paper/2021/hash/b9acb4ae6121c941324b2b1d3fac5c30-Abstract.html). Advances in Neural Information Processing Systems, 34, 22132-22145.
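
For context, the update rule analyzed in [74] and [75] can be summarized as follows (a recap based on those references, not text from the PR; the implementation in this PR may differ in details). Given covariances $\Sigma_1,\dots,\Sigma_K$ with weights $w_k$, the optimal transport map from $\mathcal{N}(0,\Sigma)$ to $\mathcal{N}(0,\Sigma_k)$ is

$$T_k(\Sigma) = \Sigma^{-1/2}\big(\Sigma^{1/2}\Sigma_k\Sigma^{1/2}\big)^{1/2}\Sigma^{-1/2},$$

and one Bures-Wasserstein gradient step with step size $\eta$ updates the covariance iterate as

$$\Sigma_{t+1} = \big(I + \eta(\bar T_t - I)\big)\,\Sigma_t\,\big(I + \eta(\bar T_t - I)\big), \qquad \bar T_t = \sum_{k=1}^{K} w_k\, T_k(\Sigma_t).$$

For $\eta = 1$ this recovers the classical fixed-point iteration $\Sigma_{t+1} = \bar T_t\,\Sigma_t\,\bar T_t$; roughly speaking, the stochastic variant of [74] replaces $\bar T_t$ by the map toward a single randomly drawn $\Sigma_k$ with a decreasing step size.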
5 changes: 4 additions & 1 deletion RELEASES.md
@@ -9,6 +9,10 @@
- Reorganize sub-module `ot/lp/__init__.py` into separate files (PR #714)
- Implement projected gradient descent solvers for entropic partial FGW (PR #702)
- Fix documentation in the module `ot.gaussian` (PR #718)
- Refactored `ot.bregman._convolutional` to improve readability (PR #709)
Collaborator: I don't see that in the PR.

Contributor (author): Hmm, I think I made a mistake when merging with master at some point. (The entry was deleted from line 46 of RELEASES.md, and it seemed to be in the wrong release section of POT.)

- Added `ot.gaussian.bures_barycenter_gradient_descent` (PR #680)
- Added `ot.gaussian.bures_wasserstein_distance` (PR #680)
- `ot.gaussian.bures_wasserstein_distance` can be batched (PR #680)

#### Closed issues
- Fixed `ot.mapping` solvers which depended on deprecated `cvxpy` `ECOS` solver (PR #692, Issue #668)
@@ -44,7 +48,6 @@ This release also contains few bug fixes, concerning the support of any metric i
- Notes before depreciating partial Gromov-Wasserstein function in `ot.partial` moved to ot.gromov (PR #663)
- Create `ot.gromov._partial` add new features `loss_fun = "kl_loss"` and `symmetry=False` to all solvers while increasing speed + updating adequately `ot.solvers` (PR #663)
- Added `ot.unbalanced.sinkhorn_unbalanced_translation_invariant` (PR #676)
- Refactored `ot.bregman._convolutional` to improve readability (PR #709)

#### Closed issues
- Fixed `ot.gaussian` ignoring weights when computing means (PR #649, Issue #648)
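To make the new entries above concrete, here is a hypothetical usage sketch. The function names come from the release notes, but the argument order, return values, and batching behaviour below are assumptions modeled on the pre-existing `ot.gaussian.bures_wasserstein_barycenter` API rather than the confirmed signatures introduced by this PR.

```python
# Hypothetical usage sketch for the functions listed in RELEASES.md above.
# Argument names/order are assumptions; check the ot.gaussian docs.
import numpy as np
import ot

rng = np.random.RandomState(0)
k, d = 4, 3  # number of Gaussians, dimension

m = rng.randn(k, d)                                      # means
A = rng.randn(k, d, d)
C = np.einsum("nij,nkj->nik", A, A) + 1e-3 * np.eye(d)   # SPD covariances

# Fixed-point barycenter (pre-existing solver) vs. the new gradient-descent solver
mb_fp, Cb_fp = ot.gaussian.bures_wasserstein_barycenter(m, C)
mb_gd, Cb_gd = ot.gaussian.bures_barycenter_gradient_descent(m, C)  # assumed signature

# Bures-Wasserstein distance between the two barycenters; per the release
# notes the function is also supposed to accept batches of means/covariances.
dist = ot.gaussian.bures_wasserstein_distance(mb_fp, mb_gd, Cb_fp, Cb_gd)
print(float(dist))
```

The commit "Test fixed_point vs gradient_descent" suggests the two solvers should agree on such inputs, so a snippet like this doubles as a sanity check.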
8 changes: 4 additions & 4 deletions ot/backend.py
@@ -1363,7 +1363,7 @@ def solve(self, a, b):
return np.linalg.solve(a, b)

def trace(self, a):
return np.trace(a)
return np.einsum("...ii", a)
Collaborator: Is that faster or slower? We need some idea of the performance impact.


def inv(self, a):
return scipy.linalg.inv(a)
@@ -1776,7 +1776,7 @@ def solve(self, a, b):
return jnp.linalg.solve(a, b)

def trace(self, a):
return jnp.trace(a)
return jnp.diagonal(a, axis1=-2, axis2=-1).sum(-1)

def inv(self, a):
return jnp.linalg.inv(a)
@@ -2309,7 +2309,7 @@ def solve(self, a, b):
return torch.linalg.solve(a, b)

def trace(self, a):
return torch.trace(a)
return torch.diagonal(a, dim1=-2, dim2=-1).sum(-1)

def inv(self, a):
return torch.linalg.inv(a)
@@ -2723,7 +2723,7 @@ def solve(self, a, b):
return cp.linalg.solve(a, b)

def trace(self, a):
return cp.trace(a)
return cp.trace(a, axis1=-2, axis2=-1)

def inv(self, a):
return cp.linalg.inv(a)
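Regarding the reviewer's question on the trace change: a standalone NumPy check (not part of the PR) suggests the main motivation is batching rather than raw speed, since `np.einsum("...ii", a)` matches `np.trace` on a single matrix and also reduces a stack of matrices along its last two axes; any timing difference is machine- and size-dependent.

```python
# Minimal NumPy sketch (not from the PR) comparing np.trace and the batched
# einsum form used in the new backend implementation.
import numpy as np
from timeit import timeit

a = np.random.rand(500, 500)
batch = np.random.rand(64, 50, 50)

# Identical result on a single matrix
assert np.allclose(np.trace(a), np.einsum("...ii", a))

# Batched behaviour: one trace per matrix in the stack, which plain
# np.trace(batch) does not give by default (it traces the first two axes)
assert np.einsum("...ii", batch).shape == (64,)
assert np.allclose(np.einsum("...ii", batch),
                   np.trace(batch, axis1=-2, axis2=-1))

# Rough timing on a single matrix, to answer the reviewer's question
print("np.trace    :", timeit(lambda: np.trace(a), number=10000))
print("einsum ...ii:", timeit(lambda: np.einsum("...ii", a), number=10000))
```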