
Propagate training flag to FlowMatching Integrator #288

Closed
han-ol opened this issue Dec 20, 2024 · 4 comments

Comments

han-ol (Contributor) commented Dec 20, 2024

Currently, dropout is ignored in FlowMatching. I confirmed that the corresponding subnet_kwargs are propagated correctly to the subnet constructor; nevertheless, dropout has no effect during training.

The Integrator and its subnet need to receive a training=True flag so that dropout is active at training time.
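
For illustration, here is a minimal sketch (not BayesFlow's actual implementation) of what propagating the flag looks like in a Keras 3 layer; the Integrator and subnet names follow the issue's wording, and the time-conditioning detail is a placeholder:

```python
import keras


class Integrator(keras.Layer):
    """Toy integrator wrapping a velocity subnet."""

    def __init__(self, subnet, **kwargs):
        super().__init__(**kwargs)
        self.subnet = subnet

    def call(self, x, t, training=None):
        # Forward the training flag so layers such as Dropout are
        # active during training and disabled at inference time.
        xt = keras.ops.concatenate([x, t], axis=-1)
        return self.subnet(xt, training=training)
```

With this in place, calling integrator(x, t, training=True) inside the training step enables dropout, while sampling-time calls leave it off.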

stefanradev93 (Contributor)

@LarsKue Should we provide a hotfix, or will you address this soon with the integrators refactor?

LarsKue (Contributor) commented Dec 20, 2024

Will be addressed with the refactor.

LarsKue added a commit that referenced this issue Feb 6, 2025
stefanradev93 added a commit that referenced this issue Feb 7, 2025
* remove old integrators

* change logging level of debug infos

* add utils/integrate.py

* update usage of integration in flow matching

* fix #288

* set logging level to debug for tests

* add seed parameter to jacobian trace for stochastic evaluation under compiled backends

* fix shape of trace in flow matching

* fix integration for negative step size (mostly)

* add user-defined loss functions, since some non-MSE losses empirically perform better in some settings

* fix negative step size integration
  fix small deviations in some backends
  add (required) min and max step number selection for dynamic integration
  improve dispatch
  remove step size selection (users can compute this if they need it, but exposing the argument is ambiguous w.r.t. fixed vs. adaptive integration)

* speed up build test

* fix density computation
  add default integrate kwargs

* reduce default number of max steps for dynamic step size integration

* allow specifying steps = "dynamic" instead of just "adaptive"

* add integrate kwargs to serialization

* add todo for density test

* improve time broadcasting

* fix tensorflow incompatible types

---------

Co-authored-by: stefanradev93 <[email protected]>
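
Based purely on the commit messages above, a hypothetical usage sketch of the refactored options; the exact parameter names and accepted values are assumptions inferred from those messages, not a verified API:

```python
import bayesflow as bf

# Hypothetical configuration inferred from the commit messages:
# subnet_kwargs reach the subnet constructor (this issue), and
# integrate_kwargs control the solver ("steps" may be an integer,
# "adaptive", or "dynamic" per the commits above).
flow_matching = bf.networks.FlowMatching(
    subnet="mlp",
    subnet_kwargs={"dropout": 0.1},
    integrate_kwargs={"steps": "adaptive"},
)
```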
paul-buerkner (Contributor)

@LarsKue Is this issue solved already?

LarsKue (Contributor) commented Feb 12, 2025

Fixed by #300

LarsKue closed this as completed Feb 12, 2025