Propagate training flag to FlowMatching Integrator #288

Currently, dropout is ignored in FlowMatching. I confirmed that the corresponding subnet_kwargs are propagated correctly to the subnet constructor; nevertheless, dropout has no effect during training. The Integrator and its subnet need to receive a training=True flag to turn on dropout at training time.
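To make the failure mode concrete, here is a minimal, hypothetical Keras sketch; the names VelocityNet and EulerIntegrator are illustrative, not the library's actual classes. Unless the integrator accepts a training flag and forwards it to its subnet at every step, Dropout layers fall back to inference behavior and are silently skipped:

```python
from keras import layers


class VelocityNet(layers.Layer):
    """Hypothetical stand-in for the flow matching subnet."""

    def __init__(self, width=64, dropout=0.5, **kwargs):
        super().__init__(**kwargs)
        self.hidden = layers.Dense(width, activation="relu")
        self.dropout = layers.Dropout(dropout)
        self.out = layers.Dense(2)

    def call(self, x, training=None):
        x = self.hidden(x)
        # Dropout is only active when training=True reaches this call.
        x = self.dropout(x, training=training)
        return self.out(x)


class EulerIntegrator:
    """Hypothetical integrator illustrating the requested fix: it
    accepts `training` and forwards it to the subnet at every step."""

    def __init__(self, subnet, steps=10):
        self.subnet = subnet
        self.steps = steps

    def __call__(self, x, training=None):
        dt = 1.0 / self.steps
        for _ in range(self.steps):
            # Before the fix, the subnet was called without `training`,
            # so dropout never activated even during training.
            x = x + dt * self.subnet(x, training=training)
        return x
```

With this wiring, calling `EulerIntegrator(VelocityNet())(x, training=True)` keeps dropout active along the whole integration trajectory, while `training=False` (or omitting it) yields deterministic inference.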
Comments
@LarsKue Should we provide a hotfix, or will you address this soon with the integrators refactor?

Will be addressed with the refactor.
stefanradev93 added a commit that referenced this issue on Feb 7, 2025 (merged):
* remove old integrators
* change logging level of debug infos
* add utils/integrate.py (see the sketch after this commit message)
* update usage of integration in flow matching
* fix #288
* set logging level to debug for tests
* add seed parameter to jacobian trace for stochastic evaluation under compiled backends
* fix shape of trace in flow matching
* fix integration for negative step size (mostly)
* add user-defined loss functions due to empirically better performance of some non-MSE losses
* fix negative step size integration; fix small deviations in some backends; add (required) min and max step number selection for dynamic integration; improve dispatch; remove step size selection (users can compute this if they need it, but exposing the argument is ambiguous w.r.t. fixed vs. adaptive integration)
* speed up build test
* fix density computation; add default integrate kwargs
* reduce default number of max steps for dynamic step size integration
* allow specifying steps = "dynamic" instead of just "adaptive"
* add integrate kwargs to serialization
* add todo for density test
* improve time broadcasting
* fix tensorflow incompatible types

Co-authored-by: stefanradev93 <[email protected]>
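For readers unfamiliar with the shape of such a utility, here is a hedged sketch of what a dispatching integrator with fixed or dynamic step counts might look like. This is an assumption for illustration only, not the actual contents of utils/integrate.py; the function names, parameters, and defaults are invented:

```python
import numpy as np


def integrate(fn, x0, t0=0.0, t1=1.0, steps="dynamic",
              min_steps=10, max_steps=100, tolerance=1e-3):
    """Integrate dx/dt = fn(t, x) from t0 to t1.

    steps: an int for a fixed step count, or "dynamic" to refine the
    step count within [min_steps, max_steps] until results stabilize.
    """
    if steps == "dynamic":
        n = min_steps
        result = _euler(fn, x0, t0, t1, n)
        while n < max_steps:
            n = min(2 * n, max_steps)
            refined = _euler(fn, x0, t0, t1, n)
            # Stop refining once doubling the step count no longer
            # changes the result beyond the tolerance.
            if np.max(np.abs(refined - result)) < tolerance:
                return refined
            result = refined
        return result
    return _euler(fn, x0, t0, t1, int(steps))


def _euler(fn, x, t0, t1, n):
    # Negative step sizes (t1 < t0) work too: dt is simply negative.
    dt = (t1 - t0) / n
    t = t0
    for _ in range(n):
        x = x + dt * fn(t, x)
        t = t + dt
    return x
```

For example, `integrate(lambda t, x: -x, np.array([1.0]), steps=50)` approximates exp(-1), and `steps="dynamic"` lets the utility pick the step count itself.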
@LarsKue Is this issue solved already?

Fixed by #300