Fixes #7944.
### Description
In response to issue #7944, I added support for PyTorch's
`scaled_dot_product_attention` to re-enable flash attention, which was
present in the original MONAI Generative Models repository. This path is
only allowed for torch >= 2.0 and when the argument `save_attn` is False;
errors are raised otherwise. I ran the quick tests and added checks to the
test_selfattention and test_crossattention scripts to verify that the
outputs match those obtained without flash attention.
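For context, below is a minimal sketch of the dispatch pattern described
above, not the actual MONAI implementation: the function name `attention`
and the flags `use_flash_attention` / `save_attn` are illustrative, and the
final `allclose` check only mirrors the spirit of the added unit tests.

```python
import torch
import torch.nn.functional as F


def attention(q, k, v, use_flash_attention: bool = False, save_attn: bool = False):
    """Illustrative helper: fused vs. explicit attention (not the MONAI code itself)."""
    if use_flash_attention and save_attn:
        # The fused kernel never materializes the attention matrix,
        # so it cannot be saved for inspection.
        raise ValueError("save_attn=True is incompatible with flash attention.")
    if use_flash_attention:
        # Requires torch >= 2.0; uses fused (flash / memory-efficient) kernels when available.
        return F.scaled_dot_product_attention(q, k, v), None
    # Standard path: explicit softmax(QK^T / sqrt(d)) so the weights can be kept.
    scale = q.shape[-1] ** -0.5
    att_mat = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return att_mat @ v, att_mat


# Equivalence check in the spirit of the added tests: both paths should agree.
q = k = v = torch.randn(2, 4, 16, 8)  # (batch, heads, tokens, head_dim)
out_flash, _ = attention(q, k, v, use_flash_attention=True)
out_ref, _ = attention(q, k, v, use_flash_attention=False)
assert torch.allclose(out_flash, out_ref, atol=1e-5)
```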
### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [x] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.
---------
Signed-off-by: Virginia Fernandez <[email protected]>
Co-authored-by: Virginia Fernandez <[email protected]>
Co-authored-by: YunLiu <[email protected]>
Co-authored-by: Yiheng Wang <[email protected]>