Support explicit integer padding in lax.conv_transpose #32268
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). For the most up-to-date status, view the checks section at the bottom of the pull request.
levskaya
left a comment
I think this all looks right! Apologies for the slow review.
-     preferred_element_type: DTypeLike | None = None) -> Array:
+     preferred_element_type: DTypeLike | None = None,
+     use_consistent_padding: bool = False,
+     out_sharding: NamedSharding | P | None = None) -> Array:
Why are you adding out_sharding here?
Please remove it since out_sharding has nothing to do with the bug you are trying to fix.
This PR fixes the handling of integer padding arguments to lax.conv_transpose. It addresses #32267 and google/flax#4593 by computing the padding needed for a transposed convolution when the input padding argument is interpreted as the padding used by the corresponding ordinary (forward) convolution, as described in https://arxiv.org/abs/1603.07285.
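The padding arithmetic the description refers to can be sketched as follows. This is a minimal illustration of the standard transposed-convolution padding relation from arXiv:1603.07285, not the PR's actual implementation; the helper name `transpose_padding` and its signature are hypothetical.

```python
def transpose_padding(pad_lo: int, pad_hi: int,
                      kernel_size: int, dilation: int = 1) -> tuple[int, int]:
    """Hypothetical helper: padding for the transpose of a forward conv.

    Given per-side integer padding (pad_lo, pad_hi) used by a forward
    convolution with the given kernel size and kernel dilation, return the
    padding the equivalent transposed convolution must apply on each side.
    With effective kernel extent e = dilation * (kernel_size - 1) + 1,
    each side's transpose padding is e - 1 - p (Dumoulin & Visin,
    arXiv:1603.07285).
    """
    effective_k = dilation * (kernel_size - 1) + 1
    return (effective_k - 1 - pad_lo, effective_k - 1 - pad_hi)
```

For example, a forward 3-tap convolution with padding 1 on each side ("SAME"-style for stride 1) transposes to padding (1, 1), while a "VALID" forward conv (padding 0) transposes to full padding (2, 2).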