layer_norm backward problem #40
Not sure if set
This error only happens when
I can reproduce with the following script:
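The repro script itself did not survive in this copy of the thread. A minimal sketch of a script along these lines, assuming a torch.compile'd layer_norm and a benchmark-style loop that calls backward() repeatedly (hypothetical shapes, not the original code):

```python
import torch
import torch.nn.functional as F

def fn(x, weight, bias):
    # Plain layer_norm over the last dimension.
    return F.layer_norm(x, (x.shape[-1],), weight, bias)

compiled = torch.compile(fn)

x = torch.randn(32, 512, requires_grad=True)
weight = torch.randn(512, requires_grad=True)
bias = torch.randn(512, requires_grad=True)

loss = compiled(x, weight, bias).sum()

# A benchmark-style loop reuses the same graph, so it has to pass
# retain_graph=True, which the donated-buffer check rejects.
for _ in range(3):
    loss.backward(retain_graph=True)
```

If donated buffers are enabled in the build being used, the retain_graph=True calls should trigger the RuntimeError quoted below.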
I found that this issue is more extensive than I initially thought. Other operators,
I'm noticing this error in a very different context for my own application - does anyone have any insight about what's going on here / how to fix this issue? For reference, I'm seeing the same donated buffer error.
For the above issue: donated_buffer requires create_graph=False and retain_graph=False, but our benchmarking method keeps repeating the backward phase and has to set retain_graph=True. You can try disabling the donated buffer via torch._functorch.config.donated_buffer=False.
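A sketch of that workaround, using the flag named in the error message; setting it before compiling turns off donated buffers globally:

```python
import torch._functorch.config as functorch_config

# Disable donated buffers so the compiled backward no longer insists on
# create_graph=False and retain_graph=False.
functorch_config.donated_buffer = False
```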
I see, thanks - I may need to disable the donated buffer, as I require create_graph=True (ours is a differentiable physics model that requires higher-order gradients).
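For context, this is the usual higher-order-gradient pattern that forces create_graph=True (illustrative values, not code from this thread):

```python
import torch

x = torch.randn(8, requires_grad=True)
y = (x ** 3).sum()

# First-order gradient; create_graph=True keeps the graph so the gradient
# itself can be differentiated again.
(g,) = torch.autograd.grad(y, x, create_graph=True)

# Second-order gradient, the kind of quantity a differentiable physics model
# needs; donated buffers reject create_graph=True in a compiled backward.
(h,) = torch.autograd.grad(g.sum(), x)
```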
The bwd and fwd_bwd tests for layer_norm failed.
The error string is:
RuntimeError: This backward function was compiled with non-empty donated buffers which requires create_graph=False and retain_graph=False. Please keep backward(create_graph=False, retain_graph=False) across all backward() function calls, or set torch._functorch.config.donated_buffer=False to disable donated buffer.
Test Plan: