Added CPU offloading #3452
Conversation
Wasn't there supposed to be a bunch of logging?
We need to test this change across our entire test suite to ensure it is working as expected.
@@ -690,6 +685,18 @@ def compile(
    gm = post_lowering(gm, settings)
    logger.debug("Lowered Input graph: " + str(gm.graph))

+    # Move the weights in the state_dict to CPU
+    if offload_module_to_cpu:
+        exported_program.module().to(CPU_DEVICE)
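Stripped of the diff context, the offload step in this hunk amounts to moving the exported module's weights off the GPU before engine compilation. A minimal sketch of that pattern (`offload_weights` is a hypothetical wrapper for illustration; `CPU_DEVICE` and `offload_module_to_cpu` are names from the hunk above, and the CUDA guard lets the snippet run on CPU-only machines):

```python
import torch

CPU_DEVICE = torch.device("cpu")


def offload_weights(module: torch.nn.Module, offload_module_to_cpu: bool) -> torch.nn.Module:
    # Move the weights in the state_dict to CPU so that engine
    # compilation becomes the main consumer of GPU memory.
    if offload_module_to_cpu:
        module.to(CPU_DEVICE)
    return module


model = torch.nn.Linear(4, 4)
if torch.cuda.is_available():
    model = model.cuda()  # start on GPU when one is present
offload_weights(model, offload_module_to_cpu=True)
print(next(model.parameters()).device.type)  # -> cpu
```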
I encountered a situation where this wasn't enough and it required calling torch.cuda.empty_cache and gc.collect as well to release the memory. Here's a suggestion: modify the delete_module function to deallocate_module(module, delete=False) and call it here as deallocate_module(exported_program.module(), delete=False).
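The suggested helper could look something like the sketch below. The signature mirrors the reviewer's `deallocate_module(module, delete=False)` spelling; the exact cleanup order and docstring are assumptions, not the merged code:

```python
import gc

import torch

CPU_DEVICE = torch.device("cpu")


def deallocate_module(module: torch.nn.Module, delete: bool = True) -> None:
    """Move a module's weights to CPU and release cached GPU memory.

    With delete=False the module remains usable (weights now on CPU);
    with delete=True the caller is expected to drop its last reference.
    """
    module.to(CPU_DEVICE)
    if delete:
        del module  # drop this function's local reference
    gc.collect()  # collect any now-unreachable CUDA tensors
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver


model = torch.nn.Linear(4, 4)
deallocate_module(model, delete=False)
print(next(model.parameters()).device.type)  # -> cpu
```

Calling gc.collect before torch.cuda.empty_cache matters: empty_cache can only return blocks whose tensors have actually been freed, so unreachable CUDA tensors must be collected first.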
Description
Added CPU offloading. Compilation now takes no more than 1x the model's GPU memory footprint: before engine compilation, the model and graph module are moved to CPU.
Fixes # (issue)