@OleehyO thanks for the multi-GPU inference code provided here and mentioned here.
However, even with two 32GB GPUs and a similar setup, increasing the number of frames to 24 (height 480, width 720) results in an out-of-memory error.
Is there a limit on the number of frames, or a linear relationship between the number of GPUs and the number of frames?
xDiT mainly uses multiple GPUs to accelerate inference speed; it does not save much memory. For more details, we recommend consulting the xDiT developers.
If you just want to run inference with the CogVideoX model, we recommend using cli_demo.py from our repository. A single 32GB GPU is sufficient to run any CogVideoX model.
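For reference, here is a minimal single-GPU sketch along the lines of what cli_demo.py does, using the diffusers CogVideoXPipeline with sequential CPU offload and VAE slicing/tiling to keep peak memory low. The model ID, prompt, and generation parameters below are assumptions for illustration, not an exact copy of the demo script:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Assumed checkpoint for illustration; swap in the CogVideoX model you actually use.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Memory-saving options: offload submodules to CPU between forward passes,
# and decode the VAE in slices/tiles instead of all at once.
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

video = pipe(
    prompt="A panda playing guitar in a bamboo forest",  # example prompt
    num_frames=49,            # CogVideoX's default clip length
    height=480,
    width=720,
    num_inference_steps=50,
    guidance_scale=6.0,
    generator=torch.Generator(device="cpu").manual_seed(42),
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```

With CPU offload and VAE tiling enabled, this should fit comfortably within a single 32GB GPU, at the cost of slower inference than keeping everything on the GPU.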
But this suggests that integrating xDiT with CogVideoX is mainly a data-parallelism benefit rather than model parallelism, doesn't it? Also, the original poster asked for something both fast and memory-friendly, which is not the case with two GPUs. Thanks.