-
Suppose we want to run two distributed trainings at the same time, each using 2 of the 4 available GPUs. How do you tell the second training to use 2 specific GPUs (or at least not the ones used by the first training)?
Answered by rwightman (Oct 19, 2022)
Replies: 2 comments
-
Set CUDA_VISIBLE_DEVICES in two different environments with non-overlapping device ids, and treat each as a 2x GPU run.
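
As a minimal sketch, launching from two separate shells with torchrun (`train.py` and the port number are placeholders; the second run needs a distinct master port so the two jobs' rendezvous doesn't collide):

```bash
# Shell 1: first training sees only physical GPUs 0 and 1
# (inside the process they appear as cuda:0 and cuda:1)
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 train.py

# Shell 2: second training sees only physical GPUs 2 and 3
# (also remapped to cuda:0 and cuda:1); 29501 is any free port
CUDA_VISIBLE_DEVICES=2,3 torchrun --nproc_per_node=2 --master_port=29501 train.py
```

Because CUDA_VISIBLE_DEVICES remaps devices, neither training script needs to know which physical GPUs it was given; each just sees 2 GPUs indexed from 0.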
-
Or map different devices into different Docker containers.
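
A sketch of that approach, assuming the NVIDIA Container Toolkit is installed and `my-train-image` is a placeholder image name:

```bash
# Container 1 is only given physical GPUs 0 and 1
docker run --gpus '"device=0,1"' my-train-image torchrun --nproc_per_node=2 train.py

# Container 2 is only given physical GPUs 2 and 3
docker run --gpus '"device=2,3"' my-train-image torchrun --nproc_per_node=2 train.py
```

Inside each container the assigned GPUs again appear as cuda:0 and cuda:1, and since each container has its own network namespace (unless run with --network=host), the two jobs can use the same default rendezvous port without conflict.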
Answer selected by fatemehtd