CPU and GPU memory consumption during inference #1170
The demo just processes a single image, so memory usage should be small. Could you verify whether the CPU threads belong to the Jupyter notebook itself or to mmdetection?
@hellock I created a Python script and ran the model on 1000 images in a loop. I want to know why the CPU memory consumption is 2.95 GB. Yes, all the threads belong to mmdetection; I verified this by running it as a .py script. Is there a way to control the number of threads?
Try `torch.set_num_threads(1)`.
@haikuoyao Hi, I have the same problem when training Faster R-CNN. It uses a large amount of CPU memory and keeps increasing. Where do you add `torch.set_num_threads(1)` in the code? Thanks.
@TWDH I hit this during inference, not training.
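A minimal sketch of the suggestion above: cap the CPU thread pools before any heavy work runs. Note that the environment variables only take effect if they are set before NumPy/PyTorch are imported; the exact thread count to use is a tuning choice, not something prescribed by mmdetection.

```python
import os

# These must be set BEFORE numpy/torch are first imported, or they are ignored.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

try:
    import torch

    # Caps PyTorch's intra-op CPU thread pool, which is what spawns the
    # many threads observed during inference.
    torch.set_num_threads(1)
except ImportError:
    # torch not installed here; the env vars above still limit OpenMP/MKL users.
    pass
```

Placing this at the very top of the inference script (before model construction) is the safest spot.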
@hellock Hi, I have the same problem when training Mask R-CNN: it uses a large amount of CPU memory and keeps increasing. How should I deal with this? Thank you.
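Steadily growing CPU memory during training often comes from the dataloader worker processes rather than the model itself, since each worker holds its own copy of the dataset object. A hedged sketch of the relevant knobs in an mmdetection-1.x-style config (the field names below follow the 1.x configs; other versions may differ):

```python
# Fragment of an mmdetection-style training config (names assumed from 1.x).
# Each dataloader worker is a separate process, so lowering workers_per_gpu
# trades loading speed for lower resident CPU memory.
data = dict(
    imgs_per_gpu=2,     # batch size per GPU
    workers_per_gpu=1,  # fewer worker processes -> less CPU memory
)
```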
DOUBT
While running the demo notebook for the Mask R-CNN model, I want to know why the script uses so much CPU and GPU memory during inference.
STEPS TO REPRODUCE
```python
from mmdet.apis import init_detector, inference_detector

config_file = '../configs/mask_rcnn_r50_fpn_1x.py'
checkpoint_file = '../checkpoints/mask_rcnn_r50_fpn_2x_20181010-41d35c05.pth'

# Build the model from the config and load the checkpoint onto the GPU.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

img = 'demo.jpg'  # any test image
result = inference_detector(model, img)
```
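To pin down where the 2.95 GB goes, you can track the process's resident memory around the inference loop. A stdlib-only sketch (Unix; the actual `inference_detector` call is stood in by a placeholder in case mmdetection is not installed):

```python
import resource

def rss_mb() -> float:
    """Peak resident set size of this process in MB (on Linux ru_maxrss is
    reported in KB; on macOS it is in bytes, so adjust accordingly)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

before = rss_mb()
for i in range(1000):
    # result = inference_detector(model, img)  # the real call, if installed
    pass
after = rss_mb()
print(f"RSS before: {before:.1f} MB, after: {after:.1f} MB")
```

If the "after" figure keeps climbing across iterations on identical inputs, that points at a leak (e.g. results being accumulated) rather than a one-time allocation by the model and CUDA context.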