Description
System Information (please complete the following information):
- OS & Version: Linux (Docker container)
- ML.NET Version: latest
- .NET Version: 8.0
Describe the bug
Hi all,
I'm currently experiencing a memory issue with object detection in a Docker container using TorchSharp-Cpu. The first prediction works fine, with around 2 GB of RAM in use, but after roughly 100 predictions the RAM usage climbs to 4 GB and beyond.
The worst part is that the process eventually runs out of memory.
Inspecting the process after the workload had finished, memory usage never decreased.
The model was trained on PNG images of up to 9 MB, which are resized inside the model to only 250 by 250.
I have already checked the using statements and all IDisposable objects multiple times.
To Reproduce
Steps to reproduce the behavior:
- Set up an ASP.NET Core Web API
- Add an object detection model, train it, etc.
- Add a controller that uses the model
- Provide an upload endpoint that accepts an IFormFile
- Wrap the opened IFormFile stream in a using statement
- Wrap the ModelInput in a using statement
- Wrap the prediction in a result object, store the precision in it, and dispose of (or wrap in a using) the output image
- Make sure all using statements are in place and all streams are closed
- Optional: add a lock statement around the prediction so it always runs single-threaded
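The steps above can be sketched roughly as follows. This is a minimal, hypothetical reconstruction of the controller, not my exact code: `DetectionModel`, `ModelInput`, and `ModelOutput` stand in for the auto-generated ML.NET consumption classes, and the property names are assumptions.

```csharp
// Hypothetical sketch of the upload endpoint described in the repro steps.
[ApiController]
[Route("api/[controller]")]
public class DetectionController : ControllerBase
{
    // Optional lock so predictions always run single-threaded
    private static readonly object _predictLock = new();

    [HttpPost("upload")]
    public async Task<IActionResult> Upload(IFormFile file)
    {
        // using on the IFormFile stream so it is disposed deterministically
        using var upload = file.OpenReadStream();
        using var buffer = new MemoryStream();
        await upload.CopyToAsync(buffer);
        buffer.Position = 0;

        // ModelInput wrapped in a using (assumes it is IDisposable,
        // e.g. because it holds an MLImage)
        using var input = new ModelInput
        {
            Image = MLImage.CreateFromStream(buffer)
        };

        ModelOutput output;
        lock (_predictLock)
        {
            output = DetectionModel.Predict(input);
        }

        // Store only the scores in the result; the input/output images
        // are disposed when the usings above go out of scope.
        return Ok(new { output.PredictedLabel, output.Score });
    }
}
```

Even with every stream and IDisposable handled this way, the unmanaged memory keeps growing as described above.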
Expected behavior
RAM usage for single-threaded use should not exceed 2.5 GB, or should come back down after the workload finishes. dotMemory on Windows only showed that a large amount of unmanaged memory (streams) is in use.