The path to solving memory problems. #57

@MiXaiLL76 (Owner)

@MiXaiLL76 (Owner Author)

@jean-louquetin Hey! Sorry to bother you, but do you by any chance have a dataset for testing this PR+issue?

With my data, I don't see any problems overall.

@jean-louquetin (Contributor)

Hey @MiXaiLL76, I don't have much time to check, but if you take the CVPR dataset with some YOLO model, you will probably see either that there are no nano/micro objects, or you will get the metrics for that YOLO version!

@jean-louquetin (Contributor) commented Sep 22, 2025

I tested with the VisDrone dataset on the train split and got these results:
Evaluate annotation type bbox
COCOeval_opt.evaluate() finished...
DONE (t=0.31s).
Accumulating evaluation results...
COCOeval_opt.accumulate() finished...
DONE (t=0.00s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.026
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.047
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.024
Average Precision (AP) @[ IoU=0.50:0.95 | area= nano | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= micro | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.013
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.094
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.199
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.008
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.029
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.030
Average Recall (AR) @[ IoU=0.50:0.95 | area= nano | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= micro | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.014
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.131
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.296
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.053
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.030

Meaning there are nano and micro objects (not a -1 value), but YOLO's detection (I used yolo12n) was very poor in those ranges!
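The extra area ranges in the output above can be sanity-checked by bucketing ground-truth box areas directly, pycocotools-style. A minimal sketch; the nano/micro thresholds (8² and 16² pixels) are assumptions for illustration, not necessarily the values this PR uses:

```python
# Hypothetical area thresholds: COCO's standard split is small/medium/large
# at 32**2 and 96**2 pixels; "nano" and "micro" further subdivide the small end.
AREA_RANGES = {
    "nano":   (0,       8 ** 2),
    "micro":  (8 ** 2,  16 ** 2),
    "small":  (16 ** 2, 32 ** 2),
    "medium": (32 ** 2, 96 ** 2),
    "large":  (96 ** 2, float("inf")),
}

def area_label(width: float, height: float) -> str:
    """Classify a box into the area range its width*height falls in."""
    area = width * height
    for label, (lo, hi) in AREA_RANGES.items():
        if lo <= area < hi:
            return label
    return "all"
```

On a dataset like VisDrone, counting labels this way quickly shows whether any ground-truth boxes land in the nano/micro buckets at all, before blaming the detector for the zero scores.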

@MiXaiLL76 (Owner Author) commented Sep 23, 2025

https://github.com/MiXaiLL76/faster_coco_eval/blob/fix-memory-bug/examples/comparison/ultralytics/memory_profile.ipynb

pycocotools

Memory Usage Profile:
+-------------------------+----------------+----------+
| Stage                   | Memory Usage   | Diff     |
+=========================+================+==========+
| 1. Initial              | 54.16 MB       | -        |
+-------------------------+----------------+----------+
| 2. Classes imported     | 54.16 MB       | +0.00    |
+-------------------------+----------------+----------+
| 3. Ground truth loaded  | 167.36 MB      | +113.20  |
+-------------------------+----------------+----------+
| 4. Predictions loaded   | 1081.80 MB     | +914.44  |
+-------------------------+----------------+----------+
| 5. Evaluator created    | 1081.95 MB     | +0.15    |
+-------------------------+----------------+----------+
| 6. Evaluation completed | 2258.05 MB     | +1176.10 |
+-------------------------+----------------+----------+
| 7. Results accumulated  | 2302.85 MB     | +44.80   |
+-------------------------+----------------+----------+
| 8. Summary generated    | 2303.00 MB     | +0.15    |
+-------------------------+----------------+----------+

faster-coco-eval==1.6.7 (separate_eval=False, default)

Memory Usage Profile:
+-------------------------+----------------+---------+
| Stage                   | Memory Usage   | Diff    |
+=========================+================+=========+
| 1. Initial              | 54.32 MB       | -       |
+-------------------------+----------------+---------+
| 2. Classes imported     | 54.32 MB       | +0.00   |
+-------------------------+----------------+---------+
| 3. Ground truth loaded  | 168.30 MB      | +113.98 |
+-------------------------+----------------+---------+
| 4. Predictions loaded   | 1082.68 MB     | +914.38 |
+-------------------------+----------------+---------+
| 5. Evaluator created    | 1082.83 MB     | +0.15   |
+-------------------------+----------------+---------+
| 6. Evaluation completed | 1797.67 MB     | +714.84 |
+-------------------------+----------------+---------+
| 7. Results accumulated  | 1797.67 MB     | +0.00   |
+-------------------------+----------------+---------+
| 8. Summary generated    | 1797.67 MB     | +0.00   |
+-------------------------+----------------+---------+

faster-coco-eval>=1.7.0 (separate_eval=False, default)

Memory Usage Profile:
+-------------------------+----------------+---------+
| Stage                   | Memory Usage   | Diff    |
+=========================+================+=========+
| 1. Initial              | 54.29 MB       | -       |
+-------------------------+----------------+---------+
| 2. Classes imported     | 54.29 MB       | +0.00   |
+-------------------------+----------------+---------+
| 3. Ground truth loaded  | 168.23 MB      | +113.94 |
+-------------------------+----------------+---------+
| 4. Predictions loaded   | 1082.62 MB     | +914.39 |
+-------------------------+----------------+---------+
| 5. Evaluator created    | 1082.77 MB     | +0.15   |
+-------------------------+----------------+---------+
| 6. Evaluation completed | 1684.39 MB     | +601.62 |
+-------------------------+----------------+---------+
| 7. Results accumulated  | 1684.39 MB     | +0.00   |
+-------------------------+----------------+---------+
| 8. Summary generated    | 1684.39 MB     | +0.00   |
+-------------------------+----------------+---------+
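The staged profile shown in these tables can be reproduced with a small helper. The linked notebook may measure process RSS (e.g. via psutil) instead; this is a stdlib-only sketch using tracemalloc, with stand-in workloads rather than the real loaders:

```python
import tracemalloc

def profile_stage(label, fn, stages):
    """Run one stage and record currently traced allocations (MB) after it."""
    result = fn()
    current, _peak = tracemalloc.get_traced_memory()
    stages.append((label, current / 2 ** 20))
    return result

tracemalloc.start()
stages = []
# Stand-ins for "Ground truth loaded" / "Predictions loaded" etc.
gt = profile_stage("1. Ground truth loaded", lambda: list(range(500_000)), stages)
dt = profile_stage("2. Predictions loaded", lambda: list(range(500_000)), stages)
for label, mb in stages:
    print(f"{label:<26} {mb:8.2f} MB")
```

Keeping the stages in a list like this makes it easy to render the same +Diff column as above by subtracting consecutive measurements.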

@MiXaiLL76 (Owner Author)

After the "add clear_cache_entry" commit:

Memory Usage Profile:
+-------------------------+----------------+---------+
| Stage                   | Memory Usage   | Diff    |
+=========================+================+=========+
| 1. Initial              | 53.84 MB       | -       |
+-------------------------+----------------+---------+
| 2. Classes imported     | 53.84 MB       | +0.00   |
+-------------------------+----------------+---------+
| 3. Ground truth loaded  | 167.78 MB      | +113.94 |
+-------------------------+----------------+---------+
| 4. Predictions loaded   | 1082.17 MB     | +914.39 |
+-------------------------+----------------+---------+
| 5. Evaluator created    | 1082.33 MB     | +0.16   |
+-------------------------+----------------+---------+
| 6. Evaluation completed | 1645.67 MB     | +563.34 |
+-------------------------+----------------+---------+
| 7. Results accumulated  | 1645.67 MB     | +0.00   |
+-------------------------+----------------+---------+
| 8. Summary generated    | 1645.67 MB     | +0.00   |
+-------------------------+----------------+---------+
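The further drop at the evaluation stage (+601.62 MB down to +563.34 MB) is consistent with a clear-after-use pattern: free each per-image cache entry as soon as its result is consumed, instead of keeping every entry until the end. A generic sketch of that pattern; the function names are illustrative, not the library's actual API:

```python
def evaluate_all(image_ids, compute_ious, evaluate_img):
    """Evaluate images one by one, clearing each cache entry after use
    so peak memory tracks a single image rather than the whole dataset."""
    cache = {}
    results = []
    for img_id in image_ids:
        cache[img_id] = compute_ious(img_id)   # expensive per-image IoU data
        results.append(evaluate_img(cache[img_id]))
        del cache[img_id]                      # the clear-cache-entry step
    assert not cache  # nothing retained once evaluation finishes
    return results
```

With this shape, peak usage is bounded by the largest single entry plus the accumulated results, which matches the zero-diff "Results accumulated" rows in the tables above.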

@MiXaiLL76 MiXaiLL76 merged commit 488e6f0 into main Oct 22, 2025
8 checks passed