SysCV / shift-detection-tta

This repository implements continuous test-time adaptation algorithms for object detection on the SHIFT dataset.
MIT License

Evaluate and test using RTX 4070 12GB #15

Open panagiotamoraiti opened 5 months ago

panagiotamoraiti commented 5 months ago

Hello, I would like to ask: can I use an RTX 4070 12GB for evaluation and testing? When I try to run the mean-teacher adaptation provided in this repository, I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 11.72 GiB total capacity; 9.59 GiB already allocated; 47.88 MiB free; 9.81 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.

Can I do something about it, or should I use an RTX 3090 GPU?

Thank you for your time!
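(As a side note, the error message itself points to one mitigation: setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce allocator fragmentation. A minimal sketch, where the 128 MiB value is only an example and not a repo-specific recommendation:

import os

# Must be set before the first CUDA allocation, e.g. at the very top of
# the evaluation script, before importing anything that initializes CUDA.
# 128 is an example split size in MiB; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

Whether this alone frees enough memory depends on how fragmented the allocations are; the batch-size and image-size changes discussed below are the more reliable fix.)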

mattiasegu commented 5 months ago

Hi @panagiotamoraiti,

You don't necessarily have to use an RTX 3090 GPU. However, keep in mind that the 3090 comes with 24 GB of VRAM, twice as much as your GPU.

You can try to (i) reduce the batch size, or (ii) reduce the image size. Let me know how it goes!
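Concretely, both knobs typically live in the config file. Since the logs show mmengine, here is a hedged sketch of the kind of override to look for; the field names, pipeline steps, and (w, h) tuple order are assumptions based on mmengine/mmdetection conventions, not this repo's exact config:

# (i) smaller batch size; field name assumed from mmengine conventions
test_dataloader = dict(batch_size=1)

# (ii) smaller image scale; the original (800, 1440) written here as
# (w, h) = (1440, 800) reduced to (1080, 600) -- check which tuple
# order the repo's Resize transform actually uses
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(1080, 600), keep_ratio=True),
    dict(type='PackDetInputs'),
]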

panagiotamoraiti commented 5 months ago

Hello, thank you very much for your help! I tried scaling the images: instead of (800, 1440), I used (600, 1080) and it worked. I got the following results, which are slightly different from the results in a log file I found in another GitHub issue. Is there a chance that the reduced size will affect the performance of my implementation?

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.387
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.567
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.430
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.279
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.719
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.796
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.450
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ] = 0.450
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.450
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.350
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.752
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.824
01/28 11:38:15 - mmengine - INFO - 
+------------+-------+------------+-------+----------+-------+
| category   | AP    | category   | AP    | category | AP    |
+------------+-------+------------+-------+----------+-------+
| pedestrian | 0.411 | car        | 0.495 | truck    | 0.512 |
| bus        | 0.411 | motorcycle | 0.428 | bicycle  | 0.063 |
+------------+-------+------------+-------+----------+-------+
01/28 11:38:15 - mmengine - INFO - bbox_mAP_copypaste: 0.387 0.567 0.430 0.279 0.719 0.796
01/28 11:38:15 - mmengine - INFO - Epoch(test) [2400/2400]    coco/pedestrian_precision: 0.4110  coco/car_precision: 0.4950  coco/truck_precision: 0.5120  coco/bus_precision: 0.4110  coco/motorcycle_precision: 0.4280  coco/bicycle_precision: 0.0630  coco/bbox_mAP: 0.3870  coco/bbox_mAP_50: 0.5670  coco/bbox_mAP_75: 0.4300  coco/bbox_mAP_s: 0.2790  coco/bbox_mAP_m: 0.7190  coco/bbox_mAP_l: 0.7960  data_time: 0.0017  time: 1.3705
mattiasegu commented 5 months ago

Hi,

Thanks for sharing your results. It is very likely that the reduced image size affects the performance of the method. For a fair comparison, you could evaluate the baselines at the same image size.

panagiotamoraiti commented 5 months ago

Hello, thank you for your feedback and suggestion. To ensure a fair comparison, I will evaluate the baselines at the same image size.