Tianxiaomo / pytorch-YOLOv4

PyTorch, ONNX and TensorRT implementation of YOLOv4
Apache License 2.0

CPU memory usage greatly increases when moving model from cpu to gpu #542

Open lpkoh opened 2 years ago

lpkoh commented 2 years ago

Hi,

I have trained a Scaled-YOLOv4 object detection model in Darknet, which I have converted to a PyTorch model via this repo. When I load the PyTorch model onto my CPU, I see only a small increase in CPU memory usage (less than 0.2 GB, which is the size of my .weights Darknet file). However, when I run `model.to('cuda:0')`, CPU memory usage increases by 2.5 GB, which is strange because (1) shouldn't it be GPU memory that increases, and (2) why is the increase so much larger than 0.2 GB?

The command I use to obtain the memory usage inside my Docker container is `os.system('cat /sys/fs/cgroup/memory/memory.usage_in_bytes')`. I am assuming this returns CPU (host) memory usage.
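For what it's worth, here is a minimal sketch of how the before/after delta could be measured in-process with the standard library instead of shelling out to `cat`. This is my own helper (`rss_delta_mb` is not from the repo), and it uses `resource.getrusage`, whose `ru_maxrss` field is reported in kilobytes on Linux (bytes on macOS) and only ever grows, so it captures the peak, not the current, resident set size:

```python
import resource


def rss_delta_mb(fn, *args, **kwargs):
    """Run fn and report the change in peak resident set size, in MB.

    Note: ru_maxrss is in kilobytes on Linux (assumed here) and is a
    high-water mark, so the delta can only be >= 0.
    """
    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    result = fn(*args, **kwargs)
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return result, (after - before) / 1024.0


if __name__ == "__main__":
    # Example: measure the host-memory cost of allocating ~80 MB of ints.
    # In the issue's scenario you would instead wrap the move to GPU, e.g.
    # rss_delta_mb(model.to, 'cuda:0').
    _, delta = rss_delta_mb(lambda: list(range(10_000_000)))
    print(f"peak RSS grew by about {delta:.1f} MB")
```

Wrapping `model.to('cuda:0')` in a helper like this would confirm whether the 2.5 GB growth really lands in host memory, independent of the cgroup accounting (which also counts the page cache, so `memory.usage_in_bytes` can overstate what the Python process itself holds).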

Did anyone else face this issue?