Hi,
I have trained a Scaled-YOLOv4 object detection model in darknet, which I have converted to a PyTorch model via this repo. When I load the PyTorch model onto my CPU, I see only a small increase in CPU memory usage (less than 0.2 GB, which matches the size of my .weights darknet file). However, when I run model.to('cuda:0'), CPU memory usage increases by 2.5 GB, which is strange because (1) shouldn't it be GPU memory that increases, and (2) why is the increase so much larger than 0.2 GB?
The command I use to obtain the memory usage inside my docker container is os.system('cat /sys/fs/cgroup/memory/memory.usage_in_bytes'), which I am assuming returns CPU memory usage.
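For comparison, here is a minimal stdlib-only sketch (my own assumption about how to cross-check, not part of the original setup) that reads the current process's resident set size from /proc/self/status on Linux. Unlike the cgroup counter, which accounts for every process in the container plus page cache, this reports only the calling process:

```python
def proc_rss_bytes():
    """Resident set size of the current process, in bytes (Linux only).

    The cgroup counter memory.usage_in_bytes covers the whole container
    (all processes plus page cache), so it can grow by far more than a
    single process's model weights.
    """
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                # The value is reported in kB, e.g. "VmRSS:  123456 kB"
                return int(line.split()[1]) * 1024
    raise RuntimeError('VmRSS not found (non-Linux system?)')

before = proc_rss_bytes()
# ... load the model / call model.to('cuda:0') here ...
after = proc_rss_bytes()
print(f'RSS delta: {(after - before) / 2**30:.2f} GiB')
```

Measuring before and after the model.to('cuda:0') call this way would show whether the 2.5 GB increase belongs to the Python process itself or to something else counted by the cgroup.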
Has anyone else faced this issue?