Closed NguyenDangBinh closed 5 years ago
Dear @NguyenDangBinh,
You should reduce the batch size so the model fits in your GPU memory. Please change the BATCH_SIZE: parameter in the .yaml file to a smaller value.
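A common way to automate this advice is a fallback loop that halves the batch size whenever an out-of-memory error occurs. Here is a minimal sketch in plain Python; the GPU is simulated with a byte budget, and the names `train_step`, `MEMORY_LIMIT`, and `BYTES_PER_SAMPLE` are illustrative, not from this repo. In real PyTorch code you would catch `torch.cuda.OutOfMemoryError` (or match the `RuntimeError` message) instead of the custom exception used here.

```python
# Sketch: retry training with a smaller batch size when memory runs out.
# The "GPU" is simulated with a byte budget (illustrative numbers).

MEMORY_LIMIT = 6 * 1024**2        # pretend the GPU has 6 MiB free
BYTES_PER_SAMPLE = 512 * 1024     # pretend activations cost 512 KiB/sample

class OutOfMemory(Exception):
    """Stand-in for a CUDA out-of-memory error."""

def train_step(batch_size):
    """Fail the way a CUDA OOM would if the batch does not fit."""
    needed = batch_size * BYTES_PER_SAMPLE
    if needed > MEMORY_LIMIT:
        raise OutOfMemory(f"tried to allocate {needed} bytes")
    return needed  # bytes actually used

def find_fitting_batch_size(start=32):
    """Halve the batch size on OOM, like lowering BATCH_SIZE in the .yaml."""
    bs = start
    while bs >= 1:
        try:
            train_step(bs)
            return bs
        except OutOfMemory:
            bs //= 2
    raise RuntimeError("even batch size 1 does not fit")
```

With these illustrative numbers, `find_fitting_batch_size(32)` tries 32 and 16 (both OOM) and settles on 8.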
Dear, I just reduced BATCH_SIZE to 16 and it works now. Thank you.
Great! But note that you won't be able to reproduce our results in this case.
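One standard way to keep the effective batch size from the published settings while using less GPU memory is gradient accumulation: compute gradients over several smaller micro-batches and average them before the optimizer step. The repo may not support this out of the box; the sketch below just demonstrates the numerical equivalence in plain Python for a scalar least-squares model (all names illustrative). With equal-size micro-batches and mean-based losses, the averaged micro-batch gradient matches the full-batch gradient exactly.

```python
# Gradient of L(w) = mean_i (w*x_i - y_i)^2 with respect to scalar w.
def grad(w, xs, ys):
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

# Full-batch gradient (what the original BATCH_SIZE would compute in one step).
g_full = grad(w, xs, ys)

# Two micro-batches of size 2, gradients averaged: same update direction,
# but only half the samples resident in memory at any one time.
g_acc = (grad(w, xs[:2], ys[:2]) + grad(w, xs[2:], ys[2:])) / 2

assert abs(g_full - g_acc) < 1e-12
```

In a PyTorch training loop the same idea amounts to calling `backward()` on each micro-batch (gradients accumulate in `.grad`) and invoking `optimizer.step()` only every N micro-batches.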
Dear, when I run python scripts/train.py --cfg experiments/mpii/train.yaml, I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 7.93 GiB total capacity; 6.19 GiB already allocated; 340.00 MiB free; 551.04 MiB cached)
Note: my PC is an HP Z440: Xeon E5-1650 v3, 32 GB DDR4, GTX 1070.