SayBender opened this issue 2 years ago
Our code only supports two options for the input resolution (600/800); other settings may cause unexpected errors. After burn-in, the model is duplicated, which doubles the memory usage, so it may be worth checking memory usage during burn-in to see whether at least half of the free memory is still available. Also note that the current code only supports a batch size of 1; a larger batch size may also cause out-of-memory errors.
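For reference, a minimal sketch of how one could check that free memory during burn-in, as suggested above. This is not part of the repo; the helper name `log_gpu_memory` and the call site are made up, and it assumes training on a single CUDA device with PyTorch:

```python
import torch

def log_gpu_memory(tag=""):
    """Print allocated/reserved/total memory for the current CUDA device."""
    if not torch.cuda.is_available():
        return
    device = torch.cuda.current_device()
    total = torch.cuda.get_device_properties(device).total_memory
    allocated = torch.cuda.memory_allocated(device)
    reserved = torch.cuda.memory_reserved(device)
    print(f"[{tag}] allocated={allocated / 1e9:.2f} GB, "
          f"reserved={reserved / 1e9:.2f} GB, total={total / 1e9:.2f} GB")

# Hypothetical usage: call this right before the burn-in epoch, e.g.
#   log_gpu_memory("before burn-in")
# If the allocated/reserved memory is already above roughly half of the
# total, duplicating the model at burn-in is likely to run out of memory.
```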
Dear authors, can you explain what exactly the burn-in step is? How does it affect training? What are the extreme values you have tested? How is the burn-in step related to the other variables?
And why does my training hit an out-of-memory error exactly at the burn-in step, even after changing the pixel size from 600 to 400 and adjusting the code accordingly?
This is the error I get right at the epoch where the burn-in step kicks in; the code runs fine until then. Even when I move the burn-in step to the 2nd epoch, for instance, it still goes out of memory. Do you have any idea?
```
    return forward_call(*input, **kwargs)
  File "/home/say/NEmo/omni-detr/models/deformable_transformer.py", line 221, in forward
    src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, padding_mask)
  File "/home/say/.conda/envs/deformable/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/say/NEmo/omni-detr/models/ops/modules/ms_deform_attn.py", line 105, in forward
```
Thank you