Why do we need to add tensor_label_k as input during SSOD?
When samples append tensor_label_q, tensor_label_k, and tensor_unlabel_q as input, CUDA memory keeps increasing until it runs out of memory.
The CUDA out of memory information is as follows:
RuntimeError: CUDA out of memory. Tried to allocate 506.00 MiB (GPU 1; 31.75 GiB total capacity; 27.74 GiB already allocated; 424.00 MiB free; 29.83 GiB reserved in total by PyTorch)
RuntimeError: CUDA out of memory. Tried to allocate 1.97 GiB (GPU 0; 79.35 GiB total capacity; 56.13 GiB already allocated; 1.38 GiB free; 57.79 GiB reserved in total by PyTorch)
The code block is as follows: https://github.com/amazon-science/omni-detr/blob/main/engine.py#L184-L198
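For context on why the memory grows: when the strongly and weakly augmented labeled views and the unlabeled view are concatenated into a single batch, one forward pass has to hold activations for all of them at once, so peak memory roughly scales with the number of views. A minimal sketch below illustrates this; it uses a torchvision ResNet-50 as a hypothetical stand-in for the detector backbone, and the image size and batch size are assumptions, not values from Omni-DETR.

```python
import torch
import torchvision

# Hypothetical stand-in for the detector backbone; image size and batch size are assumptions.
backbone = torchvision.models.resnet50().cuda()

def peak_forward_gib(batch):
    """Peak CUDA memory (GiB) used by one forward pass of `batch`."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    _ = backbone(batch.cuda())
    return torch.cuda.max_memory_allocated() / 2**30

b = 2
tensor_label_q = torch.randn(b, 3, 800, 800)    # strongly augmented labeled view
tensor_label_k = torch.randn(b, 3, 800, 800)    # weakly augmented labeled view
tensor_unlabel_q = torch.randn(b, 3, 800, 800)  # strongly augmented unlabeled view

print("one view   :", peak_forward_gib(tensor_label_q), "GiB")
print("three views:", peak_forward_gib(
    torch.cat([tensor_label_q, tensor_label_k, tensor_unlabel_q], dim=0)), "GiB")
```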