731894915 opened this issue 3 years ago
Hi @731894915, in my experiments I didn't consume that much VRAM. Could you please provide more details?
Hi @731894915, you may also try lower precision, such as float16, to reduce the 40 GB to 20 GB. In our experiments, fp16 does not compromise performance much.
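A minimal sketch of what the fp16 route might look like (the feature shapes, the `k1`/`k2` values, and the exact `gnn_reranking` call signature here are assumptions; check the repo's `gnn_reranking.py` for the real interface):

```python
import torch
from gnn_reranking import gnn_reranking  # module shown in the traceback below

# Toy stand-ins for real extracted features (normally float32, L2-normalised).
query_feats = torch.nn.functional.normalize(torch.randn(1000, 512), dim=1)
gallery_feats = torch.nn.functional.normalize(torch.randn(5000, 512), dim=1)

# Cast to fp16 before re-ranking: the large intermediate buffers shrink roughly 2x
# (e.g. the 40 GB case drops to about 20 GB).
query_feats = query_feats.half().cuda()
gallery_feats = gallery_feats.half().cuda()

# k1 / k2 neighbourhood sizes and this signature are placeholders, not the repo's actual API.
indices = gnn_reranking(query_feats, gallery_feats, k1=26, k2=7)
```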
Hi @Xuanmeng-Zhang @layumi, thanks for your reply. The issue occurs during testing on MSMT17, which has 93,820 images across the query and gallery sets.
File "..../gnn_reranking/gnn_reranking.py", line 40, in gnn_reranking A = build_adjacency_matrix.forward(initial_rank.float()) RuntimeError: CUDA out of memory. Tried to allocate 32.79 GiB (GPU 0; 10.76 GiB total capacity; 144.61 MiB already allocated; 9.61 GiB free; 178.00 MiB reserved in total by PyTorch)
From the source code, I found that it requires building a 93820 x 93820 matrix. This matrix takes 93820 × 93820 × 4 bytes / 1024³ ≈ 32.79 GB of VRAM. Since I am using a single RTX 2080Ti with 11 GB VRAM, it might still not work even if I choose fp16.
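For what it's worth, the allocation in the traceback matches a dense float32 matrix of exactly that size, and even fp16 would not fit in 11 GB:

```python
n = 93820                        # query + gallery images in MSMT17
print(n * n * 4 / 1024**3)       # ~32.79 GiB at float32, matching the error message
print(n * n * 2 / 1024**3)       # ~16.40 GiB at float16, still over the 2080 Ti's 11 GB
```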
It also seems that the adjacency matrix cannot be chunked into multiple smaller ones.
This problem also bothers me. Is there any solution to the "CUDA out of memory" error when constructing the adjacency matrix?
Thanks @731894915 and @wang-zm18. We have discussed this problem a lot, but for the time being it is quite tricky to optimise, since the output needs to be float as well.
We also tried the sparse matrix support in PyTorch, but the matrix still needs to be densified to conduct the multiplication.
@Xuanmeng-Zhang, do you have any new ideas about this? uint8? Or any other solution, such as running part of it on the CPU (a large amount of CPU memory may be needed instead)?
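For context, a rough sketch of what the sparse route looks like and where it hits the dense requirement (all sizes below are placeholders): the COO adjacency built from `initial_rank` stores only V × k1 values, but the repo's custom dense CUDA kernels (and some sparse ops, depending on the PyTorch version) still end up needing `A.to_dense()`, which restores the full 32.79 GiB.

```python
import torch

# Placeholder sizes: V images in query + gallery, k1 neighbours kept per row of initial_rank.
V, k1, d = 93820, 26, 512
initial_rank = torch.randint(0, V, (V, k1), device='cuda')   # stand-in for the real ranks

# Sparse COO adjacency: V * k1 stored values instead of a dense V x V float matrix.
rows = torch.arange(V, device='cuda').repeat_interleave(k1)
cols = initial_rank.reshape(-1)
vals = torch.ones(V * k1, device='cuda')
A = torch.sparse_coo_tensor(torch.stack([rows, cols]), vals, (V, V)).coalesce()

# Sparse x dense products work in recent PyTorch without materialising the full matrix:
X = torch.randn(V, d, device='cuda')       # e.g. concatenated query/gallery features
AX = torch.sparse.mm(A, X)                 # (V, d) result, no V x V dense tensor

# The repo's custom CUDA kernels take dense tensors, though, so A.to_dense()
# would bring back the 32.79 GiB allocation from the traceback.
```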
Hi, first of all, thanks for releasing your CUDA operator for re-ranking. However, I encountered memory allocation problems when dealing with large matrices that require more than 40 GB of VRAM. Would it be possible for you to release the CPU version of the GNN re-ranker mentioned in your paper? That would save us a lot of time re-implementing the whole module.