Xuanmeng-Zhang / gnn-re-ranking

A real-time GNN-based method. Understanding Image Retrieval Re-Ranking: A Graph Neural Network Perspective
https://arxiv.org/abs/2012.07620

CUDA out of memory #2

Open 731894915 opened 3 years ago

731894915 commented 3 years ago

Hi, first of all, thanks for releasing your CUDA operator for re-ranking. However, I ran into memory-allocation problems when dealing with large matrices, which require more than 40 GB of VRAM. Would it be possible for you to release the CPU version of the GNN re-ranker mentioned in your paper? That would save us a lot of time re-implementing the whole module.

Xuanmeng-Zhang commented 3 years ago

Hi @731894915, in my experiments I didn't consume that much VRAM. Could you please provide more details?

layumi commented 3 years ago

Hi @731894915 You may also try lower precision, such as float16, to reduce the 40 GB to 20 GB. In our experiments, fp16 does not compromise performance much.
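To illustrate the suggestion above, here is a minimal sketch of the fp16 memory saving. The tensor names and sizes are hypothetical (not from the repo); the point is simply that casting features to `torch.float16` halves the per-element footprint before any large similarity or adjacency matrix is built:

```python
import torch

# Hypothetical feature matrix: N images x D dims (small sizes for illustration).
N, D = 1000, 512
feats = torch.randn(N, D)          # float32: 4 bytes per element

# On a GPU you would cast before computing the similarity / adjacency matrix,
# so every downstream N x N tensor is also half the size.
feats_fp16 = feats.half()          # float16: 2 bytes per element

bytes_fp32 = feats.element_size() * feats.nelement()
bytes_fp16 = feats_fp16.element_size() * feats_fp16.nelement()
print(bytes_fp32 // bytes_fp16)    # 2: fp16 halves the footprint
```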

731894915 commented 3 years ago

Hi @Xuanmeng-Zhang @layumi, thanks for your replies. The issue occurs while testing on MSMT17, which has 93,820 images across the query and gallery sets.

```
File "..../gnn_reranking/gnn_reranking.py", line 40, in gnn_reranking
    A = build_adjacency_matrix.forward(initial_rank.float())
RuntimeError: CUDA out of memory. Tried to allocate 32.79 GiB (GPU 0; 10.76 GiB total capacity; 144.61 MiB already allocated; 9.61 GiB free; 178.00 MiB reserved in total by PyTorch)
```

From the source code, I found that it builds a 93820 x 93820 matrix. In float32, this matrix alone takes 93820 × 93820 × 4 / 1024³ ≈ 32.79 GiB of VRAM. Since I am using a single RTX 2080Ti with 11 GB of VRAM, it might still not fit even with fp16.
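The arithmetic above can be checked directly; this also shows why fp16 alone is not enough on an 11 GB card:

```python
# Reproduce the memory estimate from the traceback: a dense float32
# adjacency matrix for MSMT17's 93,820 query + gallery images.
n = 93820
bytes_fp32 = n * n * 4            # 4 bytes per float32 element
gib_fp32 = bytes_fp32 / 1024**3
print(round(gib_fp32, 2))         # 32.79, matching the CUDA error message

# Halving the precision halves the matrix, but it still exceeds 11 GB.
gib_fp16 = gib_fp32 / 2
print(round(gib_fp16, 2))         # 16.4
```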

It also seems that the adjacency matrix cannot be chunked into multiple smaller ones.

wang-zm18 commented 3 years ago

This problem also bothers me. Is there any solution to the "CUDA out of memory" error when constructing the adjacency matrix?

layumi commented 3 years ago

Thanks @731894915 and @wang-zm18. We have discussed this problem at length, but for the time being it is quite tricky to optimise, since the output needs to be float as well.

We have also tried sparse matrices in PyTorch, but they need to be converted back to dense to perform the multiplication.
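For reference, PyTorch does support sparse-by-dense products via `torch.sparse.mm`, which is one direction people might explore here. Below is a minimal sketch (not the repo's code; all sizes and names are hypothetical) of holding a top-k adjacency in COO format so that storage is O(N·k) instead of O(N²). Note the full `sim` matrix is still materialised in this toy example; at MSMT17 scale the top-k step itself would also have to be computed in chunks:

```python
import torch

# Hypothetical setup: N samples, keep only the top-k neighbours per row,
# so the adjacency stores N*k non-zeros instead of N*N dense entries.
N, D, k = 1000, 64, 10
feats = torch.randn(N, D)
sim = feats @ feats.t()                      # toy-scale only; chunk this for large N
topk_val, topk_idx = sim.topk(k, dim=1)

rows = torch.arange(N).repeat_interleave(k)  # row index for each kept entry
cols = topk_idx.reshape(-1)
vals = topk_val.reshape(-1)
A = torch.sparse_coo_tensor(
    torch.stack([rows, cols]), vals, (N, N)
).coalesce()

# Sparse x dense multiply: memory stays O(N*k + N*D), not O(N^2).
out = torch.sparse.mm(A, feats)
print(out.shape)                             # torch.Size([1000, 64])
```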

@Xuanmeng-Zhang Do you have any new ideas about this? uint8? Or any other solution that runs partly on the CPU (a large amount of CPU memory may be needed instead)?