Parskatt / DKM

[CVPR 2023] DKM: Dense Kernelized Feature Matching for Geometry Estimation
https://parskatt.github.io/DKM/

How many GPUs and consumed memory for your training? #19

Closed TruongKhang closed 1 year ago

TruongKhang commented 1 year ago

Hello @Parskatt ,

Congratulations on your paper acceptance! :D I'd like to know the details: how many GPUs you used, how much memory each has, and how much GPU memory is consumed during training. Thank you!!!

Parskatt commented 1 year ago

Hi! The code here uses quite a bit of memory; the high-resolution version uses about 10GB per sample during training (in fp32). We're working on an autocast version with fp16 and support for DDP, which reduces the memory to about 5GB per sample :)

Parskatt commented 1 year ago

We trained with a total batch size of 8 per GPU (we used the "fat" A100s), but if you have more limited memory you can try reducing the batch size and learning rate, or accumulating gradients.
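Gradient accumulation simulates a larger effective batch by summing gradients over several small forward/backward passes before each optimizer step. A minimal PyTorch sketch (a toy model and hypothetical numbers, not DKM's actual training loop):

```python
import torch

# Toy stand-in for the real model; batch size and accum_steps are illustrative.
model = torch.nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 4  # effective batch = per-step batch * accum_steps

params_before = [p.detach().clone() for p in model.parameters()]

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(2, 16)           # small per-step batch to save memory
    loss = model(x).pow(2).mean()
    (loss / accum_steps).backward()  # divide so accumulated grads average out
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # one update per accum_steps micro-batches
        optimizer.zero_grad()
```

When reducing the effective batch this way isn't enough, lowering the learning rate alongside the batch size (as suggested above) is the usual companion change.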

TruongKhang commented 1 year ago

Hey @Parskatt, thank you! But I'd like to know how much memory each of your GPU cards has.

Did you make FP16 work using pure PyTorch? I used mixed precision in PyTorch Lightning but got a NaN loss error.

Parskatt commented 1 year ago

Those cards have 80GB of memory. That said, we're now pretty close to having a version that works on much smaller GPUs (it should basically work even on 10GB cards). We also got the NaN loss errors; you will probably need to use gradient clipping and gradient scaling. We'll share the updated code in a bit so you can compare :)
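The combination suggested here, loss scaling plus gradient clipping under autocast, can be sketched with PyTorch's built-in `GradScaler`. This is a generic sketch with a toy model, not the updated DKM code; the key detail is calling `unscale_` before clipping so the norm is computed on true gradients:

```python
import torch

# Hypothetical mixed-precision loop; falls back to plain fp32 without a GPU.
use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"

model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)  # no-op when disabled

for _ in range(4):
    x = torch.randn(8, 16, device=device)
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = model(x).pow(2).mean()    # forward pass runs in reduced precision
    optimizer.zero_grad()
    scaler.scale(loss).backward()        # scale loss to avoid fp16 underflow
    scaler.unscale_(optimizer)           # unscale grads BEFORE clipping
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)               # skips the step if grads hold inf/nan
    scaler.update()                      # adjusts the scale factor for next step
```

`scaler.step` silently skips updates on overflowing iterations, which is what keeps occasional fp16 infs from poisoning the weights and producing the NaN losses described above.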

TruongKhang commented 1 year ago

I got it, thank you!!!

Parskatt commented 1 year ago

Congrats on the AAAI23 acceptance btw :) I think making feature-matching methods more efficient is a really important topic (pun intended).

TruongKhang commented 1 year ago

Thank you @Parskatt! :D Since I only have 4 GPUs with 12GB each, training takes very long. Anyway, the performance of your method on MegaDepth and ScanNet is really impressive. Thank you for sharing your code! I hope we can see each other at a conference in the near future! :))))