longzeyilang opened 2 months ago
I don't want to use the MegaDepth and ScanNet data. Can I train with something like the coco_20k data instead?
Basically you need to update the dataloader and loss. I used to have COCO in DKM training. Wouldn't really recommend it though.
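For reference, a minimal sketch of what a COCO-style homography dataloader for a dense matcher could look like. The class name, the batch keys (`im_A`, `im_B`, `warp`) and the warp format are assumptions for illustration, not the actual DKM/RoMa interface; you would still have to match whatever batch format the loss in this repo consumes.

```python
# Hypothetical sketch: synthesize training pairs from single COCO images by
# warping each image with a random homography and returning the dense warp
# as supervision. Not the actual DKM/RoMa dataloader.
import glob

import cv2
import numpy as np
import torch
from torch.utils.data import Dataset


def random_homography(h, w, jitter=0.15):
    # Perturb the four image corners to build a random homography.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = src + np.random.uniform(-jitter, jitter, src.shape).astype(np.float32) * [w, h]
    H, _ = cv2.findHomography(src, dst)
    return H.astype(np.float32)


class CocoHomographyDataset(Dataset):
    def __init__(self, image_dir, size=128):
        self.paths = sorted(glob.glob(f"{image_dir}/*.jpg"))
        self.size = size

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = cv2.imread(self.paths[idx])
        img = cv2.resize(img, (self.size, self.size))
        H = random_homography(self.size, self.size)
        warped = cv2.warpPerspective(img, H, (self.size, self.size))

        # Dense ground-truth warp: where each pixel of image A lands in image B.
        ys, xs = np.mgrid[0:self.size, 0:self.size].astype(np.float32)
        pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ H.T
        warp = pts[..., :2] / pts[..., 2:3]

        to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1).float() / 255.0
        return {
            "im_A": to_tensor(img),
            "im_B": to_tensor(warped),
            "warp": torch.from_numpy(warp),  # (H, W, 2) pixel coords in im_B
        }
```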
why?
Because if you train dense matchers on homographies, they only do well on homographies.
OK, I want to train on my own dataset, with roughly 128x128 image size.
So, following your suggestion: I crop the ScanNet data to 128x128 image size for training, and then run detection on my own dataset. Is that correct?
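If you do crop/resize ScanNet frames to 128x128, note that the camera intrinsics (and depth maps) have to be adjusted consistently. A minimal sketch, assuming each frame comes with a 3x3 intrinsics matrix `K` (variable names are mine, not this repo's):

```python
# Hypothetical sketch: center-crop a frame to a square, resize it to 128x128,
# and update the intrinsics accordingly. Depth maps would need the same crop
# and resize (with nearest-neighbor interpolation).
import cv2
import numpy as np


def center_crop_resize(image, K, out_size=128):
    h, w = image.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2

    # Crop the largest centered square, then resize to out_size.
    cropped = image[y0:y0 + s, x0:x0 + s]
    resized = cv2.resize(cropped, (out_size, out_size))

    # Cropping shifts the principal point; resizing scales focal lengths
    # and principal point by the same factor.
    K_new = K.astype(np.float64)
    K_new[0, 2] -= x0
    K_new[1, 2] -= y0
    K_new[:2] *= out_size / s
    return resized, K_new
```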
Hi, how should I train RoMa with coco_20k data augmentation, like https://github.com/verlab/accelerated_features/blob/main/modules/dataset/augmentation.py? What should I revise?
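I haven't reproduced the linked augmentation.py here; as a rough, hypothetical sketch, photometric augmentation for a coco_20k homography pair could look like the following (the function name is made up, and you would still need to produce ground-truth correspondences in whatever format RoMa's loss expects):

```python
# Hypothetical sketch: photometric augmentation applied independently to the
# two views of a pair. The geometric relation (homography / dense warp) is
# left untouched, so the dense supervision stays valid.
import torchvision.transforms as T

photometric = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.RandomGrayscale(p=0.1),
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
])


def augment_pair(im_A, im_B):
    """im_A, im_B: float tensors of shape (3, H, W) in [0, 1]."""
    return photometric(im_A), photometric(im_B)
```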