Closed xiaodongqq closed 1 year ago
Hi, it seems like the input is None. Can you print the values of the input, e.g., keypoints, scores, and descriptors?
I got this error too.
This error happens at the following code, which corresponds to line 214 of layers.py:
message = torch.einsum('bhnm,bdhm->bdhn', prob, value)
The cause of this error is that prob is None. Could you please fix the code to address this error?
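For reference, here is a minimal sketch of what that einsum computes, illustrated with NumPy (the tensor shapes are assumptions based on typical multi-head attention code: prob is batch × heads × n × m attention weights, value is batch × dim × heads × m per-head values). A None prob fails inside einsum with an opaque error, so guarding the inputs makes the failure explicit:

```python
import numpy as np

def aggregate(prob, value):
    """Weighted aggregation of values by attention probabilities.

    prob:  (b, h, n, m) attention weights
    value: (b, d, h, m) per-head value vectors
    returns message: (b, d, h, n)
    """
    if prob is None or value is None:
        # Fail loudly here instead of inside einsum with an opaque TypeError.
        raise ValueError("prob/value must be tensors, got None")
    return np.einsum('bhnm,bdhm->bdhn', prob, value)

# Small demo with assumed sizes.
b, h, n, m, d = 2, 4, 5, 6, 8
prob = np.random.rand(b, h, n, m)
value = np.random.rand(b, d, h, m)
msg = aggregate(prob, value)
print(msg.shape)  # (2, 8, 4, 5)
```

The real fix, of course, is upstream: whichever layer produces prob should return an actual tensor, but a guard like this at least pinpoints where None first appears.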
This error happens in the SharedAttentionalPropagation layer. So if you train GM instead of DGNNS or AdaGMN, the training works.
I get the following error even when I train GM:
FileNotFoundError: [Errno 2] No such file or directory: '/data/datasets/object_recognition/sain/MegaDepth_undistorted/training_data/matches_sep_spp/0183/16115.npy'
I think dump_megadepth.py does not generate 16115.npy.
Sorry for the late reply. I am a little busy these days and will test the code and fix the bugs as soon as possible. Thank you very much for your patience.
I just fixed the bugs in training DGNNS and AdaGMN.
> I get the following error even when I train GM:
> FileNotFoundError: [Errno 2] No such file or directory: '/data/datasets/object_recognition/sain/MegaDepth_undistorted/training_data/matches_sep_spp/0183/16115.npy'
> I think dump_megadepth.py does not generate 16115.npy.
You might need to dump the training data first.
I dumped the training data, which took a few days, and then ran train.py. The file assets/mega_scene_nmatches_spp.npy holds the number of valid pairs for each scene. It says scene 0183 has 16515 pairs, and the build_dataset_from_offline method in megadepth.py randomly samples 80 pair ids. However, dump_megadepth.py generates only 15687 pairs in the folder matches_sep_spp/0183. I think this mismatch is the cause of the error.
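A quick way to confirm that kind of mismatch is to count the .npy files actually dumped for a scene and list which pair ids are missing. This is only a diagnostic sketch; the directory layout (one `<pair_id>.npy` per pair, as in matches_sep_spp/0183) is an assumption based on the paths in this thread, and the demo below runs on a synthetic scene directory rather than the real dataset:

```python
import os
import tempfile

def count_dumped_pairs(scene_dir):
    """Count the .npy match files actually present for one scene."""
    return sum(1 for f in os.listdir(scene_dir) if f.endswith('.npy'))

def missing_pair_ids(scene_dir, expected_n):
    """Pair ids in [0, expected_n) with no corresponding .npy file on disk."""
    return [i for i in range(expected_n)
            if not os.path.exists(os.path.join(scene_dir, f'{i}.npy'))]

# Demo on a synthetic scene directory; real usage would point at
# .../matches_sep_spp/0183 and take expected_n from mega_scene_nmatches_spp.npy.
with tempfile.TemporaryDirectory() as scene_dir:
    for i in (0, 1, 3):  # pair 2 deliberately missing
        open(os.path.join(scene_dir, f'{i}.npy'), 'w').close()
    print(count_dumped_pairs(scene_dir))   # 3
    print(missing_pair_ids(scene_dir, 4))  # [2]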
Could you please check whether the dumping and training code work correctly from beginning to end, if you have a chance?
Dear all,
Sorry for the late reply. The code for dumping data and training the model has been tested, and the bugs are fixed now. The README file is also updated. Please do a small test on 3 scenes for dumping data and training the model before using the full set of scenes, because this could save you time if something is wrong.
I am going to close this. If you have any questions, feel free to reopen it.
Hello author, I am glad that you replied to my previous questions. When I generated the corresponding keypoints and descriptors according to the README you provided, I got the following errors during training under Windows, which I could not solve. Could you please tell me how to solve this problem? Thank you very much.