rpautrat / SuperPoint

Efficient neural feature detector and descriptor
MIT License

I ran into a problem running step 6 #270

Open zhangsngood opened 1 year ago

zhangsngood commented 1 year ago

Hi, thank you for this great work! I ran into a problem running step 6. The output is below. It says 'Image COCO_train2014_000000242900 has no corresponding label /home/lt/Downloads/dierci/superpoint/EXPER_DIR/outputs/mp_synth-v11_export_ha2/COCO_train2014_000000242900.npz', and I don't know what went wrong. Could you give me some suggestions to solve this problem? Thank you very much!

/home/lt/anaconda3/envs/step2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/lt/anaconda3/envs/step2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/lt/anaconda3/envs/step2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/lt/anaconda3/envs/step2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/lt/anaconda3/envs/step2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/lt/anaconda3/envs/step2/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2022-10-06 10:28:06.715110: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2022-10-06 10:28:06.851357: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-06 10:28:06.851627: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 7.79GiB freeMemory: 7.03GiB
2022-10-06 10:28:06.851642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2022-10-06 10:28:07.055039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-10-06 10:28:07.055068: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2022-10-06 10:28:07.055074: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2022-10-06 10:28:07.055147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2394 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
[10/06/2022 10:28:07 INFO] Running command TRAIN
[10/06/2022 10:28:07 INFO] Number of GPUs detected: 1
Traceback (most recent call last):
  File "experiment.py", line 173, in <module>
    args.func(config, output_dir, args)
  File "experiment.py", line 110, in _cli_train
    train(config, config['train_iter'], output_dir, pretrained_dir)
  File "experiment.py", line 35, in train
    with _init_graph(config) as net:
  File "/home/lt/anaconda3/envs/step2/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "experiment.py", line 84, in _init_graph
    dataset = get_dataset(config['data']['name'])(config['data'])
  File "/home/lt/Downloads/dierci/superpoint/datasets/base_dataset.py", line 102, in __init__
    self.dataset = self._init_dataset(self.config)
  File "/home/lt/Downloads/dierci/superpoint/datasets/coco.py", line 53, in _init_dataset
    assert p.exists(), 'Image {} has no corresponding label {}'.format(n, p)
AssertionError: Image COCO_train2014_000000242900 has no corresponding label /home/lt/Downloads/dierci/superpoint/EXPER_DIR/outputs/mp_synth-v11_export_ha2/COCO_train2014_000000242900.npz

rpautrat commented 1 year ago

Hi, can you compare the number of files in your ground truth folder /home/lt/Downloads/dierci/superpoint/EXPER_DIR/outputs/mp_synth-v11_export_ha2 with the number of images in your COCO dataset? These should be equal; if they are not, some files were probably corrupted during the GT generation.
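For reference, here is a minimal Python sketch of that comparison. Only the label directory is taken from the error message above; the COCO image directory is an assumption and should be adjusted to wherever the dataset was downloaded.

```python
# Minimal sketch: compare the number of COCO training images with the number of
# exported .npz ground truth files, and list a few images that have no label.
from pathlib import Path

# Assumed COCO location; adjust to your setup.
image_dir = Path('/home/lt/Downloads/dierci/superpoint/DATA_DIR/COCO/train2014')
# Copied from the error message above.
label_dir = Path('/home/lt/Downloads/dierci/superpoint/EXPER_DIR/outputs/mp_synth-v11_export_ha2')

image_names = sorted(p.stem for p in image_dir.glob('*.jpg'))
label_names = {p.stem for p in label_dir.glob('*.npz')}

print('images:', len(image_names), ' labels:', len(label_names))
missing = [n for n in image_names if n not in label_names]
print('images without a label:', len(missing))
print('first few:', missing[:5])
```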

zhangsngood commented 1 year ago

Thank you for your reply. I found that there is no GT folder mp_synth-v11_export_ha2 in my output. I would like to know how to obtain the complete GT folder, and which step generates the GT.

rpautrat commented 1 year ago

ReadMe => Step 2 :)

zhangsngood commented 1 year ago

ReadMe => Step 2 :)

Oh, thank you for reminding me! I roughly understand now. So I only need to modify the output path in step 2 and run the command to generate the GT, is that right? Will the weights in the folder mp_synth-v11_export_ha2 be different from those in the folder /home/lt/Downloads/dierci/superpoint/EXPER_DIR/outputs/magic-point_coco-export1 ?

rpautrat commented 1 year ago

Yes, you can name the output path whatever you want, and then in your training config (e.g. superpoint_coco.yaml) you need to update the data->labels field with the same path.

The files in this output path are the generated GT keypoint features for each image in your dataset.
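A quick way to sanity-check that update is sketched below. The config filename and its location relative to the working directory are assumptions, and depending on the repo setup the labels path may be resolved relative to the experiment/output directory rather than the current directory.

```python
# Minimal sketch: verify that the data->labels field of the training config
# points to a folder that actually contains the exported .npz ground truth.
from pathlib import Path
import yaml

# Assumed config path; adjust to your checkout.
with open('configs/superpoint_coco.yaml') as f:
    config = yaml.safe_load(f)

labels = config['data']['labels']
print('data->labels:', labels)

# Note: this path may be interpreted relative to the experiment directory
# rather than the current working directory.
label_dir = Path(labels)
if label_dir.is_dir():
    print('number of .npz label files:', len(list(label_dir.glob('*.npz'))))
else:
    print('labels path does not exist as given; check where it is resolved from')
```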