ahojnnes / local-feature-evaluation

Comparative Evaluation of Hand-Crafted and Learned Local Features

extremely slow running time of Gendarmenmarkt dataset #9

Closed bfan closed 6 years ago

bfan commented 6 years ago

I followed the instructions to reproduce the statistics on the Gendarmenmarkt dataset. After feature matching, I ran:

```
python scripts/reconstruction_pipeline.py \
    --dataset_path path/to/Fountain \
    --colmap_path path/to/colmap/build/src/exe
```

This script has already been running for over 3 days, so I think something might be wrong. I checked the task manager and found two processes running:

```
~/colmap/build/src/exe/dense_fuser --workspace_path ~/data/Gendarmenmarkt/dense/0 --input_type photometric --output_path ~/data/Gendarmenmarkt/dense/0
~/colmap/build/src/exe/dense_fuser --workspace_path ~/data/Gendarmenmarkt/dense/0 --input_type photometric --output_path ~/data/Gendarmenmarkt/dense/0/fused.
```

Is this normal? Otherwise, how can I accelerate the process?

ahojnnes commented 6 years ago

Which Matlab version are you using and what GPU does your system have?

bfan commented 6 years ago

MATLAB R2015b; the GPU is a TITAN X (Pascal). Below is the last output I got:

```
...
WARNING: Ignoring image 981374980_1126f5a860_o.jpg, because input does not exist.
WARNING: Ignoring image 999273378_59e22a0646_o.jpg, because input does not exist.
Fusing image [1/1016] in 8.973s (29945 points)
```

I downloaded the database file you supplied, and I did not see these warnings when loading the features and matches with your script reconstruction_pipeline.py.

ahojnnes commented 6 years ago

You need MATLAB 2016b or newer to run the matching on the GPU, which is why the pipeline is so slow for you. Matching takes most of the time for most datasets. If you don't have access to a newer MATLAB version, you could code the matching yourself using OpenCV.
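As a rough illustration of what coding the matching yourself involves, here is a minimal sketch of brute-force nearest-neighbor matching with Lowe's ratio test, written in plain NumPy so it stays self-contained (OpenCV's `cv2.BFMatcher.knnMatch` does the same much faster; the 0.8 ratio threshold is my assumption, not necessarily the benchmark's setting):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Brute-force matching with Lowe's ratio test.

    desc1: (n1, dim) descriptors of image 1
    desc2: (n2, dim) descriptors of image 2 (needs n2 >= 2)
    Returns a list of (index_in_desc1, index_in_desc2) pairs.
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    matches = []
    for i in range(desc1.shape[0]):
        best, second = order[i, 0], order[i, 1]
        # Keep a match only if the best distance is clearly smaller
        # than the second-best one.
        if dists[i, best] < ratio * dists[i, second]:
            matches.append((i, int(best)))
    return matches
```

For large descriptor sets the `(n1, n2)` distance matrix gets big, so a real implementation would match in chunks or on the GPU.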

bfan commented 6 years ago

The matching is already done; I ran the matching_pipeline in MATLAB. The slow part is the final step in your instructions, i.e.,

After finishing the matching pipeline, run the reconstruction using:

```
python scripts/reconstruction_pipeline.py \
    --dataset_path path/to/Fountain \
    --colmap_path path/to/colmap/build/src/exe
```

And it reaches the final step, dense_fuser, as I can see its running process. According to your instructions, this step should not be this slow. Do I need to set something up?

ahojnnes commented 6 years ago

Are you running the dense fusion on a network drive by any chance? Or a slow drive in general?
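For what it's worth, a quick sequential write test can rule out a slow drive. A minimal sketch (the probe file name is made up; point `path` at the drive holding the dense workspace):

```python
import os
import time

def write_speed_mb_s(path, size_mb=256):
    """Rough sequential write-speed estimate, in MB/s, for the drive
    holding `path`. Writes and then deletes a temporary probe file."""
    buf = os.urandom(1024 * 1024)  # 1 MiB of data to write repeatedly
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to disk, not just page cache
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

# e.g.: write_speed_mb_s("/path/to/dense/workspace/io_probe.bin")
```

Single-digit MB/s would point at a network or failing drive; a healthy local disk should be well above that.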

bfan commented 6 years ago

That should not be the case; the disk I/O works fine. I checked that no new file was generated even though the dense_fuser process had been running for over 5 days... Previously, on the smaller Fountain dataset, everything went well. Can you give me some pointers to solve this problem?
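One simple way to confirm that the fuser is stuck rather than just slow is to check whether its output file is still gaining bytes. A minimal sketch (the `fused.ply` name is an assumption based on COLMAP's usual output; adjust to whatever `--output_path` points at):

```python
import os
import time

def is_growing(path, interval_s=60):
    """Return True if `path` gains bytes over `interval_s` seconds."""
    size_before = os.path.getsize(path)
    time.sleep(interval_s)
    return os.path.getsize(path) > size_before

# e.g.: is_growing(os.path.expanduser(
#     "~/data/Gendarmenmarkt/dense/0/fused.ply"))
```

If the size never changes over several minutes while the process keeps consuming CPU, that supports the "stuck" diagnosis.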

ahojnnes commented 6 years ago

How much RAM do you have on the machine?

bfan commented 6 years ago

256 GB of RAM, with 40 CPU cores (2.6 GHz).

ahojnnes commented 6 years ago

Could you share the dense reconstruction folder with me?

bfan commented 6 years ago

Sure. The folder is large (over 1 GB), so I uploaded it to my homepage. You can get it from: www.nlpr.ia.ac.cn/fanbin/dense.rar

bfan commented 6 years ago

Did you find any problem with my run?

ahojnnes commented 6 years ago

Please retry with the latest version of the benchmark. I upgraded the benchmark to the latest COLMAP version and am currently updating the numbers for the existing descriptors.