AIBluefisher / DAGSfM

Distributed and Graph-based Structure from Motion. This project includes the official implementation of our Pattern Recognition 2020 paper: Graph-Based Parallel Large Scale Structure from Motion.
https://aibluefisher.github.io/GraphSfM/
BSD 3-Clause "New" or "Revised" License

DAGSfM: Distributed and Graph-Based Structure-from-Motion Library

If you use this project for your research, please cite:

@article{article,
  author = {Chen, Yu and Shen, Shuhan and Chen, Yisong and Wang, Guoping},
  year = {2020},
  month = {07},
  pages = {107537},
  title = {Graph-Based Parallel Large Scale Structure from Motion},
  journal = {Pattern Recognition},
  doi = {10.1016/j.patcog.2020.107537}
}
@inproceedings{schoenberger2016sfm,
    author={Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
    title={Structure-from-Motion Revisited},
    booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2016},
}

2. How to Build

2.1 Required

Basic Requirements

sudo apt-get install \
    git \
    cmake \
    build-essential \
    libboost-program-options-dev \
    libboost-filesystem-dev \
    libboost-graph-dev \
    libboost-regex-dev \
    libboost-system-dev \
    libboost-test-dev \
    libeigen3-dev \
    libsuitesparse-dev \
    libfreeimage-dev \
    libgoogle-glog-dev \
    libgflags-dev \
    libglew-dev \
    qtbase5-dev \
    libqt5opengl5-dev \
    libcgal-dev \
    libcgal-qt5-dev

ceres-solver

sudo apt-get install libatlas-base-dev libsuitesparse-dev
git clone https://ceres-solver.googlesource.com/ceres-solver
cd ceres-solver
git checkout $(git describe --tags) # Checkout the latest release
mkdir build
cd build
cmake .. -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF
make
sudo make install

igraph

igraph is used for Community Detection and graph visualization.

sudo apt-get install build-essential libxml2-dev
wget https://igraph.org/nightly/get/c/igraph-0.7.1.tar.gz
tar -xvf igraph-0.7.1.tar.gz
cd igraph-0.7.1
./configure
make
make check
sudo make install

rpclib

rpclib is a lightweight Remote Procedure Call (RPC) library. Other RPC libraries, such as gRPC, were not chosen by this project, for flexibility and convenience.

git clone https://github.com/qchateau/rpclib.git
cd rpclib
mkdir build && cd build
cmake ..
make -j8
sudo make install

2.2 Optional

Python Modules (Python 2.7 only)

This module is used for similarity searching; it still needs more evaluation.

sudo pip install scikit-learn tensorflow-gpu==1.7.0 scipy numpy progressbar2

# if the version of scikit-learn is not compatible, upgrade it by:
# pip install --upgrade scikit-learn

2.3 Build DAGSfM

git clone https://github.com/AIBluefisher/DAGSfM.git
cd DAGSfM
mkdir build && cd build
cmake .. && make -j8

3. Usage

As our algorithm is not integrated into the COLMAP GUI, scripts to run the distributed SfM are provided (we hope someone is interested in integrating this pipeline into the GUI):

Sequential Mode

sudo chmod +x scripts/shell/distributed_sfm.sh
./scripts/shell/distributed_sfm.sh $image_dir $num_images_ub $log_folder $completeness_ratio

Distributed Mode

(1) First, start a server on every worker:

cd build/src/exe
./colmap local_sfm_worker --output_path=$output_path --port=$your_port

The RPC server established on each local worker listens on the given port and waits until the master assigns it a job. Multiple workers can also run on one machine, but note that each port number must be unique!

(2) Then, the IP and port of every server should be written to a config.txt file in the following format:

server_num
ip1 port1 image_path1
ip2 port2 image_path2
... ...

*note: the image_path of each worker must be consistent with its --output_path option.*
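As a concrete example, a config.txt for a hypothetical two-worker setup could be generated as below; the IPs, port, and paths are placeholders, not values prescribed by the project:

```shell
# Hypothetical two-worker setup; replace the IPs, ports, and paths with your own.
# Line 1 is the number of servers; each following line is "ip port image_path",
# where image_path must match that worker's --output_path.
cat > config.txt <<'EOF'
2
192.168.1.10 8080 /data/worker1
192.168.1.11 8080 /data/worker2
EOF
```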

(3) Finally, start the master:

cd GraphSfM_PATH/scripts/shell
# The project folder must contain a folder "images" with all the images.
DATASET_PATH=/path/to/project
CONFIG_FILE_PATH=/path_to_config_file
num_images_ub=100
log_folder=/path_to_log_dir

./distributed_sfm.sh $DATASET_PATH $num_images_ub $log_folder $CONFIG_FILE_PATH

distributed_sfm.sh actually executes the following command in the SfM module:

/home/chenyu/Projects/Disco/build/src/exe/colmap distributed_mapper \
$DATASET_PATH/$log_folder \
--database_path=$DATASET_PATH/database.db \
--image_path=$DATASET_PATH/images \
--output_path=$DATASET_PATH/$log_folder \
--config_file_name=$CONFIG_FILE_PATH/config.txt \
--num_workers=8 \
--distributed=1 \
--repartition=0 \
--num_images=100 \
--script_path=/home/chenyu/Projects/Disco/scripts/shell/similarity_search.sh \
--dataset_path=$DATASET_PATH \
--output_dir=$DATASET_PATH/$log_folder \
--mirror_path=/home/chenyu/Projects/Disco/lib/mirror \
--assign_cluster_id=0 \
--write_binary=1 \
--retriangulate=0 \
--final_ba=1 \
--select_tracks_for_bundle_adjustment=1 \
--long_track_length_threshold=10 \
--graph_dir=$DATASET_PATH/$log_folder \
--num_images_ub=$num_images_ub \
--completeness_ratio=0.7 \
--relax_ratio=1.3 \
--cluster_type=SPECTRA # options: SPECTRA, NCUT, COMMUNITY_DETECTION
# --max_num_cluster_pairs=$max_num_cluster_pairs \
# --image_overlap=$image_overlap \

Thus, you need to replace the /home/chenyu/Projects/Disco/build/src/exe/colmap path and the --script_path and --mirror_path options with your own.

The parameters may need to be tuned for different purposes.

If the run succeeds, camera poses and sparse points are written to the $DATASET/sparse folder; you can import them into COLMAP's GUI to inspect the visual result:

./build/src/exe/colmap gui

For small-scale reconstruction, you can set $num_images_ub equal to the number of images; the program then simply uses COLMAP's incremental SfM pipeline.

For large-scale reconstruction, our GraphSfM is highly recommended, and these parameters should be tuned carefully: larger $num_images_ub and $completeness_ratio make the reconstruction more robust, but may also reduce efficiency and can even degenerate to the incremental pipeline.
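The trade-off above can be sketched in shell. The 100-image threshold and the synthetic dataset below are illustrative assumptions, not values prescribed by the project:

```shell
# Illustrative sketch: for small scenes, set num_images_ub to the image count
# so the whole scene fits in one cluster (i.e. plain incremental SfM); cap it
# for large scenes so they are partitioned. The dataset here is synthetic.
DATASET_PATH=$(mktemp -d)
mkdir -p "$DATASET_PATH/images"
for i in $(seq 1 40); do touch "$DATASET_PATH/images/img_$i.jpg"; done

num_images=$(ls "$DATASET_PATH/images" | wc -l)
if [ "$num_images" -le 100 ]; then
    num_images_ub=$num_images   # one cluster: degenerates to incremental SfM
else
    num_images_ub=100           # partition large scenes into ~100-image clusters
fi
echo "$num_images_ub"
```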

Segment large scale maps

In some cases the map is so large that a subsequent Multi-View Stereo step becomes infeasible because of memory limitations. The point_cloud_segmenter can split an original map stored in COLMAP format into multiple smaller maps.

./build/src/exe/colmap point_cloud_segmenter \
--colmap_data_path=path_to_colmap_data \
--output_path=path_to_store_small_maps \
--max_image_num=max_number_image_for_small_map \
--write_binary=1

Before running this command, make sure path_to_colmap_data contains images.txt, cameras.txt, and points3D.txt, or images.bin, cameras.bin, and points3D.bin.
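A small pre-flight check along these lines can catch a missing model before launching the segmenter; check_colmap_model is a hypothetical helper written for this README, not part of DAGSfM or COLMAP:

```shell
# Hypothetical helper: verify a COLMAP sparse-model directory contains
# images, cameras, and points3D in either text or binary form.
check_colmap_model() {
    dir="$1"
    for base in images cameras points3D; do
        if [ ! -f "$dir/$base.txt" ] && [ ! -f "$dir/$base.bin" ]; then
            echo "missing $base"
            return 1
        fi
    done
    echo "ok"
}
```

For example, `check_colmap_model path_to_colmap_data` prints `ok` only when all three model files are present in one of the two formats.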

ChangeLog