VladimirYugay / KinectFusion

3D geometry estimation from RGB-D data using Kinect Fusion approach

Kinect Fusion

Project overview

The goal of this project is to obtain a 3D reconstruction of a static scene recorded with a Kinect camera. The Kinect provides RGB as well as depth information, as shown in the figure below.

(figure: RGB and depth frames captured by the Kinect)

Results

Below you can see the result of a raycast performed after ~120 frames.

(figure: raycast of the reconstructed volume after ~120 frames)

How to run the code?

First, clone the repo with `git clone https://github.com/VladimirYugay/KinectFusion.git`.

Setup (Linux)

Setup (Windows)

Setup (Mac)

Download dataset

You can either use your own image sequence recorded with a Kinect camera v2, or download the same dataset we used here. Extract the folder and move it to KinectFusion/data/rgbd_dataset_freiburg1_xyz.

Build and run

In order to build and run the project, follow these steps:

    cd ..
    mkdir build
    cd build
    cmake ..
    make
    ./kinect_fusion

Contributor Guidelines

If you want to contribute, please use cpplint. You can install it via `pip install cpplint` and run it with `cpplint [OPTIONS] files`.

CUDA Implementation details

You can find the GPU implementation on the vy/unstable/gpu_integration and vy/icp/gpu_icp_pose_estimation branches.

Things implemented in CUDA

  1. Finding correspondences
  2. ICP linear system construction (on the vy/icp/gpu_icp_pose_estimation branch)
  3. Volume integration
| Operation | CPU time | GPU time | CPU/GPU ratio |
| --- | --- | --- | --- |
| Finding correspondences | 138,609 | 23,792 | 6 |
| Volume integration | 8,787,468 | 694,208 | 12 |
| Linear system construction | 277 | 488 | 0.7 |