Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting

Introduction

This repository contains the research code for Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting, accepted at NeurIPS 2024. A video introduction is available at https://recorder-v3.slideslive.com/?share=92624&s=576b3d56-3c7c-46aa-abdf-1eac1d3a4d1d

The code implements the event-based 3D reconstruction algorithm described in the paper, including key components such as the photovoltage contrast estimation module and a novel event-based loss for optimizing reconstruction quality.
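To give a rough sense of the event-based loss idea (this is an illustrative sketch, not the paper's exact formulation), an event camera fires a signed event whenever the log intensity at a pixel changes by a contrast threshold C, so the accumulated signed event count times C approximates the log-intensity difference between two timestamps. A rendered intensity difference can then be supervised against that approximation:

```python
import numpy as np

def event_loss_sketch(log_I_prev, log_I_curr, event_accum, C=0.2):
    """Toy event-consistency loss (illustrative only).

    log_I_prev, log_I_curr: rendered log-intensity images at two timestamps.
    event_accum: per-pixel signed event count between the two timestamps.
    C: assumed contrast threshold of the event camera.
    """
    pred_diff = log_I_curr - log_I_prev      # rendered log-intensity change
    target_diff = C * event_accum            # change implied by the events
    return float(np.mean((pred_diff - target_diff) ** 2))
```

The loss is zero when the rendered change agrees with the event stream, which is the basic consistency the reconstruction optimizes for; the paper's actual loss differs in details.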

The code has been tested on Windows 11. If you encounter issues deploying it in other environments, please feel free to open an issue.

Installation

Follow the steps below to set up the environment and install dependencies.

  1. Clone the repository:

    git clone https://github.com/lanpokn/Event-3DGS.git
    cd Event-3DGS
  2. Install the necessary dependencies:

    This project is based on 3DGS (https://github.com/graphdeco-inria/gaussian-splatting), so please refer to its installation instructions.

    Some parts of the code use additional libraries left over from exploratory experiments; these can be ignored in normal use.

Dataset Format

To ensure proper usage, this section describes how the data should be organized.

  1. Data organization:

    • We organize the data into the following structure:
      /path/to/dataset/
      ├── images/
      ├── images_event/
      ├── images_blurry/(optional)
      ├── renders/
      ├── sparse/...
  2. Data explanation:

    • The 'images' folder stores the pure intensity images estimated from events. During the optimization process, the images in this folder are responsible for providing pure intensity.
    • The 'images_event' folder also stores the pure intensity images estimated from events, but the images in this folder are solely used to provide intensity differences during the optimization process. Since the estimation methods for intensity and intensity differences may vary, I separated them when validating the algorithm.
    • 'images_blurry' is optional and contains blurry images captured by an RGB camera, primarily used to validate its deblurring capability as presented in the original paper.
    • 'renders' stores the RGB ground truth, which is prepared for testing and does not participate in the reconstruction process.
    • The entire 'sparse' folder contains the camera poses in COLMAP format. For using COLMAP, please refer to https://colmap.github.io/. If you only have event data without pose or RGB information, you can first estimate the intensity images from the events and then use these intensity images for calibration with COLMAP.
  3. Note:

    The image filenames in all storage folders must correspond one-to-one and be consistent with the results stored in COLMAP; otherwise, they will not be readable.

    There are many methods to obtain 'images' and 'images_event,' including but not limited to neural networks, classical integration methods, and filtering methods. You can choose based on your specific situation. If you're unfamiliar with this area, you can refer to https://github.com/ziweiWWANG/AKF and https://github.com/uzh-rpg/rpg_e2vid. For details on our method, please refer to the paper.

  4. Data Example: Here, we provide a simulated scene (with blurred images and event data simulated based on an existing dataset) for easy debugging. The images and images_event have already been generated based on the simulated events.

    The file is shared via Baidu Netdisk: train_colmap_easy(1).zip
    Link: https://pan.baidu.com/s/1I9AP7ihz8wTYb2gmH0py1Q Access code: m388
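As a toy illustration of the classical integration methods mentioned in the note above, events can be directly accumulated into an intensity image under an idealized event model with a fixed contrast threshold. This sketch is for intuition only and is not the estimation method used in the paper; real pipelines such as E2VID or AKF are far more robust:

```python
import numpy as np

def integrate_events(events, height, width, C=0.2, L0=0.0):
    """Naive direct integration of events into an intensity image.

    events: iterable of (x, y, polarity) tuples, polarity in {-1, +1}.
    Each event shifts the log intensity at its pixel by +/-C, starting
    from a flat initial log intensity L0.
    """
    log_L = np.full((height, width), L0, dtype=np.float64)
    for x, y, p in events:
        log_L[y, x] += C * p                 # polarity-weighted update
    return np.exp(log_L)                     # back to linear intensity
```

In practice noise, an unknown per-pixel threshold, and the unknown initial image make direct integration drift quickly, which is why learned or filtering-based estimators are preferred.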

Getting Started

Once you've set up the environment and arranged the dataset, you can use train.py to reconstruct the scene.

This file's usage is generally consistent with the original 3DGS, but additional command-line parameters have been added to accommodate the event modality. Specifically:

Command-Line Options
Example Use

As an example, the following arguments correspond to a launch.json configuration in VS Code:

`"args": ["-s", "your_dataset_path/","--gray","--event","--iterations","8000","-m","your_dataset_path/","--start_checkpoint","your_checkpoint_path"],`
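The same arguments can be passed directly on the command line. The command below simply restates the launch.json entry above; the paths are placeholders, and `--start_checkpoint` is only needed when resuming from a saved checkpoint:

```shell
python train.py -s your_dataset_path/ --gray --event --iterations 8000 \
    -m your_dataset_path/ --start_checkpoint your_checkpoint_path
```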

Others (explanation of minor issues)

Acknowledgments

We thank the authors of https://github.com/graphdeco-inria/gaussian-splatting and the other open-source libraries used in this work.