samiarja / ev_deep_motion_segmentation

Motion Segmentation for Neuromorphic Aerial Surveillance
https://samiarja.github.io/evairborne/
GNU Lesser General Public License v2.1
airborne event-camera motion-segmentation


This is the official repository for Motion Segmentation for Neuromorphic Aerial Surveillance by Sami Arja, Alexandre Marcireau, Saeed Afshar, Bharath Ramesh, Gregory Cohen

Project Page Paper Poster

If you use this work in your research, please cite it:

@misc{arja_motionseg_2024,
    title = {Motion Segmentation for Neuromorphic Aerial Surveillance},
    url = {http://arxiv.org/abs/2405.15209},
    publisher = {arXiv},
    author = {Arja, Sami and Marcireau, Alexandre and Afshar, Saeed and Ramesh, Bharath and Cohen, Gregory},
    month = oct,
    year = {2024},
}

Setup

Requirements

Tested environments

Installation

git clone https://github.com/samiarja/ev_deep_motion_segmentation.git
cd ev_deep_motion_segmentation
conda env create -f environment.yml
python3 -m pip install -e .

Download dataset

You can download all the datasets from Google Drive. The structure of the folder is as follows:

(root)/Dataset/
        EV-Airborne/
            (sequence_name1).es
            (sequence_name2).es
            (sequence_name3).es
            .....
        EV-IMO/

        EV-IMO2/

        DistSurf/

        HKUST-EMS/

        EED/
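Given that layout, enumerating the available sequences per dataset can be sketched in a few lines of Python (`list_sequences` is an illustrative helper, not part of the repository):

```python
from pathlib import Path

def list_sequences(root="Dataset"):
    """Map each dataset folder under `root` to its .es sequence files.

    Illustrative helper: assumes the Dataset/ layout shown above,
    with one subfolder per dataset containing (sequence_name).es files.
    """
    return {
        d.name: sorted(p.name for p in d.glob("*.es"))
        for d in Path(root).iterdir()
        if d.is_dir()
    }
```

For example, `list_sequences("Dataset")` would return something like `{"EV-Airborne": ["..._events.es", ...], "EED": [...], ...}` once the download is unpacked.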

Run

Setup the config file

Please see ./config/config.yaml for an example of how to set up the initial parameters.

Modify the entries to specify the dataset, seq, and other parameters.
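A minimal sketch of those two entries (key names assumed from the text; check the shipped ./config/config.yaml for the authoritative set of parameters):

```yaml
dataset: EED               # dataset folder name under Dataset/
seq: what_is_background    # sequence name parsed from the .es filename
```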

The seq name can be extracted from the .es filename. For example, if the filename is: EED_what_is_background_events.es

Then the seq name is what_is_background. It is always the part between the dataset name (e.g. EED) and events. I will make this easier in future commits.
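Under that naming rule, extracting the seq name can be sketched as follows (`seq_name` is a hypothetical helper, not part of the repository):

```python
import re
from pathlib import Path

def seq_name(es_path, dataset):
    """Extract the sequence name from an .es filename.

    Assumes the convention <dataset>_<seq>_events.es described above,
    so the seq name is the part between the dataset prefix and the
    "events" suffix. Illustrative helper, not part of the repository.
    """
    stem = Path(es_path).stem  # drop the .es extension
    match = re.fullmatch(rf"{re.escape(dataset)}_(.+)_events", stem)
    if match is None:
        raise ValueError(f"unexpected filename: {es_path}")
    return match.group(1)

# seq_name("EED_what_is_background_events.es", "EED") -> "what_is_background"
```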

Execute motion segmentation

python main.py

The output from every layer of the network is saved in subfolders under ./output in this format:

input_frames
RAFT_FlowImages_gap1
RAFT_Flows_gap1
coarse
bs
tt_adapt
rgb
motion_comp
motion_comp_large_delta
config_EV-Airborne_recording_2023-04-26_15-30-21_cut2.yaml
EV-Airborne_recording_2023-04-26_15-30-21_cut2_events_with_motion_inter.h5
motion_segmentation_network_EV-Airborne_recording_2023-04-26_15-30-21_cut2.gif
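A quick way to check that a run produced the per-layer subfolders listed above (an illustrative helper, not part of the repository; the list of folder names is taken from the layout shown):

```python
from pathlib import Path

# Subfolder names as listed in the README output layout.
EXPECTED = [
    "input_frames", "RAFT_FlowImages_gap1", "RAFT_Flows_gap1",
    "coarse", "bs", "tt_adapt", "rgb",
    "motion_comp", "motion_comp_large_delta",
]

def missing_outputs(output_dir="./output"):
    """Return the expected subfolders that main.py has not produced yet."""
    root = Path(output_dir)
    return [name for name in EXPECTED if not (root / name).is_dir()]
```

An empty return value means every expected subfolder is present.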

Description of the content of each subfolder:

A faster implementation is also provided in main_fast_single_object.py; it only works when there is a single moving object.

python main_fast_single_object.py

Acknowledgement

This code is built on top of TokenCut, DINO, RAFT, and event_warping (our previous work). We sincerely thank these authors for their great work.