This is the official repository for Motion Segmentation for Neuromorphic Aerial Surveillance by Sami Arja, Alexandre Marcireau, Saeed Afshar, Bharath Ramesh, Gregory Cohen
Project Page | Paper | Poster
If you use this work in your research, please cite it:
```bibtex
@misc{arja_motionseg_2024,
    title = {Motion Segmentation for Neuromorphic Aerial Surveillance},
    url = {http://arxiv.org/abs/2405.15209},
    publisher = {arXiv},
    author = {Arja, Sami and Marcireau, Alexandre and Afshar, Saeed and Ramesh, Bharath and Cohen, Gregory},
    month = oct,
    year = {2024},
}
```
```sh
git clone https://github.com/samiarja/ev_deep_motion_segmentation.git
cd ev_deep_motion_segmentation
conda env create -f environment.yml
python3 -m pip install -e .
```
You can download all the datasets from Google Drive. The structure of the folder is as follows:
```
(root)/Dataset/
    EV-Airborne/
        (sequence_name1).es
        (sequence_name2).es
        (sequence_name3).es
        ...
    EV-IMO/
    EV-IMO2/
    DistSurf/
    HKUST-EMS/
    EED/
```
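A small helper along these lines can enumerate the recordings available for one dataset. This is a hypothetical sketch, not code from the repo; it only assumes the `Dataset/` layout shown above and demonstrates itself on a throwaway copy of that structure.

```python
# Hypothetical helper (not part of the repo): list the .es recordings
# for one dataset, assuming the Dataset/ layout shown above.
import tempfile
from pathlib import Path

def list_sequences(root, dataset):
    """Return the sorted file stems of all .es recordings under Dataset/<dataset>/."""
    return sorted(p.stem for p in (Path(root) / "Dataset" / dataset).glob("*.es"))

# Demonstrate on a throwaway copy of the expected folder structure.
root = Path(tempfile.mkdtemp())
ev_airborne = root / "Dataset" / "EV-Airborne"
ev_airborne.mkdir(parents=True)
for name in ["seq_a.es", "seq_b.es"]:
    (ev_airborne / name).touch()

print(list_sequences(root, "EV-Airborne"))  # -> ['seq_a', 'seq_b']
```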
Please see `./config/config.yaml` for an example of how to set up the initial parameters. Modify the entries to specify the `dataset`, `seq`, and other parameters.
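The exact keys may change between commits, so check `./config/config.yaml` for the authoritative list; a minimal sketch of the two entries mentioned above (both values hypothetical) might look like:

```yaml
# Hypothetical example only -- see ./config/config.yaml for the real keys.
dataset: EED
seq: what_is_background
```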
The `seq` name can be extracted from the `.es` filename. If the filename is `EED_what_is_background_events.es`, then the `seq` name is `what_is_background`: it always sits between the dataset name (e.g. `EED`) and `events`. I will make this easier in future commits.
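The naming rule above can be automated. The helper below is a hypothetical sketch (`extract_seq` is not a function in this repo), assuming the `<dataset>_<seq>_events.es` pattern just described:

```python
# Hypothetical helper (not part of the repo): extract the `seq` name from
# an .es filename, assuming the <dataset>_<seq>_events.es pattern above.
from pathlib import Path

def extract_seq(filename, dataset):
    stem = Path(filename).stem              # drop the .es extension
    prefix, suffix = dataset + "_", "_events"
    if not (stem.startswith(prefix) and stem.endswith(suffix)):
        raise ValueError(f"unexpected filename pattern: {filename}")
    return stem[len(prefix):-len(suffix)]   # keep what sits between them

print(extract_seq("EED_what_is_background_events.es", "EED"))  # -> what_is_background
```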
```sh
python main.py
```
The output from every layer of the network is saved in subfolders under `./output` in this format:

```
input_frames
RAFT_FlowImages_gap1
RAFT_Flows_gap1
coarse
bs
tt_adapt
rgb
motion_comp
motion_comp_large_delta
config_EV-Airborne_recording_2023-04-26_15-30-21_cut2.yaml
EV-Airborne_recording_2023-04-26_15-30-21_cut2_events_with_motion_inter.h5
motion_segmentation_network_EV-Airborne_recording_2023-04-26_15-30-21_cut2.gif
```
Description of the contents of each subfolder:

- `RAFT_FlowImages_gap1` / `RAFT_Flows_gap1`: optical flow from RAFT, stored in `.flo` format.
- `coarse`: coarse masks from TokenCut, which uses the optical flow and the event time surface.
- `bs`: masks refined with the bilateral solver. If `crf: true` is set in the config, a `crf` folder with CRF-refined masks is created as well.
- `tt_adapt`: results of test-time adaptation, applied to the output of `bs` or `crf`.
- `motion_comp` / `motion_comp_large_delta`: motion-compensated outputs; the latter uses a larger Δt, as set in `./config/config.yaml`.
- The `.h5` file contains the fields `'x','y','p','t','l','cl','vx','vy'`. `vx` and `vy` are the continuous motion labels and `cl` is the discrete label; both are used to generate the motion segmentation output.

A faster implementation is also provided in `main_fast_single_object.py`; it only works when there is a single moving object.
```sh
python main_fast_single_object.py
```
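The labelled event stream in the output `.h5` file can be post-processed directly. The sketch below is illustrative only (it is not code from this repo and uses a synthetic stand-in array, with the assumption that `cl == 0` marks background); it shows how the documented fields could be used to split events into background and moving objects:

```python
# Illustrative sketch (not code from the repo): split labelled events by
# their discrete motion label `cl`. Field names follow the README; the
# array here is synthetic stand-in data, and cl == 0 is assumed to mean
# background.
import numpy as np

# Synthetic labelled events with the documented fields.
events = np.zeros(6, dtype=[("x", "u2"), ("y", "u2"), ("p", "i1"),
                            ("t", "u8"), ("l", "i4"), ("cl", "i4"),
                            ("vx", "f4"), ("vy", "f4")])
events["cl"] = [0, 1, 0, 2, 1, 0]           # discrete per-event labels

background = events[events["cl"] == 0]      # assumed background events
objects = events[events["cl"] != 0]         # events on moving objects

print(len(background), len(objects))        # -> 3 3
```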
This code is built on top of TokenCut, DINO, RAFT, and event_warping (our previous work). We would like to sincerely thank those authors for their great work.