Ziwei Wang, Liyuan Pan, Yonhon Ng, Zheyu Zhuang and Robert Mahony
The paper was accepted to the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) in Prague, Czech Republic.
If you use or discuss our SHEF algorithm, or use the dataset, please cite our paper as follows:
```bibtex
@inproceedings{wang2021stereo,
  title={Stereo hybrid event-frame (shef) cameras for 3d perception},
  author={Wang, Ziwei and Pan, Liyuan and Ng, Yonhon and Zhuang, Zheyu and Mahony, Robert},
  booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={9758--9764},
  year={2021},
  organization={IEEE}
}
```
Three scenarios: picnic, complex boxes, and simple boxes.
Each scenario includes at least 6 sequences with different camera speeds and lighting conditions.
| File | From FLIR RGB camera | From Prophesee event camera | Description |
|---|---|---|---|
| `intensity_images` | yes | no | Synchronised intensity images from the FLIR RGB camera |
| `images_ts.txt` | no | yes | Timestamps of the synchronised intensity images. We synchronise the two cameras by sending a trigger signal from the FLIR RGB camera to the event camera. |
| `log_td.dat` | no | yes | Event data, including event x, y, ts, p |
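For reference, the sketch below shows one way to parse `log_td.dat` in MATLAB. It assumes the common Prophesee DAT layout (an ASCII header of `%`-prefixed lines, two bytes giving the event type and size, then 8-byte events packing a microsecond timestamp with x, y and polarity); the function name is hypothetical and this is an illustration, not part of the released toolchain.

```matlab
% Minimal sketch, assuming the common Prophesee DAT layout: '%'-prefixed
% ASCII header lines, two bytes (event type, event size), then 8-byte events,
% each a uint32 timestamp in microseconds followed by a uint32 word packing
% x (bits 0-13), y (bits 14-27) and polarity (bits 28-31).
function [x, y, ts, p] = read_log_td(filename)
    fid = fopen(filename, 'r');
    pos = ftell(fid);
    line = fgetl(fid);
    while ischar(line) && ~isempty(line) && line(1) == '%'
        pos = ftell(fid);                    % remember start of the next line
        line = fgetl(fid);
    end
    fseek(fid, pos, 'bof');                  % rewind to the first non-header byte
    fread(fid, 2, 'uint8');                  % skip event-type and event-size bytes
    raw = fread(fid, [2, Inf], 'uint32=>uint32');   % [timestamp; packed word]
    fclose(fid);
    ts = double(raw(1, :))' * 1e-6;                             % seconds
    packed = raw(2, :)';
    x = double(bitand(packed, uint32(16383)));                  % bits 0-13
    y = double(bitand(bitshift(packed, -14), uint32(16383)));   % bits 14-27
    p = double(bitshift(packed, -28));                          % polarity bits
end
```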
./event_frame_depth_data/depth_ground_truth
./calibration_data/stereo_event_frame
./calibration_data/point_cloud
./calibration_data/ur5_pose
Run `run_disparity.m`. It will load event-frame pairs from `baseline_disparity_code/data/event_edge` and `baseline_disparity_code/data/frame_edge`.
Enter the folder `baseline_disparity_code/include` and run `evaluation.m`. It will load the estimated depth from `baseline_disparity_code/data/Dp` and the ground-truth depth from `baseline_disparity_code/data/gt`, then display the average bad-p, RMSE, and inlier-ratio performance.
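As a reference for what these metrics mean, here is a minimal MATLAB sketch of bad-p, RMSE, and inlier ratio. The function name and the two thresholds are illustrative placeholders and do not reproduce `evaluation.m` exactly.

```matlab
% Minimal sketch of the three reported metrics; tau_bad (bad-p threshold) and
% tau_inlier (inlier threshold) are illustrative, not the values in evaluation.m.
function [bad_p, rmse, inlier] = depth_metrics(d_est, d_gt, tau_bad, tau_inlier)
    valid  = isfinite(d_gt) & d_gt > 0 & isfinite(d_est);   % pixels with ground truth
    err    = abs(d_est(valid) - d_gt(valid));
    bad_p  = mean(err >  tau_bad);     % fraction of pixels with error above tau_bad
    rmse   = sqrt(mean(err .^ 2));     % root-mean-square error over valid pixels
    inlier = mean(err <= tau_inlier);  % fraction of pixels within tau_inlier
end
```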
Events are decompressed from `.raw` to `.dat` format. To convert raw data to `.dat` or `.csv` format, we used the Prophesee tools in `Prophesee_tools`. You can also install the latest Prophesee software version by following the instructions on the website. If needed, you can find all tools in `usr/share/prophesee_driver/samples/` or `usr/share/metavision/sdk/driver/samples/`, depending on which version you are using.
You can use the provided code to generate depth ground truth from the camera position and the point cloud, or you can download the example depth ground-truth images from `./event_frame_depth_data/depth_ground_truth`. A sketch of the underlying idea is given below.
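The sketch below illustrates the basic idea of generating depth ground truth from a camera pose and a point cloud: project the points into the camera with its intrinsics and keep the nearest depth per pixel. All names (`render_depth`, `T_wc`, `K`) are hypothetical, and the conventions (camera-to-world pose, pinhole intrinsics) are assumptions, not the exact provided script.

```matlab
% Minimal sketch, assuming a 4x4 camera-to-world pose T_wc, a 3x3 pinhole
% intrinsic matrix K, and an Nx3 point cloud in the world frame.
function depth = render_depth(points_w, T_wc, K, h, w)
    T_cw  = inv(T_wc);                                        % world-to-camera
    pts_c = (T_cw(1:3, 1:3) * points_w' + T_cw(1:3, 4))';     % Nx3 in camera frame
    pts_c = pts_c(pts_c(:, 3) > 0, :);                        % keep points in front
    uv    = (K * (pts_c ./ pts_c(:, 3))')';                   % project to pixels
    u = round(uv(:, 1));  v = round(uv(:, 2));  z = pts_c(:, 3);
    depth = inf(h, w);
    for i = 1:numel(z)                                        % simple z-buffer
        if u(i) >= 1 && u(i) <= w && v(i) >= 1 && v(i) <= h
            depth(v(i), u(i)) = min(depth(v(i), u(i)), z(i));
        end
    end
    depth(isinf(depth)) = 0;                                  % 0 marks empty pixels
end
```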
For academic use only. Should you have any questions regarding this paper or datasets, please contact ziwei.wang1@anu.edu.au.
The Australian National University's OneDrive policy requires the dataset link to expire after 30 days. If the link has not been renewed in time, please contact the author or make a request on GitHub.