Closed · chensong1995 closed this issue 3 years ago
Hi,
Below are the subsets of the sequences used for our evaluation (i.e., for each sequence we kept the frames from `start_time_s` to `stop_time_s`). This should correspond to the 1,670 frames mentioned in the paper.
Hope this helps!
{
    "name": "dynamic_6dof",
    "config": {
        "start_time_s": 5.0,
        "stop_time_s": 20.0
    }
},
{
    "name": "boxes_6dof",
    "config": {
        "start_time_s": 5.0,
        "stop_time_s": 20.0
    }
},
{
    "name": "poster_6dof",
    "config": {
        "start_time_s": 5.0,
        "stop_time_s": 20.0
    }
},
{
    "name": "shapes_6dof",
    "config": {
        "start_time_s": 5.0,
        "stop_time_s": 20.0
    }
},
{
    "name": "office_zigzag",
    "config": {
        "start_time_s": 5.0,
        "stop_time_s": 12.0
    }
},
{
    "name": "slider_depth",
    "config": {
        "start_time_s": 1.0,
        "stop_time_s": 2.5
    }
},
{
    "name": "calibration",
    "config": {
        "start_time_s": 5.0,
        "stop_time_s": 20.0
    }
}
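For anyone reproducing this, here is a minimal sketch of how one might apply these per-sequence windows when selecting ground-truth frames. The helper name `select_eval_frames`, the dictionary layout, and the assumption that window bounds are inclusive are my own illustration; they are not from the authors' code.

```python
# Evaluation windows copied from the config snippets above:
# sequence name -> (start_time_s, stop_time_s)
EVAL_WINDOWS = {
    "dynamic_6dof": (5.0, 20.0),
    "boxes_6dof": (5.0, 20.0),
    "poster_6dof": (5.0, 20.0),
    "shapes_6dof": (5.0, 20.0),
    "office_zigzag": (5.0, 12.0),
    "slider_depth": (1.0, 2.5),
    "calibration": (5.0, 20.0),
}

def select_eval_frames(sequence_name, frame_timestamps_s):
    """Return indices of frames whose timestamps (in seconds) fall
    inside the sequence's evaluation window.

    Hypothetical helper; assumes inclusive bounds on both ends.
    """
    start, stop = EVAL_WINDOWS[sequence_name]
    return [i for i, t in enumerate(frame_timestamps_s) if start <= t <= stop]
```

For example, with frame timestamps `[0.5, 1.0, 2.0, 3.0]` on `slider_depth` (window 1.0 to 2.5 s), only the frames at 1.0 s and 2.0 s would be kept.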
Thank you for your quick reply!
Hello there,
Thank you for your great work! I am greatly inspired by it, and I would like to test a newly developed method against E2VID on the Event Camera dataset. I noticed this short statement in your paper: "We remove the redundant sequences (e.g. that were captured in the same scene) and those for which the frame quality is poor, leaving seven sequences in total that amount to 1,670 ground-truth frames." At your convenience, would you mind sharing some details? Which frames, exactly, were removed from the .zip files?
Many thanks in advance!
Sincerely, Chen