Closed chenleexyz closed 4 years ago
I am sorry for the slow reply; I was busy moving to another country and am only now catching up with messages. Please find our suggested difficulty mapping for the datasets below (in Python format). Note that this applies only to the modalities used in the paper; if one additionally used the IMU data, or used only the color data but not the depth data, the difficulty estimates might differ.
# Mapping from dataset name to difficulty level:
# 0: easy
# 1: medium
# 2: hard
# 3: structure-from-motion
dataset_to_difficulty = {
    'kidnap_2': 2,
    'kidnap_1': 2,
    'einstein_global_light_changes_3': 1,
    'einstein_global_light_changes_2': 0,
    'einstein_global_light_changes_1': 1,
    'table_local_light_changes': 1,
    'foreground_occlusion': 1,
    'table_1': 0,
    'large_loop_3': 2,
    'motion_1': 2,
    'desk_dark_1': 1,
    'camera_shake_1': 2,
    'camera_shake_3': 2,
    'camera_shake_2': 2,
    'cables_5': 2,
    'cables_4': 2,
    'sofa_dark_2': 1,
    'sofa_dark_3': 1,
    'sofa_dark_1': 1,
    'large_non_loop': 2,
    'desk_2': 2,
    'boxes_dark': 1,
    'repetitive': 1,
    'boxes': 0,
    'planar_1': 2,
    'table_3': 0,
    'table_4': 0,
    'large_loop_1': 2,
    'drone': 0,
    'kidnap_dark': 2,
    'plant_dark': 1,
    'planar_2': 0,
    'trashbin': 0,
    'planar_3': 1,
    'table_global_light_changes': 1,
    'einstein_flashlight': 1,
    'table_2': 1,
    'scale_change': 2,
    'sofa_shake': 1,
    'mannequin_face_2': 0,
    'lamp': 2,
    'reflective_1': 2,
    'desk_1': 0,
    'table_7': 0,
    'table_scene': 1,
    'sofa_4': 1,
    'buddha': 0,
    'dino': 1,
    'sofa_1': 1,
    'sofa_2': 1,
    'sofa_3': 1,
    'desk_ir_light': 0,
    'desk_changing_1': 1,
    'einstein_2': 1,
    'einstein_1': 0,
    'motion_3': 2,
    'helmet': 0,
    'ceiling_1': 2,
    'desk_dark_2': 2,
    'ceiling_2': 2,
    'plant_scene_2': 1,
    'plant_scene_3': 1,
    'vicon_light_2': 2,
    'plant_scene_1': 1,
    'mannequin_1': 1,
    'mannequin_7': 2,
    'cables_3': 1,
    'cables_2': 1,
    'mannequin_4': 2,
    'mannequin_5': 2,
    'mannequin_3': 1,
    'einstein_dark': 1,
    'desk_global_light_changes': 1,
    'desk_3': 2,
    'sfm_lab_room_1': 3,
    'sfm_lab_room_2': 3,
    'vicon_light_1': 1,
    'desk_changing_2': 1,
    'motion_4': 2,
    'motion_2': 1,
    'sfm_bench': 3,
    'plant_4': 0,
    'plant_5': 0,
    'plant_1': 0,
    'plant_2': 0,
    'plant_3': 0,
    'sfm_house_loop': 3,
    'mannequin_face_1': 0,
    'mannequin_face_3': 2,
    'cables_1': 0,
    'table_6': 0,
    'table_5': 0,
    'large_loop_2': 2,
    'sfm_garden': 3,
    'reflective_2': 2,
    'mannequin_head': 2,
}
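If it helps, the mapping is easy to invert to get the list of datasets per category. A minimal sketch (using only a small illustrative excerpt of the full dictionary; the variable names are just for this example):

```python
from collections import defaultdict

# Human-readable names for the integer difficulty levels above
difficulty_names = {0: 'easy', 1: 'medium', 2: 'hard', 3: 'structure-from-motion'}

# Small excerpt of the full dataset_to_difficulty mapping, for illustration
dataset_to_difficulty = {
    'table_1': 0,
    'sofa_dark_1': 1,
    'kidnap_2': 2,
    'sfm_bench': 3,
}

# Invert the mapping: difficulty label -> list of dataset names
difficulty_to_datasets = defaultdict(list)
for name, level in dataset_to_difficulty.items():
    difficulty_to_datasets[difficulty_names[level]].append(name)

print(dict(difficulty_to_datasets))
```

Running this over the full dictionary gives the complete easy/medium/hard/SfM partition.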
Thanks for your reply!
I noticed the paper partitions the benchmark datasets into three categories (easy, medium, hard), but where can I find which dataset belongs to which category?