DLR-RM / AugmentedAutoencoder

Official Code: Implicit 3D Orientation Learning for 6D Object Detection from RGB Images
MIT License

KeyError: 'test_dir' #11

Closed jinjuehui closed 5 years ago

jinjuehui commented 5 years ago

Hi, I'm following the documentation and have encountered a problem. In the section "Evaluate and visualize 6D pose estimation of AAE with ground truth bounding boxes", after I run the following command:

ae_eval exp_group/my_autoencoder evaluation --eval_cfg eval_group/eval_my_autoencoder.cfg

(I didn't modify the configuration file template, since estimate_bbs is already set to False.) I get the following error:

Processing: /localhome/demo/autoencoder_6d_pose_estimation/AugmentedAutoencoder_ws/cfg_eval/eval_results/hodan-iros15_tless_primesense
test_primesense
Loading object models...

Done.
{1: [5, 7, 9, 17, 18, 20], 2: [1, 12, 20, 9], 3: [9, 12, 20, 7], 4: [9, 18, 20, 5, 17], 5: [11, 2, 3, 4], 6: [2, 6], 7: [17, 2, 12, 18, 6], 8: [11, 3, 4], 9: [17, 18, 11, 12, 5], 10: [16, 11, 5], 11: [16, 3, 6], 12: [16, 3, 6], 13: [16, 19, 7], 14: [16, 19, 7], 15: [16, 19, 7], 16: [16, 19, 7], 17: [16, 19, 7], 18: [19, 3, 7], 19: [8, 10, 13, 14], 20: [8, 10, 13, 14], 21: [8, 10, 13], 22: [8, 10, 14], 23: [8, 10, 13, 14], 24: [8, 10, 19, 14], 25: [1, 15], 26: [4, 15], 27: [5, 15], 28: [4, 13, 15], 29: [1, 15], 30: [1, 19, 15]}
test_primesense
<backports.configparser.ConfigParser object at 0x7f631558cf50>
128 128 3
[[8, 8], [16, 16], [32, 32], [64, 64]]
(?, 128, 128, 3)
(?, 128, 128, 3)
2019-02-25 17:57:11.296768: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-25 17:57:11.373947: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-25 17:57:11.374378: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.506
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 2.89GiB
2019-02-25 17:57:11.374393: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-25 17:57:11.542753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-25 17:57:11.542794: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-02-25 17:57:11.542799: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-02-25 17:57:11.542967: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2019 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
11
scene_id: 11
eval_args: <backports.configparser.ConfigParser object at 0x7f631558cf50>
test_primesense
dataset_name: tless
cam_type: primesense
test_primesense

Traceback (most recent call last):
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/bin/ae_eval", line 11, in <module>
    load_entry_point('auto-pose==0.9', 'console_scripts', 'ae_eval')()
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/ae_eval.py", line 114, in main
    test_imgs = eval_utils.load_scenes(scene_id, eval_args)
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/eval_utils.py", line 152, in load_scenes
    noof_imgs = noof_scene_views(scene_id, eval_args)
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/eval_utils.py", line 138, in noof_scene_views
    noof_imgs = len(os.listdir(os.path.join(p['base_path'], p['test_dir'], '{:02d}', 'rgb').format(scene_id)))
KeyError: 'test_dir'

I set up my sixd_toolkit as follows:

pip install -r requirements.txt
in dataset_params.py:
common_base_path = '/localhome/demo/autoencoder_6d_pose_estimation/t-less/t-less_v2/t-less_v2/'
tless_tk_path = '/localhome/demo/autoencoder_6d_pose_estimation/t-less_toolkit/'

Thanks a lot if someone can help me out.

By the way, it seems there's a small mistake in "Create the evaluation config file". It should be:

mkdir $AE_WORKSPACE_PATH/cfg_eval/eval_group  (instead of ../eval_cfg/..)
cp $AE_WORKSPACE_PATH/cfg_eval/eval_template.cfg $AE_WORKSPACE_PATH/cfg_eval/eval_group/eval_my_autoencoder.cfg
gedit $AE_WORKSPACE_PATH/cfg_eval/eval_group/eval_my_autoencoder.cfg  (instead of ../cfg/..)
MartinSmeyer commented 5 years ago

Thanks a lot for your comments. Your common_base_path should be:

common_base_path = '/localhome/demo/autoencoder_6d_pose_estimation/'

and if I interpret your current path correctly, there could be a redundant t-less_v2 in "t-less/t-less_v2/t-less_v2/".

Please change the final folder structure to /localhome/demo/autoencoder_6d_pose_estimation/t-less/t-less_v2/test_primesense/...

I did change the dataset_params keys test_dir and train_dir (accessed as p['test_dir'] and p['train_dir']). Sorry for that; please do a git pull and reinstall auto_pose using

pip install --user --upgrade .

Thanks for pointing out the wrong cfg paths, I fixed it!
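For context, the failing line in eval_utils indexes the dataset parameters dict directly, so any dataset_params.py that still exposes the test directory under a different key raises exactly this KeyError. A minimal sketch of the failure mode (the alternative key name below is hypothetical, only 'test_dir' is what eval_utils expects):

```python
# Sketch of the KeyError from the traceback above. 'test_path' is a
# hypothetical older key name; eval_utils.noof_scene_views does
# os.path.join(p['base_path'], p['test_dir'], ...), so the lookup fails.
p = {'base_path': '/data/t-less_v2', 'test_path': 'test_primesense'}

try:
    p['test_dir']
except KeyError as e:
    print('missing key:', e)  # -> missing key: 'test_dir'
```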

jinjuehui commented 5 years ago

> thanks a lot for your comments. Your common_base_path should be:
>
> common_base_path = '/localhome/demo/autoencoder_6d_pose_estimation/'
>
> and if I interpret your current path correctly there could be a redundant t-less_v2: "t-less/t-less_v2/t-less_v2/"
>
> Please change the final folder structure to /localhome/demo/autoencoder_6d_pose_estimation/t-less/t-less_v2/test_primesense/...
>
> I did change the dataset_params test_dir to p['test_dir'] and train_dir to p['train_dir']. Sorry for that, please do a git pull and reinstall auto_pose using
>
> pip install --user --upgrade .
>
> Thanks for pointing out the wrong cfg paths, I fixed it!

I've pulled your branch and upgraded the package, but the problem still remains. Have you pushed your modifications? I think there is a version mismatch in the file "dataset_params.py".

After changing the keys test_dir and train_dir myself, I first encountered this problem:


Processing: /localhome/demo/autoencoder_6d_pose_estimation/AugmentedAutoencoder_ws/cfg_eval/eval_results/hodan-iros15_tless_primesense
Loading object models...

Done.
{1: [5, 7, 9, 17, 18, 20], 2: [1, 12, 20, 9], 3: [9, 12, 20, 7], 4: [9, 18, 20, 5, 17], 5: [11, 2, 3, 4], 6: [2, 6], 7: [17, 2, 12, 18, 6], 8: [11, 3, 4], 9: [17, 18, 11, 12, 5], 10: [16, 11, 5], 11: [16, 3, 6], 12: [16, 3, 6], 13: [16, 19, 7], 14: [16, 19, 7], 15: [16, 19, 7], 16: [16, 19, 7], 17: [16, 19, 7], 18: [19, 3, 7], 19: [8, 10, 13, 14], 20: [8, 10, 13, 14], 21: [8, 10, 13], 22: [8, 10, 14], 23: [8, 10, 13, 14], 24: [8, 10, 19, 14], 25: [1, 15], 26: [4, 15], 27: [5, 15], 28: [4, 13, 15], 29: [1, 15], 30: [1, 19, 15]}
<backports.configparser.ConfigParser object at 0x7f9a15d58e90>
128 128 3
[[8, 8], [16, 16], [32, 32], [64, 64]]
(?, 128, 128, 3)
(?, 128, 128, 3)
2019-03-04 14:04:51.087845: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-04 14:04:51.163306: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-04 14:04:51.163689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.506
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.04GiB
2019-03-04 14:04:51.163701: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-04 14:04:51.333752: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-04 14:04:51.333793: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-03-04 14:04:51.333797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-03-04 14:04:51.333963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2019 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
(504, 540, 720, 3)
504
(504, 540, 720, 3)
504
Traceback (most recent call last):
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/bin/ae_eval", line 11, in <module>
    load_entry_point('auto-pose==0.9', 'console_scripts', 'ae_eval')()
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/ae_eval.py", line 128, in main
    test_img_crops, test_img_depth_crops, bbs, bb_scores, visibilities = eval_utils.get_gt_scene_crops(scene_id, eval_args, train_args)
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/eval_utils.py", line 48, in get_gt_scene_crops
    visib_gt = inout.load_yaml(data_params['scene_gt_stats_mpath'].format(scene_id, delta))
  File "/localhome/demo/autoencoder_6d_pose_estimation/sixd_toolkit/pysixd/inout.py", line 19, in load_yaml
    with open(path, 'r') as f:
IOError: [Errno 2] No such file or directory: u'/localhome/demo/autoencoder_6d_pose_estimation/t-less/t-less_v2/test_primesense_gt_stats/11_delta=15.yml'

and further:

Traceback (most recent call last):
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/bin/ae_eval", line 11, in <module>
    load_entry_point('auto-pose==0.9', 'console_scripts', 'ae_eval')()
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/ae_eval.py", line 128, in main
    test_img_crops, test_img_depth_crops, bbs, bb_scores, visibilities = eval_utils.get_gt_scene_crops(scene_id, eval_args, train_args)
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/eval_utils.py", line 53, in get_gt_scene_crops
    train_args, visib_gt=visib_gt)
  File "/localhome/demo/autoencoder_6d_pose_estimation/venv/local/lib/python2.7/site-packages/auto_pose/eval/eval_utils.py", line 98, in generate_scene_crops
    vis_frac = None if estimate_bbs else visib_gt[view][bbox_idx]['visib_fract']
KeyError: 'visib_fract'
MartinSmeyer commented 5 years ago

Yes, these are the ground truth visibility statistics of t-less. You are right, they are not part of the original dataset. Download them here:

http://ptak.felk.cvut.cz/6DB/public/datasets/t-less/

And place the files at the location where they are currently not found (see the IOError path above).
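For reference, the exact path ae_eval tried to open can be reconstructed from the base path used in this thread; a small sketch to check whether a downloaded stats file is in place (scene_id=11 and delta=15 come from the missing filename in the IOError):

```python
import os

# Base path and scene id taken from the traceback earlier in this thread.
base = '/localhome/demo/autoencoder_6d_pose_estimation/t-less/t-less_v2'
scene_id, delta = 11, 15

# This is the file ae_eval failed to open; the downloaded
# NN_delta=15.yml stats files belong in test_primesense_gt_stats/.
gt_stats = os.path.join(base, 'test_primesense_gt_stats',
                        '{:02d}_delta={}.yml'.format(scene_id, delta))
print(gt_stats)
print(os.path.isfile(gt_stats))  # should be True once the file is in place
```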

MartinSmeyer commented 5 years ago

And you are right, I forgot to push the change in this repo, sorry. It's pushed now, along with added documentation for the evaluation. Thanks for going through the evaluation process; it helps others reproduce the results.

baopingli commented 4 years ago

Excuse me, can you share the content of http://ptak.felk.cvut.cz/6DB/public/datasets/t-less/? The link is no longer valid.

Pengbo-Sun commented 2 years ago

same question here

Pengbo-Sun commented 2 years ago

> Excuse me, can you share the content of http://ptak.felk.cvut.cz/6DB/public/datasets/t-less/? The link is no longer valid.

Did you find the content?

MartinSmeyer commented 2 years ago

Datasets are found here now: https://bop.felk.cvut.cz/datasets

Pengbo-Sun commented 2 years ago

> Datasets are found here now: https://bop.felk.cvut.cz/datasets

Yep, but the format of the t-less dataset is now totally different; you can see it here: http://ptak.felk.cvut.cz/6DB/public/bop_datasets/t-less_test_primesense_all.zip. The ground-truth visibility fraction is stored in scene_gt_info.json, which means we have to rewrite an input function for the new file format. Do I see that correctly? Contributions are welcome. Or is there still an old version of these visibility statistics somewhere?
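If the old eval code were adapted, something like the following could bridge the formats. This is only a sketch: it assumes scene_gt_info.json maps view ids (as string keys) to lists of per-instance dicts containing 'visib_fract', per the BOP layout; it is not code from this repo:

```python
import json

def load_visib_gt(scene_gt_info_path):
    """Read a BOP-format scene_gt_info.json and return the structure the old
    eval code indexes as visib_gt[view][bbox_idx]['visib_fract'].

    The file layout (string view ids -> lists of per-instance dicts) is an
    assumption based on the BOP dataset format.
    """
    with open(scene_gt_info_path) as f:
        info = json.load(f)
    # BOP stores view ids as strings; the old code indexes by integer view id.
    return {int(view): instances for view, instances in info.items()}
```

The returned dict could then stand in for the result of inout.load_yaml in get_gt_scene_crops.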

Pengbo-Sun commented 2 years ago

Sorry, the link should be http://ptak.felk.cvut.cz/6DB/public/bop_datasets/, and the t-less dataset is t-less_test_primesense_all.zip.

MartinSmeyer commented 2 years ago

I have updated the repo with instructions to reproduce the BOP19 results.

https://github.com/DLR-RM/AugmentedAutoencoder#reproducing-and-visualizing-bop-challenge-results

Since BOP is the standard now, it should be used for comparisons in new works.