Hello,
I keep getting this error every time I run the demo script. I already added my user to the `video` group, and I also tried running as root and with sudo, but I still get the permission error. Can someone help me?
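For reference, here is roughly what I already tried (a sketch; the device path is taken from the libEGL warning below, and whether a `render` group exists depends on the distro):

```shell
# Device node from the libEGL warning
DEV=/dev/dri/renderD128

# 1) Check which group owns the render node (on many distros it is
#    'render', not 'video')
ls -l "$DEV" 2>/dev/null || echo "cannot stat $DEV"

# 2) List the groups my user is currently in (checked after re-login)
id -nG

# 3) What I ran to add myself to the groups; needs a full logout/login
#    (or 'newgrp') before it takes effect:
#    sudo usermod -aG video,render "$USER"
```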
```
loading 3D models
libEGL warning: failed to open /dev/dri/renderD128: Permission denied
libEGL warning: failed to open /dev/dri/renderD128: Permission denied
Unable to initialize EGL
Command '['/home/kiropro/PoseCNN-PyTorch/tools/../ycb_render/build/test_device', '0']' returned non-zero exit status 1.
Command '['/home/kiropro/PoseCNN-PyTorch/tools/../ycb_render/build/test_device', '1']' died with <Signals.SIGSEGV: 11>.
Traceback (most recent call last):
  File "/home/kiropro/PoseCNN-PyTorch/./tools/test_images.py", line 176, in <module>
    cfg.renderer = YCBRenderer(width=cfg.TRAIN.SYN_WIDTH, height=cfg.TRAIN.SYN_HEIGHT, gpu_id=args.gpu_id, render_marker=False)
  File "/home/kiropro/PoseCNN-PyTorch/tools/../ycb_render/ycb_renderer.py", line 88, in __init__
    self.r = CppYCBRenderer.CppYCBRenderer(width, height, get_available_devices()[gpu_id])
IndexError: list index out of range
```
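If I am reading the traceback right, the `IndexError` is only a symptom: `get_available_devices()` presumably returns an empty list because both `test_device` probes fail on the EGL permission error, so indexing it with `gpu_id` blows up. A minimal sketch of that failure mode (the stub below is my assumption, not the real implementation):

```python
def get_available_devices():
    # Stand-in for the real ycb_render helper: when every test_device
    # probe exits non-zero or segfaults, no GPU is added to the list.
    return []

gpu_id = 0
devices = get_available_devices()
try:
    device = devices[gpu_id]  # same pattern as ycb_renderer.py line 88
except IndexError as e:
    print("IndexError:", e)  # prints "IndexError: list index out of range"
```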
Full log below:
+ set -e
./tools/test_images.py --gpu 0 --imgdir data/demo/ --meta data/demo/meta.yml --color 'color.png' --network posecnn --pretrained data/checkpoints/ycb_object/vgg16_ycb_object_self_supervision_epoch_8.checkpoint.pth --dataset ycb_object_test --cfg experiments/cfgs/ycb_object.yml ycb_video_train ycb_video_val ycb_video_keyframe ycb_video_trainval ycb_video_debug ycb_object_train ycb_object_test ycb_self_supervision_train_1 ycb_self_supervision_train_2 ycb_self_supervision_train_3 ycb_self_supervision_train_4 ycb_self_supervision_train_5 ycb_self_supervision_test ycb_self_supervision_all ycb_self_supervision_train_block_median ycb_self_supervision_train_block_median_azure ycb_self_supervision_train_block_median_demo ycb_self_supervision_train_block_median_azure_demo ycb_self_supervision_train_table ycb_self_supervision_debug ycb_self_supervision_train_block ycb_self_supervision_train_block_azure ycb_self_supervision_train_block_big_sim ycb_self_supervision_train_block_median_sim ycb_self_supervision_train_block_small_sim background_coco background_rgbd background_nvidia background_table background_isaac background_texture /home/kiropro/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. warnings.warn( /home/kiropro/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or
None
for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passingweights=None
. warnings.warn(msg) Called with args: Namespace(gpu_id=0, pretrained='data/checkpoints/ycb_object/vgg16_ycb_object_self_supervision_epoch_8.checkpoint.pth', pretrained_encoder=None, codebook=None, cfg_file='experiments/cfgs/ycb_object.yml', meta_file='data/demo/meta.yml', dataset_name='ycb_object_test', depth_name='depth.png', color_name='*color.png', imgdir='data/demo/', randomize=False, network_name='posecnn', background_name=None) /home/kiropro/PoseCNN-PyTorch/tools/../lib/fcn/config.py:377: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. yaml_cfg = edict(yaml.load(f)) /home/kiropro/PoseCNN-PyTorch/tools/../lib/fcn/config.py:386: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. yaml_cfg = edict(yaml.load(f)) {'INTRINSICS': [618.0172729492188, 0.0, 312.376953125, 0.0, 618.0033569335938, 232.37530517578125, 0.0, 0.0, 1.0]} Using config: {'ANCHOR_RATIOS': [0.5, 1, 2], 'ANCHOR_SCALES': [8, 16, 32], 'BACKGROUND': '', 'CAD': '', 'DATA_PATH': '', 'EPS': 1e-14, 'EXP_DIR': 'ycb_object', 'FEATURE_STRIDE': 16, 'FLIP_X': False, 'FLOW_HEIGHT': 512, 'FLOW_WIDTH': 640, 'GPU_ID': 0, 'INPUT': 'COLOR', 'INTRINSICS': [618.0172729492188, 0.0, 312.376953125, 0.0, 618.0033569335938, 232.37530517578125, 0.0, 0.0, 1.0], 'MODE': 'TRAIN', 'NETWORK': 'VGG16', 'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]), 'POSE': '', 'RIG': '', 'RNG_SEED': 3, 'ROOT_DIR': '/home/kiropro/PoseCNN-PyTorch', 'TEST': {'ALIGN_Z_AXIS': False, 'BBOX_REG': True, 'BUILD_CODEBOOK': False, 'CHECK_SIZE': False, 'CLASSES': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21], 'DET_THRESHOLD': 0.2, 'GAN': False, 'GEN_DATA': False, 'GLOBAL_SEARCH': False, 'GRID_SIZE': 256, 'HOUGH_INLIER_THRESHOLD': 0.9, 'HOUGH_LABEL_THRESHOLD': 400, 'HOUGH_SKIP_PIXELS': 10, 
'HOUGH_VOTING_THRESHOLD': 10, 'IMS_PER_BATCH': 1, 'ITERNUM': 4, 'MEAN_SHIFT': False, 'NMS': 0.3, 'NUM_LOST': 3, 'NUM_SDF_ITERATIONS_INIT': 100, 'NUM_SDF_ITERATIONS_TRACKING': 50, 'POSE_CODEBOOK': False, 'POSE_REFINE': True, 'POSE_REG': False, 'POSE_SDF': True, 'RANSAC': False, 'ROS_CAMERA': 'D435', 'RPN_NMS_THRESH': 0.7, 'RPN_POST_NMS_TOP_N': 300, 'RPN_PRE_NMS_TOP_N': 6000, 'SCALES_BASE': [1.0], 'SDF_ROTATION_REG': 10.0, 'SDF_TRANSLATION_REG': 1000.0, 'SEGMENTATION': True, 'SINGLE_FRAME': True, 'SYMMETRY': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], 'SYNTHESIZE': True, 'VERTEX_REG_2D': False, 'VERTEX_REG_3D': False, 'VISUALIZE': True}, 'TRAIN': {'ADAPT': False, 'ADAPT_NUM': 400, 'ADAPT_RATIO': 1, 'ADAPT_ROOT': '', 'ADAPT_WEIGHT': 0.1, 'ADD_NOISE': True, 'AFFINE': False, 'BATCH_SIZE': 128, 'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0], 'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0], 'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2], 'BBOX_NORMALIZE_TARGETS': True, 'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True, 'BETA': 0.999, 'BG_THRESH_HI': 0.5, 'BG_THRESH_LO': 0.1, 'BOOSTRAP_PIXELS': 20, 'BOX_W': 1.0, 'CHROMATIC': True, 'CLASSES': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21], 'DISPLAY': 20, 'FG_FRACTION': 0.25, 'FG_THRESH': 0.5, 'FG_THRESH_POSE': 0.5, 'FREEZE_LAYERS': True, 'GAMMA': 0.1, 'GAN': False, 'GPUNUM': 1, 'GRID_SIZE': 256, 'HARD_ANGLE': 5.0, 'HARD_LABEL_SAMPLING': 0.0, 'HARD_LABEL_THRESHOLD': 0.9, 'HAS_RPN': True, 'HEATUP': 4, 'HOUGH_INLIER_THRESHOLD': 0.9, 'HOUGH_LABEL_THRESHOLD': 100, 'HOUGH_SKIP_PIXELS': 10, 'HOUGH_VOTING_THRESHOLD': 10, 'IMS_PER_BATCH': 2, 'ITERNUM': 4, 'ITERS': 0, 'LABEL_W': 1.0, 'LEARNING_RATE': 0.001, 'MATCHING': False, 'MAX_ITERS_PER_EPOCH': 1000000, 'MILESTONES': [3], 'MOMENTUM': 0.9, 'NOISE_LEVEL': 0.05, 'NUM_STEPS': 5, 'NUM_UNITS': 64, 'OPTIMIZER': 'MOMENTUM', 'POSE_REG': True, 'POSE_W': 1.0, 'RPN_BATCHSIZE': 256, 'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0], 'RPN_CLOBBER_POSITIVES': 
False, 'RPN_FG_FRACTION': 0.5, 'RPN_NEGATIVE_OVERLAP': 0.3, 'RPN_NMS_THRESH': 0.7, 'RPN_POSITIVE_OVERLAP': 0.7, 'RPN_POSITIVE_WEIGHT': -1.0, 'RPN_POST_NMS_TOP_N': 2000, 'RPN_PRE_NMS_TOP_N': 12000, 'SCALES_BASE': [1.0], 'SEGMENTATION': True, 'SINGLE_FRAME': False, 'SLIM': False, 'SNAPSHOT_EPOCHS': 1, 'SNAPSHOT_INFIX': 'ycb_object', 'SNAPSHOT_PREFIX': 'vgg16', 'SYMMETRY': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1], 'SYMSIZE': 0, 'SYNITER': 0, 'SYNNUM': 40000, 'SYNROOT': '/home/yuxiang/Projects/Deep_Pose/data/LOV/data_syn/', 'SYNTHESIZE': True, 'SYN_BACKGROUND_AFFINE': False, 'SYN_BACKGROUND_CONSTANT_PROB': 0.1, 'SYN_BACKGROUND_SPECIFIC': True, 'SYN_BACKGROUND_SUBTRACT_MEAN': True, 'SYN_BOUND': 0.3, 'SYN_CLASS_INDEX': 1, 'SYN_CROP': False, 'SYN_CROP_SIZE': 224, 'SYN_HEIGHT': 480, 'SYN_MAX_OBJECT': 8, 'SYN_MIN_OBJECT': 5, 'SYN_ONLINE': False, 'SYN_RATIO': 5, 'SYN_SAMPLE_DISTRACTOR': True, 'SYN_SAMPLE_OBJECT': True, 'SYN_SAMPLE_POSE': False, 'SYN_STD_ROTATION': 15, 'SYN_STD_TRANSLATION': 0.05, 'SYN_TABLE_PROB': 0.8, 'SYN_TFAR': 1.6, 'SYN_TNEAR': 0.5, 'SYN_WIDTH': 640, 'TRAINABLE': True, 'UNIFORM_POSE_INTERVAL': 15, 'USE_FLIPPED': False, 'USE_GT': False, 'VERTEX_REG': True, 'VERTEX_REG_DELTA': False, 'VERTEX_W': 1.0, 'VERTEX_W_INSIDE': 10.0, 'VISUALIZE': False, 'WEIGHT_DECAY': 0.0001}, 'USE_GPU_NMS': True, 'gpu_id': 0, 'instance_id': 0} GPU device 0 /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/002_master_chef_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/003_cracker_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/004_sugar_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/005_tomato_soup_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/006_mustard_bottle/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/007_tuna_fish_can/points.xyz 
/home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/008_pudding_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/009_gelatin_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/010_potted_meat_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/011_banana/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/019_pitcher_base/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/021_bleach_cleanser/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/024_bowl/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/025_mug/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/035_power_drill/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/036_wood_block/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/037_scissors/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/040_large_marker/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/051_large_clamp/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/052_extra_large_clamp/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/061_foam_brick/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/holiday_cup1/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/holiday_cup2/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/sanning_mug/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/001_chips_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_red_big/points.xyz 
/home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_green_big/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_blue_big/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_yellow_big/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_red_small/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_green_small/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_blue_small/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_yellow_small/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_red_median/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_green_median/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_blue_median/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/block_yellow_median/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/fusion_duplo_dude/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/cabinet_handle/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/002_master_chef_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/003_cracker_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/004_sugar_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/005_tomato_soup_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/006_mustard_bottle/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/007_tuna_fish_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/008_pudding_box/points.xyz 
/home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/009_gelatin_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/010_potted_meat_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/011_banana/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/019_pitcher_base/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/021_bleach_cleanser/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/024_bowl/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/025_mug/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/035_power_drill/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/036_wood_block/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/037_scissors/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/040_large_marker/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/052_extra_large_clamp/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/061_foam_brick/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/002_master_chef_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/003_cracker_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/004_sugar_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/005_tomato_soup_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/006_mustard_bottle/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/007_tuna_fish_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/008_pudding_box/points.xyz 
/home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/009_gelatin_box/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/010_potted_meat_can/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/011_banana/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/019_pitcher_base/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/021_bleach_cleanser/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/024_bowl/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/025_mug/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/035_power_drill/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/036_wood_block/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/037_scissors/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/040_large_marker/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/052_extra_large_clamp/points.xyz /home/kiropro/PoseCNN-PyTorch/tools/../lib/datasets/../../data/models/061_foam_brick/points.xyz [[618.01727295 0. 312.37695312] [ 0. 618.00335693 232.37530518] [ 0. 0. 1. 
]] => using pre-trained network 'data/checkpoints/ycb_object/vgg16_ycb_object_self_supervision_epoch_8.checkpoint.pth' ModuleList( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU(inplace=True) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU(inplace=True) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU(inplace=True) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU(inplace=True) (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (18): ReLU(inplace=True) (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU(inplace=True) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU(inplace=True) (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (25): ReLU(inplace=True) (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (27): ReLU(inplace=True) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) ) Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): Linear(in_features=4096, out_features=4096, bias=True) (4): ReLU(inplace=True) (5): Dropout(p=0.5, inplace=False) ) model 
keysfeatures.0.weight features.0.bias features.2.weight features.2.bias features.5.weight features.5.bias features.7.weight features.7.bias features.10.weight features.10.bias features.12.weight features.12.bias features.14.weight features.14.bias features.17.weight features.17.bias features.19.weight features.19.bias features.21.weight features.21.bias features.24.weight features.24.bias features.26.weight features.26.bias features.28.weight features.28.bias classifier.0.weight classifier.0.bias classifier.3.weight classifier.3.bias conv4_embed.0.weight conv4_embed.0.bias conv5_embed.0.weight conv5_embed.0.bias conv_score.0.weight conv_score.0.bias conv4_vertex_embed.weight conv4_vertex_embed.bias conv5_vertex_embed.weight conv5_vertex_embed.bias conv_vertex_score.weight conv_vertex_score.bias fc8.0.weight fc8.0.bias fc9.weight fc9.bias fc10.weight fc10.bias
data keys
features.0.weight features.0.bias features.2.weight features.2.bias features.5.weight features.5.bias features.7.weight features.7.bias features.10.weight features.10.bias features.12.weight features.12.bias features.14.weight features.14.bias features.17.weight features.17.bias features.19.weight features.19.bias features.21.weight features.21.bias features.24.weight features.24.bias features.26.weight features.26.bias features.28.weight features.28.bias classifier.0.weight classifier.0.bias classifier.3.weight classifier.3.bias conv4_embed.0.weight conv4_embed.0.bias conv5_embed.0.weight conv5_embed.0.bias conv_score.0.weight conv_score.0.bias conv4_vertex_embed.weight conv4_vertex_embed.bias conv5_vertex_embed.weight conv5_vertex_embed.bias conv_vertex_score.weight conv_vertex_score.bias fc8.0.weight fc8.0.bias fc9.weight fc9.bias fc10.weight fc10.bias
load the following keys from the pretrained model
features.0.weight features.0.bias features.2.weight features.2.bias features.5.weight features.5.bias features.7.weight features.7.bias features.10.weight features.10.bias features.12.weight features.12.bias features.14.weight features.14.bias features.17.weight features.17.bias features.19.weight features.19.bias features.21.weight features.21.bias features.24.weight features.24.bias features.26.weight features.26.bias features.28.weight features.28.bias classifier.0.weight classifier.0.bias classifier.3.weight classifier.3.bias conv4_embed.0.weight conv4_embed.0.bias conv5_embed.0.weight conv5_embed.0.bias conv_score.0.weight conv_score.0.bias conv4_vertex_embed.weight conv4_vertex_embed.bias conv5_vertex_embed.weight conv5_vertex_embed.bias conv_vertex_score.weight conv_vertex_score.bias fc8.0.weight fc8.0.bias fc9.weight fc9.bias fc10.weight fc10.bias
loading 3D models
libEGL warning: failed to open /dev/dri/renderD128: Permission denied
libEGL warning: failed to open /dev/dri/renderD128: Permission denied
Unable to initialize EGL
Command '['/home/kiropro/PoseCNN-PyTorch/tools/../ycb_render/build/test_device', '0']' returned non-zero exit status 1.
Command '['/home/kiropro/PoseCNN-PyTorch/tools/../ycb_render/build/test_device', '1']' died with <Signals.SIGSEGV: 11>.
Traceback (most recent call last):
  File "/home/kiropro/PoseCNN-PyTorch/./tools/test_images.py", line 176, in <module>
cfg.renderer = YCBRenderer(width=cfg.TRAIN.SYN_WIDTH, height=cfg.TRAIN.SYN_HEIGHT, gpu_id=args.gpu_id, render_marker=False)
  File "/home/kiropro/PoseCNN-PyTorch/tools/../ycb_render/ycb_renderer.py", line 88, in __init__
self.r = CppYCBRenderer.CppYCBRenderer(width, height, get_available_devices()[gpu_id])
IndexError: list index out of range
real	0m4.825s
user	0m3.074s
sys	0m3.702s