shanice-l / gdrnpp_bop2022

PyTorch Implementation of GDRNPP, winner (most of the awards) of the BOP Challenge 2022 at ECCV'22
Apache License 2.0

ValueError: invalid literal for int() with base 10: 'post1' #94

Open jfitzg7 opened 6 months ago

jfitzg7 commented 6 months ago

I ran into this error while running the command ./core/gdrn_modeling/train_gdrn.sh configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py 0,1 --strategy ddp --eval-only. The failure appears to come from imageio:

[1228_181318@eval_calc_errors.py:284] Calculating error vsd - method: convnext-a6-AugCosyAAEGray-BG05-mlL1-DMask-amodalClipBox-classAware-ycbv-test-iter0, dataset: ycbv, scene: 48, im: 0
> /home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/plugins/pillow.py(48)pillow_version()                                                  
-> return tuple(int(x) for x in pil_version.split("."))                                                                                                       
(Pdb) x for x in pil_version.split(".")                                                                                                                       
*** SyntaxError: invalid syntax                                                                                                                               
(Pdb) [x for x in pil_version.split(".")]                                                                                                                     
['9', '0', '0', 'post1']                                                                                                                                      
(Pdb) c                                                                                                                                                       
Traceback (most recent call last):                                                                                                                            
  File "/data2/6d-pose-estimation/gdrnpp_bop2022/lib/pysixd/scripts/eval_calc_errors.py", line 303, in <module>                                               
    depth_im = inout.load_depth(depth_path)                                                                                                                   
  File "/data2/6d-pose-estimation/gdrnpp_bop2022/lib/pysixd/scripts/../../../lib/pysixd/inout.py", line 56, in load_depth                                     
    d = imageio.imread(path)                                                                                                                                  
  File "/home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/__init__.py", line 97, in imread                                                 
    return imread_v2(uri, format=format, **kwargs)                                                                                                            
  File "/home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/v2.py", line 360, in imread                                                      
    result = file.read(index=0, **kwargs)                                                                                                                     
  File "/home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/plugins/pillow.py", line 254, in read                                            
    image = self._apply_transforms(                                                                                                                           
  File "/home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/plugins/pillow.py", line 313, in _apply_transforms                               
    major, minor, patch = pillow_version()                                                                                                                    
  File "/home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/plugins/pillow.py", line 48, in pillow_version                                   
    return tuple(int(x) for x in pil_version.split("."))                                                                                                      
  File "/home/jack/anaconda3/envs/gdrnpp/lib/python3.8/site-packages/imageio/plugins/pillow.py", line 48, in <genexpr>                                        
    return tuple(int(x) for x in pil_version.split("."))                                                                                                      
ValueError: invalid literal for int() with base 10: 'post1'                                                                                                   
Traceback (most recent call last):                                                                                                                            
  File "lib/pysixd/scripts/eval_pose_results_more.py", line 301, in <module>                                                                                  
    raise RuntimeError("Calculation of pose errors failed.")                                                                                                  
RuntimeError: Calculation of pose errors failed. 

The line of code return tuple(int(x) for x in pil_version.split(".")) parses the Pillow version string, which here is 9.0.0.post1; the trailing post1 component is not an integer, so int() raises the ValueError above.
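The failure is easy to reproduce in isolation. The sketch below mimics imageio's parsing and shows a tolerant variant that stops at the first non-numeric component; numeric_prefix is a hypothetical helper for illustration, not imageio's actual fix:

```python
pil_version = "9.0.0.post1"  # the Pillow version string from the report

# imageio's pillow_version() does the equivalent of this, which raises:
try:
    tuple(int(x) for x in pil_version.split("."))
except ValueError as e:
    print(e)  # invalid literal for int() with base 10: 'post1'

# Hypothetical tolerant variant: keep only the leading numeric components.
def numeric_prefix(version: str) -> tuple:
    parts = []
    for piece in version.split("."):
        if not piece.isdigit():
            break  # stop at 'post1', 'rc1', etc.
        parts.append(int(piece))
    return tuple(parts)

print(numeric_prefix(pil_version))  # (9, 0, 0)
```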

I've downgraded to imageio==2.23.0, which works, but I chose that version arbitrarily. It may be worth pinning imageio to a known-working version in requirements.txt, since the latest release (2.33.1) throws this error.
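A lightweight guard along these lines could catch the mismatch at startup instead of partway through evaluation. This is only a sketch: check_imageio is a hypothetical helper, and 2.23.0 is simply the version reported working in this thread, not a tested upper bound:

```python
from importlib.metadata import PackageNotFoundError, version

KNOWN_GOOD = (2, 23, 0)  # imageio version reported to work in this thread


def release(ver: str) -> tuple:
    """Leading numeric components of a version string ('2.33.1' -> (2, 33, 1))."""
    parts = []
    for piece in ver.split("."):
        if not piece.isdigit():
            break  # ignore suffixes such as 'post1' or 'rc2'
        parts.append(int(piece))
    return tuple(parts)


def check_imageio() -> None:
    """Warn early if the installed imageio is newer than the known-good pin."""
    try:
        installed = release(version("imageio"))
    except PackageNotFoundError:
        return  # imageio not installed; nothing to check
    if installed > KNOWN_GOOD:
        print("warning: imageio %s is newer than the known-good 2.23.0 "
              "and may hit the Pillow 'post1' parsing bug"
              % ".".join(map(str, installed)))


check_imageio()
```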

monajalal commented 6 months ago

@jfitzg7 When I run the command, I get the error below. Could you please show where you downloaded the image_sets folder of ycbv? I don't have it.

(gdrnpp) mona@ada:~/gdrnpp_bop2022$ ./core/gdrn_modeling/train_gdrn.sh configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py 0  --eval-only 
++ dirname ./core/gdrn_modeling/train_gdrn.sh
+ this_dir=./core/gdrn_modeling
+ CFG=configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py
+ CUDA_VISIBLE_DEVICES=0
+ IFS=,
+ read -ra GPUS
+ NGPU=1
+ echo 'use gpu ids: 0 num gpus: 1'
use gpu ids: 0 num gpus: 1
+ NCCL_DEBUG=INFO
+ OMP_NUM_THREADS=1
+ MKL_NUM_THREADS=1
+ PYTHONPATH=./core/gdrn_modeling/../..:/home/mona/realsense-ros/install/realsense2_camera_msgs/local/lib/python3.10/dist-packages:/opt/ros/humble/lib/python3.10/site-packages:/opt/ros/humble/local/lib/python3.10/dist-packages
+ CUDA_VISIBLE_DEVICES=0
+ python ./core/gdrn_modeling/main_gdrn.py --config-file configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py --num-gpus 1 --eval-only
/home/mona/.local/lib/python3.10/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
You requested to import horovod which is missing or not supported for your OS.
/home/mona/.local/lib/python3.10/site-packages/mmcv/device/npu/data_parallel.py:22: UserWarning: Torchaudio's I/O functions now support per-call backend dispatch. Importing backend implementation directly is no longer guaranteed to work. Please use `backend` keyword with load/save/info function, instead of calling the underlying implementation directly.
  if hasattr(sys.modules[m], '_check_balance'):
/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../lib/pysixd/misc.py:586: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def get_obj_im_c(K, t):
/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../lib/pysixd/misc.py:765: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def compute_2d_bbox_xyxy_from_pose(points, pose, K, width=640, height=480, clip=False):
/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../lib/pysixd/misc.py:793: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def compute_2d_bbox_xyxy_from_pose_v2(points, pose, K, width=640, height=480, clip=False):
/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../lib/pysixd/misc.py:822: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def compute_2d_bbox_xywh_from_pose(points, pose, K, width=640, height=480, clip=False):
[0102_233057@main_gdrn:216] soft limit:  500000 hard limit:  1048576
[0102_233057@main_gdrn:227] Command Line Args: Namespace(config_file='configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py', resume=False, eval_only=True, launcher='none', local_rank=0, fp16_allreduce=False, use_adasum=False, num_gpus=1, num_machines=1, machine_rank=0, dist_url='tcp://127.0.0.1:50154', opts=None, strategy=None)
[0102_233057@main_gdrn:101] optimizer_cfg: {'type': 'Ranger', 'lr': 0.0008, 'weight_decay': 0.01}
[0102_233057@ycbv_d2:594] DBG register dataset: ycbv_train_real
[0102_233057@ycbv_pbr:359] DBG register dataset: ycbv_train_pbr
[0102_233057@ycbv_d2:594] DBG register dataset: ycbv_test
20240102_103057|core.utils.default_args_setup@123: Rank of current process: 0. World size: 1
20240102_103057|core.utils.default_args_setup@124: Environment info:
-------------------------------  -----------------------------------------------------------------------------
sys.platform                     linux
Python                           3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0]
numpy                            1.26.2
detectron2                       0.6 @/home/mona/anaconda3/envs/gdrnpp/lib/python3.10/site-packages/detectron2
Compiler                         GCC 11.4
CUDA compiler                    CUDA 11.8
detectron2 arch flags            8.9
DETECTRON2_ENV_MODULE            <not set>
PyTorch                          2.1.2+cu118 @/home/mona/.local/lib/python3.10/site-packages/torch
PyTorch debug build              False
torch._C._GLIBCXX_USE_CXX11_ABI  False
GPU available                    Yes
GPU 0                            NVIDIA RTX 6000 Ada Generation (arch=8.9)
Driver version                   535.104.12
CUDA_HOME                        /usr/local/cuda-11.8
Pillow                           9.0.0.post1
torchvision                      0.16.2+cu118 @/home/mona/.local/lib/python3.10/site-packages/torchvision
torchvision arch flags           3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
fvcore                           0.1.5.post20221221
iopath                           0.1.9
cv2                              4.8.1
-------------------------------  -----------------------------------------------------------------------------
PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.1.1 (Git Hash 64f6bcbcbab628e96f33a62c3e975f8535a7bde4)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.7
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

20240102_103057|core.utils.default_args_setup@126: Command line arguments: Namespace(config_file='configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py', resume=False, eval_only=True, launcher='none', local_rank=0, fp16_allreduce=False, use_adasum=False, num_gpus=1, num_machines=1, machine_rank=0, dist_url='tcp://127.0.0.1:50154', opts=None, strategy=None)
20240102_103057|core.utils.default_args_setup@128: Contents of args.config_file=configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py:
# about 3 days
_base_ = ["../../_base_/gdrn_base.py"]

OUTPUT_DIR = "output/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv"
INPUT = dict(
    DZI_PAD_SCALE=1.5,
    TRUNCATE_FG=True,
    CHANGE_BG_PROB=0.5,
    COLOR_AUG_PROB=0.8,
    COLOR_AUG_TYPE="code",
    COLOR_AUG_CODE=(
        "Sequential(["
        # Sometimes(0.5, PerspectiveTransform(0.05)),
        # Sometimes(0.5, CropAndPad(percent=(-0.05, 0.1))),
        # Sometimes(0.5, Affine(scale=(1.0, 1.2))),
        "Sometimes(0.5, CoarseDropout( p=0.2, size_percent=0.05) ),"
        "Sometimes(0.4, GaussianBlur((0., 3.))),"
        "Sometimes(0.3, pillike.EnhanceSharpness(factor=(0., 50.))),"
        "Sometimes(0.3, pillike.EnhanceContrast(factor=(0.2, 50.))),"
        "Sometimes(0.5, pillike.EnhanceBrightness(factor=(0.1, 6.))),"
        "Sometimes(0.3, pillike.EnhanceColor(factor=(0., 20.))),"
        "Sometimes(0.5, Add((-25, 25), per_channel=0.3)),"
        "Sometimes(0.3, Invert(0.2, per_channel=True)),"
        "Sometimes(0.5, Multiply((0.6, 1.4), per_channel=0.5)),"
        "Sometimes(0.5, Multiply((0.6, 1.4))),"
        "Sometimes(0.1, AdditiveGaussianNoise(scale=10, per_channel=True)),"
        "Sometimes(0.5, iaa.contrast.LinearContrast((0.5, 2.2), per_channel=0.3)),"
        "Sometimes(0.5, Grayscale(alpha=(0.0, 1.0))),"  # maybe remove for det
        "], random_order=True)"
        # cosy+aae
    ),
)

SOLVER = dict(
    IMS_PER_BATCH=48,
    TOTAL_EPOCHS=40,  # 10
    LR_SCHEDULER_NAME="flat_and_anneal",
    ANNEAL_METHOD="cosine",  # "cosine"
    ANNEAL_POINT=0.72,
    OPTIMIZER_CFG=dict(_delete_=True, type="Ranger", lr=8e-4, weight_decay=0.01),
    WEIGHT_DECAY=0.0,
    WARMUP_FACTOR=0.001,
    WARMUP_ITERS=1000,
)

DATASETS = dict(
    TRAIN=("ycbv_train_real", "ycbv_train_pbr"),
    TEST=("ycbv_test",),
    DET_FILES_TEST=("datasets/BOP_DATASETS/ycbv/test/test_bboxes/yolox_x_640_ycbv_real_pbr_ycbv_bop_test.json",),
    SYM_OBJS=[
        "024_bowl",
        "036_wood_block",
        "051_large_clamp",
        "052_extra_large_clamp",
        "061_foam_brick",
    ],  # used for custom evaluator
)

DATALOADER = dict(
    # Number of data loading threads
    NUM_WORKERS=8,
    FILTER_VISIB_THR=0.3,
)

MODEL = dict(
    LOAD_DETS_TEST=True,
    PIXEL_MEAN=[0.0, 0.0, 0.0],
    PIXEL_STD=[255.0, 255.0, 255.0],
    BBOX_TYPE="AMODAL_CLIP",  # VISIB or AMODAL
    POSE_NET=dict(
        NAME="GDRN_double_mask",
        XYZ_ONLINE=True,
        NUM_CLASSES=21,
        BACKBONE=dict(
            FREEZE=False,
            PRETRAINED="timm",
            INIT_CFG=dict(
                type="timm/convnext_base",
                pretrained=True,
                in_chans=3,
                features_only=True,
                out_indices=(3,),
            ),
        ),
        ## geo head: Mask, XYZ, Region
        GEO_HEAD=dict(
            FREEZE=False,
            INIT_CFG=dict(
                type="TopDownDoubleMaskXyzRegionHead",
                in_dim=1024,  # this is num out channels of backbone conv feature
            ),
            NUM_REGIONS=64,
            XYZ_CLASS_AWARE=True,
            MASK_CLASS_AWARE=True,
            REGION_CLASS_AWARE=True,
        ),
        PNP_NET=dict(
            INIT_CFG=dict(norm="GN", act="gelu"),
            REGION_ATTENTION=True,
            WITH_2D_COORD=True,
            ROT_TYPE="allo_rot6d",
            TRANS_TYPE="centroid_z",
        ),
        LOSS_CFG=dict(
            # xyz loss ----------------------------
            XYZ_LOSS_TYPE="L1",  # L1 | CE_coor
            XYZ_LOSS_MASK_GT="visib",  # trunc | visib | obj
            XYZ_LW=1.0,
            # mask loss ---------------------------
            MASK_LOSS_TYPE="L1",  # L1 | BCE | CE
            MASK_LOSS_GT="trunc",  # trunc | visib | gt
            MASK_LW=1.0,
            # full mask loss ---------------------------
            FULL_MASK_LOSS_TYPE="L1",  # L1 | BCE | CE
            FULL_MASK_LW=1.0,
            # region loss -------------------------
            REGION_LOSS_TYPE="CE",  # CE
            REGION_LOSS_MASK_GT="visib",  # trunc | visib | obj
            REGION_LW=1.0,
            # pm loss --------------
            PM_LOSS_SYM=True,  # NOTE: sym loss
            PM_R_ONLY=True,  # only do R loss in PM
            PM_LW=1.0,
            # centroid loss -------
            CENTROID_LOSS_TYPE="L1",
            CENTROID_LW=1.0,
            # z loss -----------
            Z_LOSS_TYPE="L1",
            Z_LW=1.0,
        ),
    ),
)

VAL = dict(
    DATASET_NAME="ycbv",
    SPLIT_TYPE="",
    SCRIPT_PATH="lib/pysixd/scripts/eval_pose_results_more.py",
    TARGETS_FILENAME="test_targets_bop19.json",
    ERROR_TYPES="vsd,mspd,mssd",
    USE_BOP=True,  # whether to use bop toolkit
)

TEST = dict(EVAL_PERIOD=0, VIS=False, TEST_BBOX_TYPE="est")  # gt | est

20240102_103057|core.utils.default_args_setup@135: Running with full config:
Config (path: configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py): {'OUTPUT_ROOT': 'output', 'OUTPUT_DIR': 'output/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv', 'EXP_NAME': '', 'DEBUG': False, 'SEED': -1, 'CUDNN_BENCHMARK': True, 'IM_BACKEND': 'cv2', 'VIS_PERIOD': 0, 'INPUT': {'FORMAT': 'BGR', 'MIN_SIZE_TRAIN': 480, 'MAX_SIZE_TRAIN': 640, 'MIN_SIZE_TRAIN_SAMPLING': 'choice', 'MIN_SIZE_TEST': 480, 'MAX_SIZE_TEST': 640, 'WITH_DEPTH': False, 'BP_DEPTH': False, 'AUG_DEPTH': False, 'NORM_DEPTH': False, 'DROP_DEPTH_RATIO': 0.2, 'DROP_DEPTH_PROB': 0.5, 'ADD_NOISE_DEPTH_LEVEL': 0.01, 'ADD_NOISE_DEPTH_PROB': 0.9, 'COLOR_AUG_PROB': 0.8, 'COLOR_AUG_TYPE': 'code', 'COLOR_AUG_CODE': 'Sequential([Sometimes(0.5, CoarseDropout( p=0.2, size_percent=0.05) ),Sometimes(0.4, GaussianBlur((0., 3.))),Sometimes(0.3, pillike.EnhanceSharpness(factor=(0., 50.))),Sometimes(0.3, pillike.EnhanceContrast(factor=(0.2, 50.))),Sometimes(0.5, pillike.EnhanceBrightness(factor=(0.1, 6.))),Sometimes(0.3, pillike.EnhanceColor(factor=(0., 20.))),Sometimes(0.5, Add((-25, 25), per_channel=0.3)),Sometimes(0.3, Invert(0.2, per_channel=True)),Sometimes(0.5, Multiply((0.6, 1.4), per_channel=0.5)),Sometimes(0.5, Multiply((0.6, 1.4))),Sometimes(0.1, AdditiveGaussianNoise(scale=10, per_channel=True)),Sometimes(0.5, iaa.contrast.LinearContrast((0.5, 2.2), per_channel=0.3)),Sometimes(0.5, Grayscale(alpha=(0.0, 1.0))),], random_order=True)', 'COLOR_AUG_SYN_ONLY': False, 'RANDOM_FLIP': 'none', 'WITH_BG_DEPTH': False, 'BG_DEPTH_FACTOR': 10000.0, 'BG_TYPE': 'VOC_table', 'BG_IMGS_ROOT': 'datasets/VOCdevkit/VOC2012/', 'NUM_BG_IMGS': 10000, 'CHANGE_BG_PROB': 0.5, 'TRUNCATE_FG': True, 'BG_KEEP_ASPECT_RATIO': True, 'DZI_TYPE': 'uniform', 'DZI_PAD_SCALE': 1.5, 'DZI_SCALE_RATIO': 0.25, 'DZI_SHIFT_RATIO': 0.25, 'SMOOTH_XYZ': False}, 'DATASETS': {'TRAIN': ('ycbv_train_real', 'ycbv_train_pbr'), 'TRAIN2': (), 'TRAIN2_RATIO': 0.0, 
'DATA_LEN_WITH_TRAIN2': True, 'PROPOSAL_FILES_TRAIN': (), 'PRECOMPUTED_PROPOSAL_TOPK_TRAIN': 2000, 'TEST': ('ycbv_test',), 'PROPOSAL_FILES_TEST': (), 'PRECOMPUTED_PROPOSAL_TOPK_TEST': 1000, 'DET_FILES_TRAIN': (), 'DET_TOPK_PER_OBJ_TRAIN': 1, 'DET_TOPK_PER_IM_TRAIN': 30, 'DET_THR_TRAIN': 0.0, 'DET_FILES_TEST': ('datasets/BOP_DATASETS/ycbv/test/test_bboxes/yolox_x_640_ycbv_real_pbr_ycbv_bop_test.json',), 'DET_TOPK_PER_OBJ': 1, 'DET_TOPK_PER_IM': 30, 'DET_THR': 0.0, 'INIT_POSE_FILES_TEST': (), 'INIT_POSE_TOPK_PER_OBJ': 1, 'INIT_POSE_TOPK_PER_IM': 30, 'INIT_POSE_THR': 0.0, 'SYM_OBJS': ['024_bowl', '036_wood_block', '051_large_clamp', '052_extra_large_clamp', '061_foam_brick'], 'EVAL_SCENE_IDS': None}, 'DATALOADER': {'NUM_WORKERS': 8, 'PERSISTENT_WORKERS': False, 'MAX_OBJS_TRAIN': 120, 'ASPECT_RATIO_GROUPING': False, 'SAMPLER_TRAIN': 'TrainingSampler', 'REPEAT_THRESHOLD': 0.0, 'FILTER_EMPTY_ANNOTATIONS': True, 'FILTER_EMPTY_DETS': True, 'FILTER_VISIB_THR': 0.3, 'REMOVE_ANNO_KEYS': []}, 'SOLVER': {'IMS_PER_BATCH': 48, 'REFERENCE_BS': 48, 'TOTAL_EPOCHS': 40, 'OPTIMIZER_CFG': {'type': 'Ranger', 'lr': 0.0008, 'weight_decay': 0.01}, 'GAMMA': 0.1, 'BIAS_LR_FACTOR': 1.0, 'LR_SCHEDULER_NAME': 'flat_and_anneal', 'WARMUP_METHOD': 'linear', 'WARMUP_FACTOR': 0.001, 'WARMUP_ITERS': 1000, 'ANNEAL_METHOD': 'cosine', 'ANNEAL_POINT': 0.72, 'POLY_POWER': 0.9, 'REL_STEPS': (0.5, 0.75), 'CHECKPOINT_PERIOD': 5, 'CHECKPOINT_BY_EPOCH': True, 'MAX_TO_KEEP': 5, 'CLIP_GRADIENTS': {'ENABLED': False, 'CLIP_TYPE': 'value', 'CLIP_VALUE': 1.0, 'NORM_TYPE': 2.0}, 'SET_NAN_GRAD_TO_ZERO': False, 'AMP': {'ENABLED': False}, 'WEIGHT_DECAY': 0.01, 'OPTIMIZER_NAME': 'Ranger', 'BASE_LR': 0.0008, 'MOMENTUM': 0.9}, 'TRAIN': {'PRINT_FREQ': 100, 'VERBOSE': False, 'VIS': False, 'VIS_IMG': False}, 'VAL': {'DATASET_NAME': 'ycbv', 'SCRIPT_PATH': 'lib/pysixd/scripts/eval_pose_results_more.py', 'RESULTS_PATH': '', 'TARGETS_FILENAME': 'test_targets_bop19.json', 'ERROR_TYPES': 'vsd,mspd,mssd', 'RENDERER_TYPE': 'cpp', 
'SPLIT': 'test', 'SPLIT_TYPE': '', 'N_TOP': 1, 'EVAL_CACHED': False, 'SCORE_ONLY': False, 'EVAL_PRINT_ONLY': False, 'EVAL_PRECISION': False, 'USE_BOP': True, 'SAVE_BOP_CSV_ONLY': False}, 'TEST': {'EVAL_PERIOD': 0, 'VIS': False, 'TEST_BBOX_TYPE': 'est', 'PRECISE_BN': {'ENABLED': False, 'NUM_ITER': 200}, 'AMP_TEST': False, 'COLOR_AUG': False, 'USE_PNP': False, 'SAVE_RESULTS_ONLY': False, 'PNP_TYPE': 'ransac_pnp', 'USE_DEPTH_REFINE': False, 'DEPTH_REFINE_ITER': 2, 'DEPTH_REFINE_THRESHOLD': 0.8, 'USE_COOR_Z_REFINE': False}, 'DIST_PARAMS': {'backend': 'nccl'}, 'MODEL': {'DEVICE': 'cuda', 'WEIGHTS': '', 'PIXEL_MEAN': [0.0, 0.0, 0.0], 'PIXEL_STD': [255.0, 255.0, 255.0], 'LOAD_DETS_TEST': True, 'BBOX_CROP_REAL': False, 'BBOX_CROP_SYN': False, 'BBOX_TYPE': 'AMODAL_CLIP', 'EMA': {'ENABLED': False, 'INIT_CFG': {'decay': 0.9999, 'updates': 0}}, 'POSE_NET': {'NAME': 'GDRN_double_mask', 'XYZ_ONLINE': True, 'XYZ_BP': True, 'NUM_CLASSES': 21, 'USE_MTL': False, 'INPUT_RES': 256, 'OUTPUT_RES': 64, 'BACKBONE': {'FREEZE': False, 'PRETRAINED': 'timm', 'INIT_CFG': {'type': 'timm/convnext_base', 'in_chans': 3, 'features_only': True, 'pretrained': True, 'out_indices': (3,)}}, 'DEPTH_BACKBONE': {'ENABLED': False, 'FREEZE': False, 'PRETRAINED': 'timm', 'INIT_CFG': {'type': 'timm/resnet18', 'in_chans': 1, 'features_only': True, 'pretrained': True, 'out_indices': (4,)}}, 'FUSE_RGBD_TYPE': 'cat', 'NECK': {'ENABLED': False, 'FREEZE': False, 'LR_MULT': 1.0, 'INIT_CFG': {'type': 'FPN', 'in_channels': [256, 512, 1024, 2048], 'out_channels': 256, 'num_outs': 4}}, 'GEO_HEAD': {'FREEZE': False, 'LR_MULT': 1.0, 'INIT_CFG': {'type': 'TopDownDoubleMaskXyzRegionHead', 'in_dim': 1024, 'up_types': ('deconv', 'bilinear', 'bilinear'), 'deconv_kernel_size': 3, 'num_conv_per_block': 2, 'feat_dim': 256, 'feat_kernel_size': 3, 'norm': 'GN', 'num_gn_groups': 32, 'act': 'GELU', 'out_kernel_size': 1, 'out_layer_shared': True}, 'XYZ_BIN': 64, 'XYZ_CLASS_AWARE': True, 'MASK_CLASS_AWARE': True, 'REGION_CLASS_AWARE': 
True, 'MASK_THR_TEST': 0.5, 'NUM_REGIONS': 64}, 'PNP_NET': {'FREEZE': False, 'LR_MULT': 1.0, 'INIT_CFG': {'type': 'ConvPnPNet', 'norm': 'GN', 'act': 'gelu', 'num_gn_groups': 32, 'drop_prob': 0.0, 'denormalize_by_extent': True}, 'WITH_2D_COORD': True, 'COORD_2D_TYPE': 'abs', 'REGION_ATTENTION': True, 'MASK_ATTENTION': 'none', 'ROT_TYPE': 'allo_rot6d', 'TRANS_TYPE': 'centroid_z', 'Z_TYPE': 'REL'}, 'LOSS_CFG': {'XYZ_LOSS_TYPE': 'L1', 'XYZ_LOSS_MASK_GT': 'visib', 'XYZ_LW': 1.0, 'FULL_MASK_LOSS_TYPE': 'L1', 'FULL_MASK_LW': 1.0, 'MASK_LOSS_TYPE': 'L1', 'MASK_LOSS_GT': 'trunc', 'MASK_LW': 1.0, 'REGION_LOSS_TYPE': 'CE', 'REGION_LOSS_MASK_GT': 'visib', 'REGION_LW': 1.0, 'NUM_PM_POINTS': 3000, 'PM_LOSS_TYPE': 'L1', 'PM_SMOOTH_L1_BETA': 1.0, 'PM_LOSS_SYM': True, 'PM_NORM_BY_EXTENT': False, 'PM_R_ONLY': True, 'PM_DISENTANGLE_T': False, 'PM_DISENTANGLE_Z': False, 'PM_T_USE_POINTS': True, 'PM_LW': 1.0, 'ROT_LOSS_TYPE': 'angular', 'ROT_LW': 0.0, 'CENTROID_LOSS_TYPE': 'L1', 'CENTROID_LW': 1.0, 'Z_LOSS_TYPE': 'L1', 'Z_LW': 1.0, 'TRANS_LOSS_TYPE': 'L1', 'TRANS_LOSS_DISENTANGLE': True, 'TRANS_LW': 0.0, 'BIND_LOSS_TYPE': 'L1', 'BIND_LW': 0.0}}, 'KEYPOINT_ON': False, 'LOAD_PROPOSALS': False}, 'EXP_ID': 'convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv_test', 'RESUME': False}
20240102_103057|core.utils.default_args_setup@144: Full config saved to output/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py
Global seed set to 57917688
20240102_103057|d2.utils.env@41: Using a generated random seed 57917688
20240102_103057|core.utils.default_args_setup@162: Used mmcv backend: cv2
20240102_103057|__main__@157: Used GDRN module name: GDRN_double_mask
20240102_103058|timm.models.helpers@244: Loading pretrained weights from url (https://dl.fbaipublicfiles.com/convnext/convnext_base_1k_224_ema.pth)
20240102_103059|core.gdrn_modeling.models.GDRN_double_mask@600: Check if the backbone has been initialized with its own method!
20240102_103059|__main__@159: Model:
GDRN_DoubleMask(
  (backbone): FeatureListNet(
    (stem_0): Conv2d(3, 128, kernel_size=(4, 4), stride=(4, 4))
    (stem_1): LayerNorm2d((128,), eps=1e-06, elementwise_affine=True)
    (stages_0): ConvNeXtStage(
      (downsample): Identity()
      (blocks): Sequential(
        (0): ConvNeXtBlock(
          (conv_dw): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128)
          (norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=128, out_features=512, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=512, out_features=128, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
        (1): ConvNeXtBlock(
          (conv_dw): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128)
          (norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=128, out_features=512, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=512, out_features=128, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
        (2): ConvNeXtBlock(
          (conv_dw): Conv2d(128, 128, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=128)
          (norm): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=128, out_features=512, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=512, out_features=128, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
      )
    )
    (stages_1): ConvNeXtStage(
      (downsample): Sequential(
        (0): LayerNorm2d((128,), eps=1e-06, elementwise_affine=True)
        (1): Conv2d(128, 256, kernel_size=(2, 2), stride=(2, 2))
      )
      (blocks): Sequential(
        (0): ConvNeXtBlock(
          (conv_dw): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256)
          (norm): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=256, out_features=1024, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1024, out_features=256, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
        (1): ConvNeXtBlock(
          (conv_dw): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256)
          (norm): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=256, out_features=1024, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1024, out_features=256, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
        (2): ConvNeXtBlock(
          (conv_dw): Conv2d(256, 256, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=256)
          (norm): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=256, out_features=1024, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1024, out_features=256, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
      )
    )
    (stages_2): ConvNeXtStage(
      (downsample): Sequential(
        (0): LayerNorm2d((256,), eps=1e-06, elementwise_affine=True)
        (1): Conv2d(256, 512, kernel_size=(2, 2), stride=(2, 2))
      )
      (blocks): Sequential(
        (0-26): 27 x ConvNeXtBlock(
          (conv_dw): Conv2d(512, 512, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=512)
          (norm): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=512, out_features=2048, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=2048, out_features=512, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
      )
    )
    (stages_3): ConvNeXtStage(
      (downsample): Sequential(
        (0): LayerNorm2d((512,), eps=1e-06, elementwise_affine=True)
        (1): Conv2d(512, 1024, kernel_size=(2, 2), stride=(2, 2))
      )
      (blocks): Sequential(
        (0-2): 3 x ConvNeXtBlock(
          (conv_dw): Conv2d(1024, 1024, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1024)
          (norm): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=1024, out_features=4096, bias=True)
            (act): GELU(approximate='none')
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
          (drop_path): Identity()
        )
      )
    )
  )
  (geo_head_net): TopDownDoubleMaskXyzRegionHead(
    (features): ModuleList(
      (0): ConvTranspose2d(1024, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1), bias=False)
      (1): GroupNorm(32, 256, eps=1e-05, affine=True)
      (2): GELU(approximate='none')
      (3-4): 2 x ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (norm): GroupNorm(32, 256, eps=1e-05, affine=True)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): GELU(approximate='none')
      )
      (5): UpsamplingBilinear2d(scale_factor=2.0, mode='bilinear')
      (6-7): 2 x ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (norm): GroupNorm(32, 256, eps=1e-05, affine=True)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): GELU(approximate='none')
      )
      (8): UpsamplingBilinear2d(scale_factor=2.0, mode='bilinear')
      (9-10): 2 x ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (norm): GroupNorm(32, 256, eps=1e-05, affine=True)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): GELU(approximate='none')
      )
    )
    (out_layer): Conv2d(256, 1470, kernel_size=(1, 1), stride=(1, 1))
  )
  (pnp_net): ConvPnPNet(
    (act): GELU(approximate='none')
    (dropblock): LinearScheduler(
      (dropblock): DropBlock2D()
    )
    (features): ModuleList(
      (0): Conv2d(69, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): GroupNorm(32, 128, eps=1e-05, affine=True)
      (2): GELU(approximate='none')
      (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (4): GroupNorm(32, 128, eps=1e-05, affine=True)
      (5): GELU(approximate='none')
      (6): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (7): GroupNorm(32, 128, eps=1e-05, affine=True)
      (8): GELU(approximate='none')
    )
    (fc1): Linear(in_features=8192, out_features=1024, bias=True)
    (fc2): Linear(in_features=1024, out_features=256, bias=True)
    (fc_r): Linear(in_features=256, out_features=6, bias=True)
    (fc_t): Linear(in_features=256, out_features=3, bias=True)
  )
)
20240102_103059|__main__@171: 102.873543M params
20240102_103059|d2.checkpoint.detection_checkpoint@38: [DetectionCheckpointer] Loading from  ...
20240102_103059|fvcore.common.checkpoint@148: No checkpoint found. Initializing model from scratch
20240102_103100|core.gdrn_modeling.datasets.ycbv_d2@310: loading dataset dicts: ycbv_test
20240102_103100|ERR|__main__@233: An error has been caught in function '<module>', process 'MainProcess' (15333), thread 'MainThread' (140609077046144):
Traceback (most recent call last):

> File "/home/mona/gdrnpp_bop2022/./core/gdrn_modeling/main_gdrn.py", line 233, in <module>
    main(args)
    │    └ Namespace(config_file='configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py', resume...
    └ <function main at 0x7fe0db649bd0>

  File "/home/mona/gdrnpp_bop2022/./core/gdrn_modeling/main_gdrn.py", line 205, in main
    ).run(args, cfg)
          │     └ Config (path: configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py): {'OUTPUT_ROOT':...
          └ Namespace(config_file='configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py', resume...

  File "/home/mona/.local/lib/python3.10/site-packages/pytorch_lightning/lite/lite.py", line 408, in _run_impl
    return run_method(*args, **kwargs)
           │           │       └ {}
           │           └ (Namespace(config_file='configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py', resum...
           └ functools.partial(<bound method LightningLite._run_with_strategy_setup of <__main__.Lite object at 0x7fe218caccd0>>, <bound m...
  File "/home/mona/.local/lib/python3.10/site-packages/pytorch_lightning/lite/lite.py", line 413, in _run_with_strategy_setup
    return run_method(*args, **kwargs)
           │           │       └ {}
           │           └ (Namespace(config_file='configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py', resum...
           └ <bound method Lite.run of <__main__.Lite object at 0x7fe218caccd0>>

  File "/home/mona/gdrnpp_bop2022/./core/gdrn_modeling/main_gdrn.py", line 183, in run
    return self.do_test(cfg, model)
           │    │       │    └ _LiteModule(
           │    │       │        (_module): GDRN_DoubleMask(
           │    │       │          (backbone): FeatureListNet(
           │    │       │            (stem_0): Conv2d(3, 128, kernel_size=(4, 4),...
           │    │       └ Config (path: configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py): {'OUTPUT_ROOT':...
           │    └ <function GDRN_Lite.do_test at 0x7fe0e2940670>
           └ <__main__.Lite object at 0x7fe218caccd0>

  File "/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../core/gdrn_modeling/engine/engine.py", line 157, in do_test
    data_loader = build_gdrn_test_loader(cfg, dataset_name, train_objs=evaluator.train_objs)
                  │                      │    │                        │         └ ['002_master_chef_can', '003_cracker_box', '004_sugar_box', '005_tomato_soup_can', '006_mustard_bottle', '007_tuna_fish_can',...
                  │                      │    │                        └ <core.gdrn_modeling.engine.gdrn_evaluator.GDRN_Evaluator object at 0x7fe0db232290>
                  │                      │    └ 'ycbv_test'
                  │                      └ Config (path: configs/gdrn/ycbv/convnext_a6_AugCosyAAEGray_BG05_mlL1_DMask_amodalClipBox_classAware_ycbv.py): {'OUTPUT_ROOT':...
                  └ <function build_gdrn_test_loader at 0x7fe0e2da1bd0>

  File "/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../core/gdrn_modeling/datasets/data_loader.py", line 914, in build_gdrn_test_loader
    dataset_dicts = get_detection_dataset_dicts(
                    └ <function get_detection_dataset_dicts at 0x7fe117c9f010>

  File "/home/mona/anaconda3/envs/gdrnpp/lib/python3.10/site-packages/detectron2/data/build.py", line 253, in get_detection_dataset_dicts
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in names]
                     │              │                                     └ ['ycbv_test']
                     │              └ <function _DatasetCatalog.get at 0x7fe117c83370>
                     └ DatasetCatalog(registered datasets: coco_2014_train, coco_2014_val, coco_2014_minival, coco_2014_valminusminival, coco_2017_t...
  File "/home/mona/anaconda3/envs/gdrnpp/lib/python3.10/site-packages/detectron2/data/build.py", line 253, in <listcomp>
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in names]
                     │              │   │                 └ 'ycbv_test'
                     │              │   └ 'ycbv_test'
                     │              └ <function _DatasetCatalog.get at 0x7fe117c83370>
                     └ DatasetCatalog(registered datasets: coco_2014_train, coco_2014_val, coco_2014_minival, coco_2014_valminusminival, coco_2017_t...
  File "/home/mona/anaconda3/envs/gdrnpp/lib/python3.10/site-packages/detectron2/data/catalog.py", line 58, in get
    return f()
           └ <core.gdrn_modeling.datasets.ycbv_d2.YCBV_Dataset object at 0x7fe0db67dc60>

  File "/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../core/gdrn_modeling/datasets/ycbv_d2.py", line 316, in __call__
    dataset_dicts.extend(self._load_from_idx_file(ann_file, image_root))
    │             │      │    │                   │         └ '/home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv/test'
    │             │      │    │                   └ '/home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv/image_sets/keyframe.txt'
    │             │      │    └ <function YCBV_Dataset._load_from_idx_file at 0x7fe0e41ca950>
    │             │      └ <core.gdrn_modeling.datasets.ycbv_d2.YCBV_Dataset object at 0x7fe0db67dc60>
    │             └ <method 'extend' of 'list' objects>
    └ []

  File "/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../core/gdrn_modeling/datasets/ycbv_d2.py", line 100, in _load_from_idx_file
    with open(idx_file, "r") as f:
              └ '/home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv/image_sets/keyframe.txt'

FileNotFoundError: [Errno 2] No such file or directory: '/home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv/image_sets/keyframe.txt'
(gdrnpp) mona@ada:~/gdrnpp_bop2022$ ls /home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv
lrwxrwxrwx 1 mona mona 22 Jan  2 09:43 /home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv -> /data2/data/BOP/YCB-V/
(gdrnpp) mona@ada:~/gdrnpp_bop2022$ ls /data2/data/BOP/YCB-V/
total 126G
drwxr-xr-x 14 mona mona 4.0K Aug  6  2019 test
drwxr-xr-x 82 mona mona 4.0K Aug  6  2019 train_real
drwxr-xr-x 82 mona mona 4.0K Aug 12  2019 train_synt
drwxrwxr-x  2 mona mona 4.0K Sep 27  2019 ycbv
drwxrwx--- 52 mona mona 4.0K Jun 15  2020 train_pbr
drwxrwxr-x  2 mona mona 4.0K Oct 24 16:13 models
drwxrwxr-x  2 mona mona 4.0K Oct 24 16:13 models_eval
drwxrwxr-x  2 mona mona 4.0K Oct 24 16:13 models_fine
drwxrwxr-x 10 mona mona 4.0K Oct 30 08:16 .
drwxrwxr-x  9 mona mona 4.0K Dec 11 15:53 ..
-rw-rw-r--  1 mona mona 501M Aug 15  2019 ycbv_models.zip
-rw-rw-r--  1 mona mona  16K Sep 27  2019 ycbv_base.zip
-rw-rw-r--  1 mona mona  71G Jun 10  2020 ycbv_train_real.zip
-rw-rw-r--  1 mona mona  21G Jun 11  2020 ycbv_train_synt.zip
-rw-rw-r--  1 mona mona  14G Jun 11  2020 ycbv_test_all.zip
-rw-rw-r--  1 mona mona 630M Jun 11  2020 ycbv_test_bop19.zip
-rw-rw-r--  1 mona mona  20G Jun 17  2020 ycbv_train_pbr.zip
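
The listing above shows that the BOP archives were extracted, but there is no `image_sets/` directory: `keyframe.txt` comes from the original YCB-Video (PoseCNN) release, not from the BOP zips. A quick pre-flight check along these lines (paths and the required-file list are taken from the traceback and are illustrative, not exhaustive) would surface this before the run starts:

```python
from pathlib import Path

def missing_ycbv_files(ycbv_root: str) -> list[str]:
    """Return required-but-absent files/dirs under the ycbv dataset root."""
    root = Path(ycbv_root)
    required = [
        root / "image_sets" / "keyframe.txt",  # keyframe list from the PoseCNN release
        root / "test",
        root / "models",
        root / "models_eval",
    ]
    return [str(p) for p in required if not p.exists()]

# e.g. missing_ycbv_files("datasets/BOP_DATASETS/ycbv")
```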
monajalal commented 6 months ago

I actually downloaded it from the following link, not just the BOP challenge website:

https://rse-lab.cs.washington.edu/projects/posecnn/

Screenshot from 2024-01-02 10-46-31

20240102_104611|__main__@171: 102.873543M params
20240102_104611|d2.checkpoint.detection_checkpoint@38: [DetectionCheckpointer] Loading from  ...
20240102_104611|fvcore.common.checkpoint@148: No checkpoint found. Initializing model from scratch
20240102_104612|core.gdrn_modeling.datasets.ycbv_d2@310: loading dataset dicts: ycbv_test
  0%|          | 0/2949 [00:00<?, ?it/s]20240102_104614|core.gdrn_modeling.datasets.ycbv_d2@358: cache models to /home/mona/gdrnpp_bop2022/datasets/BOP_DATASETS/ycbv/models/models_ycbv_test.pkl
  0%|          | 5/2949 [00:00<09:17,  5.28it/s]
20240102_104614|ERR|__main__@233: An error has been caught in function '<module>', process 'MainProcess' (17317), thread 'MainThread' (140455184784256):
Traceback (most recent call last):

  [... call stack identical to the previous traceback, from main_gdrn.py through YCBV_Dataset.__call__ into _load_from_idx_file ...]

  File "/home/mona/gdrnpp_bop2022/core/gdrn_modeling/../../core/gdrn_modeling/datasets/ycbv_d2.py", line 135, in _load_from_idx_file
    cam_anno = np.array(scene_cam_dicts[scene_id][str_im_id]["cam_K"], dtype=np.float32).reshape(3, 3)
               │  │     │               │         │                          │  └ <class 'numpy.float32'>
               │  │     │               │         │                          └ <module 'numpy' from '/home/mona/.local/lib/python3.10/site-packages/numpy/__init__.py'>
               │  │     │               │         └ '135'
               │  │     │               └ 48
               │  │     └ {48: {'1': {'cam_K': [1066.778, 0.0, 312.9869, 0.0, 1067.487, 241.3109, 0.0, 0.0, 1.0], 'cam_R_w2c': [0.775038, 0.630563, -0....
               │  └ <built-in function array>
               └ <module 'numpy' from '/home/mona/.local/lib/python3.10/site-packages/numpy/__init__.py'>

KeyError: '135'

^^ I am going to spend some time on this error.
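One way to narrow this down is to check which image ids from keyframe.txt are actually present in the per-scene camera annotations. A minimal sketch, assuming the data shapes shown in the traceback (`scene_cam_dicts[scene_id]` keyed by stringified image id); the function name and toy data are illustrative, not the repo's actual API:

```python
def find_missing_cam_ids(scene_cam_dict, im_ids):
    """Return the keyframe image ids that have no entry in scene_camera.json."""
    return [i for i in im_ids if str(i) not in scene_cam_dict]

# Toy data shaped like scene_cam_dicts[48] in the traceback above.
scene_cams = {
    "1": {"cam_K": [1066.778, 0.0, 312.9869, 0.0, 1067.487, 241.3109, 0.0, 0.0, 1.0]},
}

print(find_missing_cam_ids(scene_cams, [1, 135]))  # [135]
```

If this reports missing ids, the keyframe list and the downloaded test scenes likely come from mismatched sources.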

Now, my ycbv folder looks like:

(gdrnpp) mona@ada:~/gdrnpp_bop2022$ ls /data2/data/BOP/YCB-V/
total 126G
drwxrwxrwx 14 mona mona 4.0K Sep 13  2017 keyframes
drwxrwxrwx 14 mona mona 4.0K Oct 30  2017 pairs
drwxrwxrwx  2 mona mona 4.0K Nov 27  2017 cameras
drwxr-xr-x 14 mona mona 4.0K Aug  6  2019 test
drwxr-xr-x 82 mona mona 4.0K Aug  6  2019 train_real
drwxr-xr-x 82 mona mona 4.0K Aug 12  2019 train_synt
drwxrwxr-x  2 mona mona 4.0K Sep 27  2019 ycbv
drwxrwx--- 52 mona mona 4.0K Jun 15  2020 train_pbr
drwxrwxrwx  2 mona mona 4.0K Sep 16 23:03 poses
drwxrwxrwx  2 mona mona 4.0K Sep 16 23:15 image_sets
drwxrwxr-x  2 mona mona 4.0K Oct 24 16:13 models_eval
drwxrwxr-x  2 mona mona 4.0K Oct 24 16:13 models_fine
drwxrwxr-x 10 mona mona 4.0K Jan  2 10:42 ..
drwxrwxr-x 15 mona mona 4.0K Jan  2 10:46 .
drwxrwxr-x 23 mona mona 4.0K Jan  2 10:46 models
-rw-r--r--  1 mona mona 1.1K Nov 27  2017 LICENSE
-rw-r--r--  1 mona mona 1.7K Nov 27  2017 README
-rw-rw-r--  1 mona mona 501M Aug 15  2019 ycbv_models.zip
-rw-rw-r--  1 mona mona  16K Sep 27  2019 ycbv_base.zip
-rw-rw-r--  1 mona mona  71G Jun 10  2020 ycbv_train_real.zip
-rw-rw-r--  1 mona mona  21G Jun 11  2020 ycbv_train_synt.zip
-rw-rw-r--  1 mona mona  14G Jun 11  2020 ycbv_test_all.zip
-rw-rw-r--  1 mona mona 630M Jun 11  2020 ycbv_test_bop19.zip
-rw-rw-r--  1 mona mona  20G Jun 17  2020 ycbv_train_pbr.zip
-rw-rw-r--  1 mona mona 374M Jan  2 10:44 YCB-Video-Base.zip

Please let me know your thoughts.

jfitzg7 commented 5 months ago

Hi @monajalal, sorry for the late reply, but sadly I have not run into that KeyError before. I believe I downloaded the keyframe.txt file from https://github.com/yuxng/YCB_Video_toolbox, but the one you posted looks like it might be better and contain more of the necessary files!

However, my thought is that the keyframe.txt from that website might be causing the error. I'm not entirely sure, though, since I haven't compared it against the one you provided.
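To compare the two candidate keyframe files, it may help to parse them into comparable (scene, frame) pairs first. A sketch assuming the YCB_Video_toolbox keyframe.txt format of one `SCENE/FRAME` entry per line (e.g. `0048/000001`); the sample entries here are made up:

```python
def parse_keyframes(lines):
    """Parse 'SCENE/FRAME' lines into (scene_id, im_id) integer pairs."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        scene, frame = line.split("/")
        pairs.append((int(scene), int(frame)))
    return pairs

a = parse_keyframes(["0048/000001", "0048/000135"])
b = parse_keyframes(["0048/000001"])
print(sorted(set(a) - set(b)))  # entries only in the first list: [(48, 135)]
```

Running this over both files (e.g. with `parse_keyframes(open(path))`) would show exactly which frames one list references that the other does not.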

monajalal commented 5 months ago

@jfitzg7 Have you been able to train gdrnpp from scratch on a custom dataset made with blenderproc? Would you be able to get in touch at mona.jalal.tmh@gmail.com? Thank you.

monajalal commented 5 months ago

I am seeing this when I search for keyframes.txt

(base) mona@ada:/data2/data/BOP/YCB-V$ ls
total 126G
drwxrwxrwx 14 mona mona 4.0K Sep 13  2017 keyframes
drwxrwxrwx 14 mona mona 4.0K Oct 30  2017 pairs
drwxrwxrwx  2 mona mona 4.0K Nov 27  2017 cameras
drwxrwxrwx 82 mona mona 4.0K Aug  6  2019 train_real
drwxrwxrwx 82 mona mona 4.0K Aug 12  2019 train_synt
drwxrwxrwx 52 mona mona 4.0K Jun 15  2020 train_pbr
drwxrwxrwx  2 mona mona 4.0K Sep 16 23:03 poses
drwxrwxrwx  2 mona mona 4.0K Sep 16 23:15 image_sets
drwxrwxrwx  2 mona mona 4.0K Oct 24 16:13 models_eval
drwxrwxrwx  2 mona mona 4.0K Oct 24 16:13 models_fine
drwxrwxrwx 23 mona mona 4.0K Jan  2 13:34 models
drwxrwxrwx 15 mona mona 4.0K Jan  3 14:29 test
drwxrwxrwx  2 mona mona 4.0K Jan  3 14:33 ycbv
drwxrwxrwx 15 mona mona 4.0K Jan  3 14:33 .
drwxrwxrwx 11 mona mona 4.0K Jan  8 08:38 ..
-rwxrwxrwx  1 mona mona 1.1K Nov 27  2017 LICENSE
-rwxrwxrwx  1 mona mona 1.7K Nov 27  2017 README
-rwxrwxrwx  1 mona mona  137 Aug 12  2019 camera_cmu.json
-rwxrwxrwx  1 mona mona 262K Aug 13  2019 test_targets_bop19.json
-rwxrwxrwx  1 mona mona 4.0K Aug 14  2019 dataset_info.md
-rwxrwxrwx  1 mona mona 501M Aug 15  2019 ycbv_models.zip
-rwxrwxrwx  1 mona mona  16K Sep 27  2019 ycbv_base.zip
-rwxrwxrwx  1 mona mona  137 Sep 27  2019 camera_uw.json
-rwxrwxrwx  1 mona mona  71G Jun 10  2020 ycbv_train_real.zip
-rwxrwxrwx  1 mona mona  21G Jun 11  2020 ycbv_train_synt.zip
-rwxrwxrwx  1 mona mona  14G Jun 11  2020 ycbv_test_all.zip
-rwxrwxrwx  1 mona mona 630M Jun 11  2020 ycbv_test_bop19.zip
-rwxrwxrwx  1 mona mona  20G Jun 17  2020 ycbv_train_pbr.zip
-rwxrwxrwx  1 mona mona 374M Jan  2 10:44 YCB-Video-Base.zip
-rwxrwxrwx  1 mona mona 368M Jan  2 11:20 YCB_Video_Models.zip
(base) mona@ada:/data2/data/BOP/YCB-V$ find . -name "keyframes.txt"
./keyframes/0056/keyframes.txt
./keyframes/0059/keyframes.txt
./keyframes/0049/keyframes.txt
./keyframes/0053/keyframes.txt
./keyframes/0051/keyframes.txt
./keyframes/0048/keyframes.txt
./keyframes/0055/keyframes.txt
./keyframes/0057/keyframes.txt
./keyframes/0052/keyframes.txt
./keyframes/0054/keyframes.txt
./keyframes/0050/keyframes.txt
./keyframes/0058/keyframes.txt

I also noticed it here

https://raw.githubusercontent.com/yuxng/YCB_Video_toolbox/master/keyframe.txt

This matches what you say, except I'm not sure why the data I downloaded has so many keyframes.txt files.

@jfitzg7

monajalal commented 5 months ago

@jfitzg7

Also, have you been able to train gdrnpp on a custom dataset from scratch?

kevinDrawn commented 4 months ago

@monajalal Were you able to finish training on your custom data for pose estimation?

shanice-l commented 3 months ago

I think this error is related to building the bop renderer against the wrong imageio version.
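For context, the original `ValueError: invalid literal for int() with base 10: 'post1'` comes from imageio's pillow plugin doing `tuple(int(x) for x in pil_version.split("."))`, which breaks on Pillow versions like `9.0.0.post1`. A minimal sketch of a tolerant workaround that keeps only the leading numeric components (upgrading imageio, or pinning Pillow to a plain `x.y.z` release, is the cleaner fix); the function name here is illustrative:

```python
def numeric_version(version_string):
    """Return the leading numeric components of a version string as a tuple.

    Stops at the first non-numeric part, so "9.0.0.post1" -> (9, 0, 0)
    instead of raising ValueError on int("post1").
    """
    parts = []
    for x in version_string.split("."):
        if not x.isdigit():
            break
        parts.append(int(x))
    return tuple(parts)

print(numeric_version("9.0.0.post1"))  # (9, 0, 0)
print(numeric_version("9.5.0"))        # (9, 5, 0)
```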