Hey!
I believe I found a small bug in the code. In the dataset files, the images are converted from BGR to RGB twice during training: once when loading the images, and a second time before applying augmentations (https://github.com/pixelite1201/BEDLAM/blob/master/train/dataset/dataset.py#L134). As a result, the model is trained on BGR images but evaluated on RGB images. After fixing this and retraining the BEDLAM-CLIFF model, the metrics on 3DPW improve by about 1 mm w.r.t. the numbers reported in the paper.
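To illustrate why the double conversion ends up feeding BGR to the model: BGR2RGB is just a reversal of the channel axis, so applying it twice is the identity. A minimal sketch using numpy (the channel flip below is equivalent to `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)`; the 1x1 "image" is a made-up example):

```python
import numpy as np

# Hypothetical 1x1 image with a pure-blue pixel, in BGR order (as OpenCV loads it).
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)

# BGR2RGB is simply a reversal of the last (channel) axis.
rgb = bgr[..., ::-1]    # first conversion, at load time  -> RGB
back = rgb[..., ::-1]   # second conversion, before augmentation -> BGR again

print(np.array_equal(back, bgr))  # True: the two conversions cancel out
```

So after the second conversion the training images are back in BGR order, while evaluation images (converted only once) are RGB.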