Closed: Shaheerahmadzai closed this issue 1 month ago
@Shaheerahmadzai can you share the entire error log?
It looks like you are using GPU ids in the .sh script that the machine does not have.
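If it is a GPU-id problem, a quick sanity check is to compare the ids the script requests against how many GPUs the machine actually reports. The helper below is illustrative only (not from the repo's scripts):

```python
def invalid_gpu_ids(requested, available_count):
    """Return the requested GPU ids that this machine does not have.

    `requested` is a comma-separated id list of the kind a launch script
    might export as CUDA_VISIBLE_DEVICES; `available_count` is how many
    GPUs the machine reports. Both are assumptions for illustration.
    """
    ids = [int(x) for x in requested.split(",") if x.strip()]
    return [i for i in ids if i >= available_count]

# e.g. a script asking for GPUs "0,1,2,3" on a 2-GPU machine:
print(invalid_gpu_ids("0,1,2,3", 2))  # -> [2, 3]
```

On the machine itself, `nvidia-smi -L` or `python -c "import torch; print(torch.cuda.device_count())"` gives the available count to plug in.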
Okay, this is the entire error log:
(sapiens-full) sahmadzai@nextcloud:~/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose$ python3 demo/image_demo.py /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/INPUT/img.jpg /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/configs/sapiens_pose/coco_wholebody/sapiens_0.3b-210e_coco_wholebody-1024x768.py /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/sapiens_host/pose/checkpoints/sapiens_1b/sapiens_1b_goliath_best_goliath_AP_640.pth?download=true --out-file /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/OUTPUT/pose_image.jpg
/home/sahmadzai/miniconda3/envs/sapiens-full/lib/python3.8/site-packages/albumentations/__init__.py:13: UserWarning: A new version of Albumentations is available: 1.4.15 (you have 1.4.14). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
/home/sahmadzai/miniconda3/envs/sapiens-full/lib/python3.8/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/sahmadzai/miniconda3/envs/sapiens-full/lib/python3.8/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
warnings.warn(_BETA_TRANSFORMS_WARNING)
No module named 'airstore'
Warning! Make sure you are not training.
Loads checkpoint by local backend from path: /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/sapiens_host/pose/checkpoints/sapiens_1b/sapiens_1b_goliath_best_goliath_AP_640.pth?download=true
09/16 13:28:48 - mmengine - INFO - Resize the pos_embed shape from torch.Size([1, 3072, 1536]) to torch.Size([1, 3072, 1024]).
The model and loaded state dict do not match exactly
size mismatch for backbone.pos_embed: copying a param with shape torch.Size([1, 3119, 1536]) from checkpoint, the shape in current model is torch.Size([1, 3072, 1024]).
size mismatch for backbone.patch_embed.projection.weight: copying a param with shape torch.Size([1536, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([1024, 3, 16, 16]).
size mismatch for backbone.patch_embed.projection.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.0.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.0.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.0.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.0.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.0.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.0.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.0.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.0.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.0.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.0.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.0.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.0.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.1.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.1.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.1.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.1.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.1.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.1.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.2.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.2.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.2.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.2.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.2.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.2.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.2.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.2.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.2.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.2.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.2.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.2.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.3.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.3.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.3.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.3.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.3.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.3.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.3.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.3.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.3.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.3.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.3.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.3.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.4.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.4.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.4.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.4.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.4.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.4.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.10.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.10.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.10.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.10.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.10.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.10.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.11.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.11.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.11.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.11.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.11.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.11.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.11.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.11.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.11.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.11.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.11.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.11.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.12.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.12.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.12.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.12.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.12.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.12.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.12.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.12.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.12.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.12.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.12.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.12.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.13.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.13.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.13.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.13.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.13.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.13.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.13.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.13.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.13.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.14.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.14.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.14.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.15.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.15.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.15.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.15.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.15.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.15.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.15.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.15.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.15.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.15.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.15.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.15.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.16.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.16.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.16.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.16.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.16.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.16.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.16.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.16.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.16.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.16.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.16.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.16.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.17.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.17.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.17.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.17.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.17.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.17.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.17.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.17.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.17.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.17.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.17.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.17.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.18.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.18.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.18.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.18.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.18.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.18.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.18.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.18.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.18.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.18.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.18.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.18.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.19.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.19.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.19.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.19.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.19.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.19.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.19.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.19.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.19.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.19.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.19.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.19.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.20.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.20.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.20.attn.qkv.weight: copying a param with shape torch.Size([4608, 1536]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.20.attn.qkv.bias: copying a param with shape torch.Size([4608]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for backbone.layers.20.attn.proj.weight: copying a param with shape torch.Size([1536, 1536]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.20.attn.proj.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.20.ln2.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.20.ln2.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.layers.20.ffn.layers.0.0.weight: copying a param with shape torch.Size([6144, 1536]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.20.ffn.layers.0.0.bias: copying a param with shape torch.Size([6144]) from checkpoint, the shape in current model is torch.Size([4096]).
size mismatch for backbone.layers.20.ffn.layers.1.weight: copying a param with shape torch.Size([1536, 6144]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.20.ffn.layers.1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
[the same size mismatches are reported for every parameter of backbone.layers.21 through backbone.layers.23: ln1, attn.qkv, attn.proj, ln2, and ffn weights and biases, all 1536-wide in the checkpoint vs 1024-wide in the current model]
size mismatch for backbone.ln1.weight: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for backbone.ln1.bias: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head.deconv_layers.0.weight: copying a param with shape torch.Size([1536, 768, 4, 4]) from checkpoint, the shape in current model is torch.Size([1024, 768, 4, 4]).
size mismatch for head.final_layer.weight: copying a param with shape torch.Size([308, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([133, 768, 1, 1]).
size mismatch for head.final_layer.bias: copying a param with shape torch.Size([308]) from checkpoint, the shape in current model is torch.Size([133]).
unexpected key in source state_dict: backbone.layers.24 through backbone.layers.39 (for each of these layers: ln1.{weight,bias}, attn.qkv.{weight,bias}, attn.proj.{weight,bias}, ln2.{weight,bias}, ffn.layers.0.0.{weight,bias}, ffn.layers.1.{weight,bias})
/home/sahmadzai/miniconda3/envs/sapiens-full/lib/python3.8/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the save_dir argument.
warnings.warn(f'Failed to add {vis_backend.__class__}, '
/home/sahmadzai/miniconda3/envs/sapiens-full/lib/python3.8/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.TensorboardVisBackend'>, please provide the save_dir argument.
warnings.warn(f'Failed to add {vis_backend.__class__}, '
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [10264,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
[the same assertion is repeated for the remaining threads of the block]
Traceback (most recent call last):
  File "demo/image_demo.py", line 110, in [...]
[traceback truncated in the paste; it ends with PyTorch's standard hint to compile with TORCH_USE_CUDA_DSA to enable device-side assertions]
Python version: 3.8.19, Driver Version: 535.183.01, CUDA Version: 12.2
@Shaheerahmadzai your config and model checkpoint do not match: the config is for sapiens-0.3b, but the checkpoint is sapiens-1b.
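A quick way to tell which variant a downloaded checkpoint actually is, without guessing from the filename, is to inspect the width of a backbone parameter. This is a hedged sketch: the layer name and the dimension-to-variant mapping are taken from the shapes reported in the log above (1536 for sapiens-1b, 1024 for the 0.3b config); plain tuples stand in for tensor shapes, and in practice you would read them from `torch.load(ckpt)["state_dict"]`.

```python
# Map backbone embedding width to the Sapiens variant (values taken from the
# size-mismatch messages in the log; treat other widths as unknown).
EMBED_DIM_TO_VARIANT = {1024: "sapiens-0.3b", 1536: "sapiens-1b"}

def infer_variant(shapes):
    """Guess the Sapiens variant from the width of a backbone LayerNorm."""
    dim = shapes["backbone.layers.0.ln1.weight"][0]
    return EMBED_DIM_TO_VARIANT.get(dim, f"unknown (dim={dim})")

# Shapes as reported in the error log above:
checkpoint_shapes = {"backbone.layers.0.ln1.weight": (1536,)}
model_shapes = {"backbone.layers.0.ln1.weight": (1024,)}
print(infer_variant(checkpoint_shapes))  # sapiens-1b
print(infer_variant(model_shapes))       # sapiens-0.3b
```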
/home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/configs/sapiens_pose/coco_wholebody/sapiens_0.3b-210e_coco_wholebody-1024x768.py /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/sapiens_host/pose/checkpoints/sapiens_1b/sapiens_1b_goliath_best_goliath_AP_640.pth?download=true --out-file /home/
I appreciate your help. I tried again with a matching config and model checkpoint, but the problem still exists.
python3 demo/image_demo.py /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/INPUT/img.jpg /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/configs/sapiens_pose/coco_wholebody/sapiens_1b-210e_coco_wholebody-1024x768.py /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/sapiens_host/pose/checkpoints/sapiens_1b/sapiens_1b_goliath_best_goliath_AP_640.pth?download=true --out-file /home/sahmadzai/00_PROJECT/00_FULLBODY/SAPIENS_H_R_M_HUMAN_TASKS/sapiens/pose/OUTPUT/new_pose_image.jpg
@Shaheerahmadzai you are now using a coco-wholebody (133 kps) config with a goliath (308 kps) checkpoint.
Please use the provided shell scripts as a reference for a consistent config, checkpoint, and model size.
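For readers hitting the same wall: the errors in the log are exactly what a strict state_dict comparison produces. A minimal sketch of that check (the parameter names and shapes below are illustrative, taken from the log: the goliath head has 308 keypoints, the coco-wholebody model head has 133):

```python
def report_mismatches(checkpoint_shapes, model_shapes):
    """Mimic strict load_state_dict checking: compare shapes key by key."""
    problems = []
    for name, ck_shape in checkpoint_shapes.items():
        if name not in model_shapes:
            problems.append(f"unexpected key in source state_dict: {name}")
        elif model_shapes[name] != ck_shape:
            problems.append(
                f"size mismatch for {name}: checkpoint {ck_shape} "
                f"vs model {model_shapes[name]}")
    return problems

ckpt = {"head.final_layer.bias": (308,),            # goliath: 308 keypoints
        "backbone.layers.24.ln1.weight": (1536,)}   # layer only in sapiens-1b
model = {"head.final_layer.bias": (133,)}           # coco-wholebody: 133 kps
for p in report_mismatches(ckpt, model):
    print(p)
```

If this report is empty for your chosen config/checkpoint pair, the later CUDA indexing assert should not occur either, since the head then predicts the keypoint count the rest of the pipeline indexes into.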
I want to run image_demo.py from pose, but I ran into a bunch of errors. I solved most of them except this one; I don't know how to set the device.
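On setting the device, a hedged sketch follows; whether image_demo.py accepts a `--device` flag is an assumption, so check its `--help`. Note also that the device-side assert in this log is most likely a downstream symptom of the 308-vs-133 keypoint head mismatch rather than of GPU selection itself.

```python
import os

# Make sure the process only sees a GPU this machine actually has; launcher
# scripts that hard-code gpu ids will otherwise address a missing device.
# This must be set before torch initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

def pick_device():
    """Prefer the first visible GPU, fall back to CPU."""
    try:
        import torch
        return "cuda:0" if torch.cuda.is_available() else "cpu"
    except ImportError:  # torch not installed in this sketch's environment
        return "cpu"

print(pick_device())
# Equivalent one-off from the shell (the --device flag is an assumption):
#   CUDA_VISIBLE_DEVICES=0 python3 demo/image_demo.py ... --device cuda:0
```

Running once with the device forced to CPU is also a useful debugging step: the opaque CUDA device-side assert becomes a readable Python IndexError that names the mismatched tensor.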