Closed: Mchapariniya closed this issue 1 month ago.
@Mchapariniya you can ignore the weights_only warning. However, the "missing keys in source state_dict" warning is worrying.
Can you share your entire error log along with the command you used to run it? Thanks.
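For background on why the weights_only warning exists: torch.load with weights_only=False runs pickle, which can import and call arbitrary globals during unpickling; weights_only=True plus torch.serialization.add_safe_globals switches to an allowlist model instead. The idea can be sketched in pure Python (a stdlib-only illustration of the allowlisting concept, not torch code):

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on an explicit allowlist."""
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# Plain containers of numbers need no globals, so a state-dict-like
# payload loads fine:
state = {"layer.weight": [0.1, 0.2], "layer.bias": [0.0]}
print(AllowlistUnpickler(io.BytesIO(pickle.dumps(state))).load())

# A pickle that references a global outside the allowlist is rejected
# before any code can run:
try:
    AllowlistUnpickler(io.BytesIO(pickle.dumps(len))).load()
except pickle.UnpicklingError as err:
    print(err)
```

A checkpoint that contains only tensors and plain containers loads cleanly under such a restriction, which is why the warning is harmless for checkpoints you trust.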
First, I commented out line 196 in demo_vis.py (override_ckpt_meta=True, # don't load the checkpoint metadata, load from config file) because I got the error below and the code could not run.
Error:
Distributing 100 image paths into 1 jobs.
/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: TorchScript support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the torch.compile optimizer instead.
from torch.distributed.optim import \
Loads checkpoint by local backend from path: /home/masoumeh/sapiens_host/detector/checkpoints/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth
/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmengine/runner/checkpoint.py:347: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(filename, map_location=map_location)
Traceback (most recent call last):
File "/home/masoumeh/sapiens/pose/demo/demo_vis.py", line 246, in
Second, I ran the demo commands:
# Run the model
cd pose/scripts/demo/local/
chmod +x *
export SAPIENS_CHECKPOINT_ROOT="$SAPIENS_ROOT/checkpoints/sapiens_host"
export OUTPUT="$SAPIENS_ROOT/output"
export INPUT="$SAPIENS_ROOT/pose/demo/data/itw_videos/reel1"
# In keypoints308.sh uncomment the 0.3b model and comment the others
./keypoints308.sh
After running the commands I got the warnings below:
(pose) masoumeh@Masoumeh:~/sapiens/pose/scripts/demo/local$ ./keypoints308.sh
Distributing 100 image paths into 1 jobs.
/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: TorchScript support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the torch.compile optimizer instead.
from torch.distributed.optim import \
Loads checkpoint by local backend from path: /home/masoumeh/sapiens_host/detector/checkpoints/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth
/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmengine/runner/checkpoint.py:347: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(filename, map_location=map_location)
Loads checkpoint by local backend from path: /home/masoumeh/sapiens_host/pose/checkpoints/sapiens_0.3b/sapiens_0.3b_goliath_best_goliath_AP_573.pth
The model and loaded state dict do not match exactly
missing keys in source state_dict: head.deconv_layers.1.weight, head.deconv_layers.1.bias, head.deconv_layers.1.running_mean, head.deconv_layers.1.running_var, head.deconv_layers.4.weight, head.deconv_layers.4.bias, head.deconv_layers.4.running_mean, head.deconv_layers.4.running_var, head.conv_layers.1.weight, head.conv_layers.1.bias, head.conv_layers.1.running_mean, head.conv_layers.1.running_var, head.conv_layers.4.weight, head.conv_layers.4.bias, head.conv_layers.4.running_mean, head.conv_layers.4.running_var
0%| | 0/100 [00:00<?, ?it/s]
/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmdet/models/layers/se_layer.py:158: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
with torch.cuda.amp.autocast(enabled=False):
/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmdet/models/backbones/csp_darknet.py:118: FutureWarning: torch.cuda.amp.autocast(args...) is deprecated. Please use torch.amp.autocast('cuda', args...) instead.
with torch.cuda.amp.autocast(enabled=False):
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [02:43<00:00, 1.63s/it]
/home/masoumeh/sapiens/pose/scripts/demo/local
Processing complete.
Results saved to /home/masoumeh/Desktop/sapiens/pose/Outputs/vis/itw_videos/reel2/sapiens_0.3b
@Mchapariniya It looks like your environment is still using standard mmlab dependencies - likely because the full-installation step is missing or ran into an issue.
For example, "/home/masoumeh/anaconda3/envs/pose/lib/python3.10/site-packages/mmengine/optim/optimizer/zero_optimizer.py" points to the standard mmengine package.
It is important to use the forked libraries provided in the repo. Please install them from source and try again: https://github.com/facebookresearch/sapiens/blob/main/_install/conda.sh
Our demos should run without any modification to the python code.
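A quick way to check whether the forked installs took effect is to see which file each package resolves to. This is only a diagnostic sketch (the package names are the mmlab ones sapiens depends on; any package that is not installed prints None):

```python
import importlib.util

def package_origin(name):
    """Return the file a top-level package resolves to, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# If these resolve under .../site-packages from a standard PyPI build,
# the forked installs did not take effect; after running
# _install/conda.sh they should resolve to the copies bundled with the
# sapiens repo.
for pkg in ("mmengine", "mmcv", "mmdet", "mmpose"):
    print(pkg, "->", package_origin(pkg))
```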
@rawalkhirodkar It seems the forked libraries differ a lot from the standard mmlab ones. At first I ran this repo with the standard libraries and the detected pose results were poor; after installing the forked libraries from source, the results were good. So, what is the difference between them?
Same error for me too:
File "sapiens/pose/demo/demo_vis.py", line 196, in main
pose_estimator = init_pose_estimator(
TypeError: init_model() got an unexpected keyword argument 'override_ckpt_meta'
@SamiraJahangiri please install from source the modified libs provided in the repo. Likely you are using standard mmlab libs.
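For what it's worth, that TypeError just means the installed init_model has no override_ckpt_meta parameter, i.e. the standard mmpose is being imported instead of the fork. The mismatch can be confirmed with a small introspection sketch (the init_model below is a hypothetical stand-in for the standard signature; the real fix remains installing the forked libs):

```python
import inspect

def call_with_supported_kwargs(fn, *args, **kwargs):
    """Drop keyword arguments the callee does not declare.

    Diagnostic helper only: if a kwarg gets silently dropped here, the
    installed library is missing a feature the caller expects.
    """
    params = inspect.signature(fn).parameters
    accepts_any = any(p.kind is inspect.Parameter.VAR_KEYWORD
                      for p in params.values())
    if not accepts_any:
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **kwargs)

# Hypothetical stand-in mirroring the standard mmpose init_model,
# which has no 'override_ckpt_meta' parameter:
def init_model(config, checkpoint=None):
    return (config, checkpoint)

# 'override_ckpt_meta' is filtered out instead of raising TypeError:
print(call_with_supported_kwargs(init_model, "cfg.py",
                                 checkpoint="ckpt.pth",
                                 override_ckpt_meta=True))
```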
Hi! First, I appreciate your great repository. My issue is that I am trying to run the Sapiens-0.3B model to extract 308 keypoints from the demo example. It works, but I get the warning below:
FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(filename, map_location=map_location)
Loads checkpoint by local backend from path: /home/masoumeh/sapiens_host/pose/checkpoints/sapiens_0.3b/sapiens_0.3b_goliath_best_goliath_AP_573.pth
The model and loaded state dict do not match exactly
missing keys in source state_dict: head.deconv_layers.1.weight, head.deconv_layers.1.bias, head.deconv_layers.1.running_mean, head.deconv_layers.1.running_var, head.deconv_layers.4.weight, head.deconv_layers.4.bias, head.deconv_layers.4.running_mean, head.deconv_layers.4.running_var, head.conv_layers.1.weight, head.conv_layers.1.bias, head.conv_layers.1.running_mean, head.conv_layers.1.running_var, head.conv_layers.4.weight, head.conv_layers.4.bias, head.conv_layers.4.running_mean, head.conv_layers.4.running_var
Should I be concerned about this warning, and why am I getting it?