Closed — hualuluu closed this issue 1 year ago
time CUDA_VISIBLE_DEVICES=0 python -m openpifpaf.train --lr=0.0003 --momentum=0.95 --b-scale=15.0 --clip-grad-value=10 \
  --epochs=250 --lr-decay 220 240 --lr-decay-epochs=6 --batch-size=10 --weight-decay=1e-5 --dataset=posetrack2018 --posetrack-upsample=2 \
  --posetrack-eval-extended-scale --posetrack-eval-orientation-invariant=0.1 --basenet=resnet18
I trained with this command. Then I ran prediction with:

python -m openpifpaf.predict image_path0 --image-output --json-output --checkpoint myself_pointpath

but I get these errors:
File "/mnt/TrackPoints/openpifpaf/src/openpifpaf/decoder/tracking_pose.py", line 222, in __call__
    ], dim=0)
TypeError: expected Tensor as element 1 in argument 0, but got NoneType
I found that the TCAF head requires two input images. Does that mean the TCAF head does not allow single-image input?
Hello hualuluu,
To use the tracking framework, you should set the backbone (the --basenet argument) in the training command to a version compatible with it. These are the options starting with a 't'; for example, --basenet=tresnet50 for a tracking ResNet-50.
You can find the possible values, as well as all the other options to set up the code in the documentation.
The commands you used look good apart from that.
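For concreteness, here is a sketch of your training command with the backbone swapped to a tracking variant. It reuses all of your original options unchanged and only substitutes --basenet=tresnet50 (the example above); check the documentation for other available tracking backbones:

```shell
# Same training options as before, but with a tracking backbone (tresnet50)
# so the TCAF head can be trained on image pairs.
time CUDA_VISIBLE_DEVICES=0 python -m openpifpaf.train \
  --lr=0.0003 --momentum=0.95 --b-scale=15.0 --clip-grad-value=10 \
  --epochs=250 --lr-decay 220 240 --lr-decay-epochs=6 \
  --batch-size=10 --weight-decay=1e-5 \
  --dataset=posetrack2018 --posetrack-upsample=2 \
  --posetrack-eval-extended-scale --posetrack-eval-orientation-invariant=0.1 \
  --basenet=tresnet50
```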
The TCAF head is used to link detected poses between images. So yes, it can only be used with pairs of images, to link them together.
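Because the tracker consumes consecutive frames rather than a single image, prediction is typically run through the video entry point, which feeds frame pairs to the decoder. A hedged sketch (the exact flag names may differ in your version; run the command with --help to confirm, and replace my_checkpoint and my_video.mp4 with your own paths):

```shell
# Run the tracking decoder over a frame sequence instead of a single image.
# my_checkpoint / my_video.mp4 are placeholders for your trained checkpoint
# and input video.
python -m openpifpaf.video --source my_video.mp4 \
  --checkpoint my_checkpoint --json-output
```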
Thank you. I'll try~ 😄
Where can I find an example of training and prediction on posetrack2018? Thank you for your help. QAQ