guerrifrancesco opened this issue 3 years ago
You need to visualize the output to check if the tracking is correct.
@rirri93 Same issue, idx is always 0. Did you find out how to solve it?
I am seeing the same issue. The video shows correct tracking but the JSON file shows nothing.
Same problem when using Colab, can anyone help?
Hi dear AlphaPose users,
I spent a lot of time looking for a way to identify persons and track their skeletons over time (across frames). I read several works/websites and went through the tracker part of AlphaPose with its different options (--pose_track, --detector tracker, and --pose_flow), but without managing to track the skeletons correctly. I did succeed in generating results with the --pose_flow option; they end up in a folder "matching" that contains several files "numfram{i}numframe{i+1}_orb.txt" with content similar to the following:
215 31 215 31 7.000000 151 326 151 326 8.000000 545 324 545 324 8.000000 288 323 288 323 1.000000 279 323 279 323 3.000000 288 319 288 319 8.000000 280 319 280 319 8.000000 525 316 525 316 5.000000 38 315 38 315 4.000000 570 310 570 310 4.000000 562 310 562 310 9.000000 473 32 473 32 7.000000
But I can't understand the content of the different files in the "matching" folder, nor how to use them to track the skeleton of every single person in a video, like the last video at this link: https://medium.com/deepvisionguru/poseflow-real-time-pose-tracking-7f8062a7c996
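In case it helps, here is a minimal sketch for loading one of those files. The interpretation of each record as five values (previous x/y, current x/y, and a match score for an ORB keypoint matched between consecutive frames) is only my assumption from the sample above, not something confirmed by the AlphaPose/PoseFlow docs, and the filename is just an illustration:

```python
# Minimal sketch: load one PoseFlow "matching" file into 5-value records.
# ASSUMPTION: each record is (x_prev, y_prev, x_cur, y_cur, score) for an
# ORB keypoint matched between frame i and frame i+1 -- verify on your data.
from pathlib import Path

def load_orb_matches(path):
    values = [float(v) for v in Path(path).read_text().split()]
    # Group the flat list of numbers into consecutive 5-value records.
    return [tuple(values[i:i + 5]) for i in range(0, len(values), 5)]

# Hypothetical filename, following the naming pattern described above.
matches = load_orb_matches("matching/numframe1numframe2_orb.txt")
for x_prev, y_prev, x_cur, y_cur, score in matches[:3]:
    print(x_prev, y_prev, x_cur, y_cur, score)
```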
I also found this tool, https://github.com/Guanghan/lighttrack, which answers my question. But since I did all the work for one person based on the skeletons generated by AlphaPose, I would prefer an option related to AlphaPose.
I wonder if one of you has found a solution for this. I would be thankful if someone could help.
Kind regards, Yassine
Hi, I have encountered this situation and it has been solved. The problem is simple: if you use the .sh script to start the service, the parameter --pose_track is not passed through successfully. Please run the Python script directly, i.e. python xxxxx --pose_track.
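For example, something like the following (paths are illustrative; adjust the config, checkpoint, video, and output directory to your own setup):
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video your_video.mp4 --outdir your_output_dir --save_video --pose_track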
Hi @NaUyLL, Thank you for your time and your answer.
I had this problem when I was using Google Colab. It seems there is a problem using tracking on Google Colab, and that issue is still open as of now.
But when I got my workstation, by following the steps explained in the docs, both approaches --pose_track and --pose_flow gave me the tracking correctly, but only when running python scripts/demo_inference.py directly, as you mentioned above.
Thank you again :)
Hi Michael, do you get the same color during the whole video? Are there several persons in the video who appear simultaneously in the same frame? Best regards, Yassine
On Wed, Oct 13, 2021 at 21:19, Michael Moore @.***> wrote:
Hi All, I have this issue running on Ubuntu. I use the command:
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video /home/mmoore/alphapose/MA002/MA002_Updated.mp4 --outdir /home/mmoore/alphapose/MA002/run1 --save_video --pose_track
In the video, I see solid-color skeletons indicating tracking, but in the JSON file, there is only idx = 0 for all results.
-- ZAIM Yassine PhD in Applied Mathematics
Hi Yassine, I had intended to delete my comment. It turns out that my tracking is working fine. This is a nice tool. Thanks
Hi, I'm trying to start from a given video and obtain a JSON with the 2D poses, in order to use it with another code that lifts 2D skeletons to 3D skeletons. The problem is that in the AlphaPose-generated JSON, the first listed person (for example) is not the same in every frame. In other words, there is no consistent ordering of people across frames in the JSON file, and the idx field is always 0.0. I'm using this command:
./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} ${OUTPUT_DIR}, --pose_track
How can I solve this problem?
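Once --pose_track is actually applied (e.g. by calling demo_inference.py directly rather than through the .sh script, as noted above) and idx is populated, a minimal sketch for grouping the poses per person could look like the following. The field names ("image_id", "idx", "keypoints") and the output filename are assumptions based on the default AlphaPose JSON output, so check one entry of your own file first:

```python
# Minimal sketch: group AlphaPose results by tracked person id (idx).
# ASSUMPTION: the JSON is a list of entries with "image_id", "idx", and
# "keypoints" fields, as in the default output format -- verify on your file.
import json
from collections import defaultdict

with open("alphapose-results.json") as f:  # placeholder path
    results = json.load(f)

tracks = defaultdict(dict)  # person idx -> {image_id: flat keypoint list}
for entry in results:
    tracks[entry["idx"]][entry["image_id"]] = entry["keypoints"]

for person_id, frames in sorted(tracks.items()):
    print(f"person {person_id}: detected in {len(frames)} frames")
```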