Open daquang opened 4 years ago
Hi,
I can't exactly pinpoint the error... The strange thing is that both the Detectron script and the visualization script use the same ffmpeg syntax for reading the video, but you said that the first step was successful.
It looks like the script is reading a partial frame from the stream at 21.95 seconds. Are you using the same video? You could perhaps try to re-encode the video. If it is corrupted for some reason, that should fix it.
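If it helps, re-encoding can be scripted. This is only a sketch of the suggestion above: the file names are placeholders, and the exact ffmpeg options are assumptions, not something prescribed by the pipeline. It shells out to ffmpeg with `subprocess` and skips gracefully if ffmpeg or the input file is missing.

```python
import os
import shutil
import subprocess

# Placeholder file names; substitute your own video.
src, dst = 'dance.mp4', 'dance_reencoded.mp4'

# Re-encode the video stream with libx264; this rewrites every frame,
# which often repairs files containing partial or damaged frames.
cmd = ['ffmpeg', '-y', '-i', src, '-c:v', 'libx264', dst]

if shutil.which('ffmpeg') and os.path.exists(src):
    subprocess.run(cmd, check=True)
else:
    print('skipping: need ffmpeg on PATH and', src)
```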
Hi,
Thanks for your quick response. Perhaps there's something wrong with how my video is encoded. Do you have an example of an .mp4 file that you know will work well with your workflow?
Here's the video I've been using: https://www.dropbox.com/s/58azh2wnqu7q95o/dance.mp4?dl=0
-Daniel
Just to add to this, I'm doing this on Windows. I've been using a combination of WSL 2 and the base Windows 10 system, so the behavior is very inconsistent. Anyway, I managed to get something working!
Any way to output the results to a format that can be read into a 3D rendering program (e.g. .bvh files)?
The argument `--viz-export` allows you to export the predicted 3D joint positions to a NumPy archive.
If you want to use them to animate a rigged skeleton, you'll probably need to run inverse kinematics on top of them.
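The exported archive can be loaded back with `np.load`. A minimal sketch: the file name and the exact array shape (frames x joints x 3) are assumptions, so this creates a stand-in file rather than relying on a real export.

```python
import numpy as np

# Stand-in for the file written by --viz-export; the name and the
# frames x joints x 3 layout are assumptions for illustration.
np.save('predictions.npy', np.zeros((100, 17, 3)))

# Load the predicted 3D joint positions.
predictions = np.load('predictions.npy')
print(predictions.shape)   # (100, 17, 3)

# One frame is a (joints, 3) array of (x, y, z) coordinates,
# which is the input you would feed to an IK solver per frame.
frame0 = predictions[0]
print(frame0.shape)        # (17, 3)
```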
I ran the first four steps fine without error, but step 5 is giving me the following error: