Closed Zumbalamambo closed 7 years ago
First, get stacked hourglass detections, then pass them through our network. Let me know if you run into a problem.
May I know how I can pass it into your network?
You need to pass the --sample flag when calling python predict_3dpose.py. Please refer to the sample function.
You might find it useful to follow the quick demo from our README, as it trains and evaluates 2d stacked hourglass detections, which seems to be what you want to do.
Thank you. Yes, it is now training. How long does it take on a CPU?
It takes around 20 minutes on my Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz. Let me know how long it takes on yours!
OK, I ran python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh --epochs 1 experiments/All/dropout_0.5/epochs_1/lr_0.001/residual/depth_2/linear_size1024/batch_size_64/no_procrustes/maxnorm/batch_normalization/use_stacked_hourglass/predict_17.
It has finished training 2700 batches so far. Do you have any pretrained model?
You can produce a model by running
python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh
This will train for 200 epochs, so it should take 200 times longer than the quick demo.
If you don't have the computation to do that don't worry, I'll train overnight and upload a model tomorrow.
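As a rough back-of-the-envelope check (assuming the ~20-minute single-epoch time quoted earlier in this thread for that CPU):

```python
# Rough training-time estimate: the 1-epoch quick demo took ~20 minutes
# on an i7-7700K, and the full training run is 200 epochs.
minutes_per_epoch = 20
epochs = 200
total_hours = minutes_per_epoch * epochs / 60
print(f"~{total_hours:.0f} hours on that CPU")  # ~67 hours
```

So training the full model on a CPU alone is a multi-day job, which is why waiting for the uploaded pretrained model is the practical option.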
Yeah, thank you. By the way, this is very, very impressive. I'm trying to get this working. Thank you so much for your work; I appreciate it. I will be waiting for the model. After uploading, please also add instructions on how I can run the pretrained model to get the skeleton as in the video. It's awesome.
Thanks! Please consider giving this project a star :star: on GitHub. I'll let you know when I have the model tomorrow.
You are a genius! Yes, I have starred it. I would star it a million times if I were given such an option.
The 2d model finished training. It took me around 2 hours :( . How do I use this model to process a realtime video stream?
As we mention in our paper (https://arxiv.org/pdf/1705.03098.pdf), our code only produces 3d poses given 2d poses as input. Therefore, if you want to process a video stream, first you have to go from an image to 2d pose. For this, you should use stacked hourglass. Once you get the 2d pose, you can use our model to obtain a 3d pose.
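At a high level, the two-stage pipeline described above looks like the sketch below. This is only an illustration of the data flow and tensor shapes, not the repo's actual API: `detect_2d_pose` and `lift_to_3d` are hypothetical placeholder functions, and the single linear layer stands in for the real fully-connected residual network.

```python
import numpy as np

N_JOINTS = 16  # stacked hourglass predicts 16 2d joints per person


def detect_2d_pose(frame):
    """Placeholder for a stacked-hourglass 2d detector (stage 1, per frame).

    A real implementation would run a forward pass of the hourglass
    network; here we return dummy (x, y) coordinates for illustration.
    """
    return np.zeros((N_JOINTS, 2))


def lift_to_3d(pose_2d, weight, bias):
    """Placeholder for the 2d-to-3d lifting network (stage 2).

    The real model is a small fully-connected residual network; this
    single linear layer only illustrates the input/output shapes.
    """
    return pose_2d.reshape(-1) @ weight + bias  # shape (N_JOINTS * 3,)


# Dummy parameters with the right shapes for the sketch.
rng = np.random.default_rng(0)
W = rng.normal(size=(N_JOINTS * 2, N_JOINTS * 3))
b = np.zeros(N_JOINTS * 3)

frame = np.zeros((256, 256, 3))      # one video frame
pose_2d = detect_2d_pose(frame)      # stage 1: image -> 2d pose, (16, 2)
pose_3d = lift_to_3d(pose_2d, W, b)  # stage 2: 2d pose -> 3d pose
print(pose_2d.shape, pose_3d.reshape(N_JOINTS, 3).shape)  # (16, 2) (16, 3)
```

For a video stream you would run this loop once per frame: the 2d detector does the image understanding, and the lifting network is a comparatively cheap per-frame regression from 2d joint coordinates to 3d.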
You can find a pre-trained model at https://drive.google.com/file/d/0BxWzojlLp259MF9qSFpiVjl0cU0/view?usp=sharing. Decompress it at the repo root, and you can call python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh --epochs 1 --sample --load 4874200 to use this model.
So I should convert the video to 2d pose images using stacked hourglass (https://github.com/anewell/pose-hg-demo), which is written in Lua? Then I have to convert that to 3d pose?
Yes. Then you can use our code to convert to 3d pose.
Thank you, I will try Lua. Is it possible to get the real-world dimensions of the lines after skeletonization?
Can I use it via code instead of the command line? There seems to be no proper API to call, and I cannot easily find out what output I can get.
How do I get the skeleton from a realtime camera?