trrahul / densepose-video

Code to run densepose on video with detectron. https://github.com/facebookresearch/Detectron
GNU General Public License v3.0
62 stars 18 forks

How to use other pretrained models? #11

Open MuhammadAsadJaved opened 5 years ago

MuhammadAsadJaved commented 5 years ago

Hi, I am running inference on a video file with a ResNet model using the command below, and it works well.

python2 tools/infer_vid.py \
    --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml \
    --output-dir DensePoseData/infer_out/ \
    --wts https://s3.amazonaws.com/densepose/DensePose_ResNet101_FPN_s1x-e2e.pkl \
    --input-file filename

My first question: can I use the other networks mentioned in the paper, such as VGG-16? Are pre-trained weights for those networks also available for this task, like the ResNet weights above? If so, how can I use them? In the command above the model is specified with the --cfg and --wts arguments, so what would the links for those networks be? Or do I need to train them myself before I can use them?
My second question: is ResNet the fastest network for this task, or does another available network perform better on video files? Inference with ResNet is very slow.

trrahul commented 5 years ago

Hi, very sorry for the late reply. As mentioned in the readme, you can get the files from these links:

Configs: https://github.com/facebookresearch/DensePose/tree/master/configs
Model zoo: https://github.com/facebookresearch/DensePose/blob/master/MODEL_ZOO.md

To answer your second question: ResNet was the fastest when I ran my tests, but that was a while ago. I cannot say how the other models perform now.
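For example, switching models should just mean pointing --cfg and --wts at a matching config/weights pair from the model zoo. A sketch using the ResNet-50 variant (the exact config name and weights URL below follow the repo's naming pattern, but verify both against MODEL_ZOO.md before running):

```shell
# Hypothetical example: run video inference with the lighter ResNet-50 backbone.
# The config file and weights URL are assumed from the model zoo naming pattern;
# check MODEL_ZOO.md for the entries that actually exist.
python2 tools/infer_vid.py \
    --cfg configs/DensePose_ResNet50_FPN_s1x-e2e.yaml \
    --output-dir DensePoseData/infer_out/ \
    --wts https://s3.amazonaws.com/densepose/DensePose_ResNet50_FPN_s1x-e2e.pkl \
    --input-file filename
```

Note that --cfg and --wts must describe the same model: loading weights trained for one backbone with a config for another will fail when the network is built.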