facebookresearch / DensePose

A real-time approach for mapping all human pixels of 2D RGB images to a 3D surface-based model of the body
http://densepose.org

How to use other pretrained models? #183

Open MuhammadAsadJaved opened 5 years ago

MuhammadAsadJaved commented 5 years ago

Hi, I am using the ResNet model to run inference on a single image with the command below, and it works well.

python2 tools/infer_simple.py \
    --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml \
    --output-dir DensePoseData/infer_out/ \
    --image-ext jpg \
    --wts https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet101_FPN_s1x-e2e.pkl \
    DensePoseData/demo_data/demo_im.jpg

My first question: can I use the other networks mentioned in the paper, such as VGG16? Are pre-trained weights for them available, like they are for ResNet? If so, how do I use them? In the command above the model is specified via the --cfg and --wts arguments, so what would the config and weights links for those networks be? Or would I need to train them myself before I can use them?
My second question: is ResNet the fastest backbone for this task, or does another available network perform better on video?

vkhalidov commented 5 years ago

@MuhammadAsadJaved All the available pretrained models are listed on the Model Zoo page. Many aspects affect a model's execution speed, so there is no single "fastest" answer. Typically you pick the criteria that matter for your use case (speed, accuracy, memory footprint, etc.) and then optimize for those criteria under your constraints.
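
For illustration, switching models only requires pointing --cfg and --wts at a different Model Zoo entry. The example below is a sketch assuming the ResNet50 FPN e2e variant from the Model Zoo; the exact config filename and weights URL should be copied from the Model Zoo page rather than from here.

python2 tools/infer_simple.py \
    --cfg configs/DensePose_ResNet50_FPN_s1x-e2e.yaml \
    --output-dir DensePoseData/infer_out/ \
    --image-ext jpg \
    --wts https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet50_FPN_s1x-e2e.pkl \
    DensePoseData/demo_data/demo_im.jpg

Backbones that are not listed in the Model Zoo (e.g. VGG16) have no released DensePose weights, so you would need to add a config and train them yourself before running inference this way.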