Closed: AlbertoRemus closed this issue 2 years ago.
Curiously, the huge inference time I reported was obtained by running the script from Visual Studio Code. Testing today from a standard terminal gives a much better inference time of ~0.3 seconds per image, which drops to ~0.2 seconds without saving results, with the following command:
python main.py --dataset_name cub --dataset_dir path2mcmr/mcmr/datasets/cub/UCMR_CUB_data/cub/ \
--classes all \
--single_mean_shape \
--subdivide 4 \
--sdf_subdivide_steps 351 \
--use_learned_class \
--num_learned_shapes 1 \
--checkpoint_dir path2mcmr/mcmr/checkpoint/meanshape01 \
--log_dir log \
--pretrained_weights path2mcmr/mcmr/checkpoint/meanshape01/net_latest.pth \
--cam_loss_wt 2.0 \
--cam_reg_wt 0.1 \
--mask_loss_wt 100.0 \
--deform_reg_wt 0.005 \
--laplacian_wt 6.0 \
--laplacian_delta_wt 1.8 \
--graph_laplacian_wt 0.0 \
--tex_percept_loss_wt 0.8 \
--tex_color_loss_wt 0.03 \
--tex_pixel_loss_wt 0.005 \
--save_dir path2mcmr/mcmr/output \
--save_results \
--qualitative_results \
--faster
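As a side note, here is a minimal sketch of how the per-image forward time could be measured independently of any result saving. It assumes a generic PyTorch model and dataloader; `model`, `dataloader`, and `device` are hypothetical placeholders, not identifiers from the mcmr code:

```python
# Minimal timing sketch (generic PyTorch; `model`, `dataloader`, `device` are placeholders).
import time
import torch

def time_inference(model, dataloader, device):
    model.eval().to(device)
    timings = []
    with torch.no_grad():
        for batch in dataloader:
            batch = batch.to(device)
            if device.type == "cuda":
                torch.cuda.synchronize()  # finish pending GPU work before starting the clock
            start = time.perf_counter()
            _ = model(batch)              # forward pass only, no .obj saving or qualitative output
            if device.type == "cuda":
                torch.cuda.synchronize()  # wait for the GPU to finish before stopping the clock
            timings.append(time.perf_counter() - start)
    return sum(timings) / max(len(timings), 1)

# Example usage (names are placeholders):
# avg_s = time_inference(model, test_loader, torch.device("cuda:0"))
# print(f"average inference time: {avg_s:.3f} s/image")
```

Synchronizing before reading the clock matters because CUDA kernels launch asynchronously, so unsynchronized timings can be very different from the actual GPU work.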
That's strange. Maybe VS Code runs Python scripts with a debugger attached, which would slow down the differentiable renderer a lot, or perhaps the model was running entirely on CPU for some reason.
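If the CPU hypothesis needs checking, a quick sanity check along these lines (plain PyTorch, not mcmr-specific; `model` is a hypothetical placeholder for the network built by the test script) shows which device is actually in use:

```python
import torch

# Is CUDA visible to PyTorch at all?
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# Which device do the model parameters actually live on?
# (`model` is a hypothetical placeholder for the network built by the test script.)
# print({p.device for p in model.parameters()})
```

Watching nvidia-smi while the script runs is another easy way to see whether the GPU is doing any work.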
Hello, for each image of the CUB dataset the test script https://github.com/aimagelab/mcmr#cub takes around 10 seconds (even without saving the reconstructed .obj) on my NVIDIA GTX 1050 + Intel® Core™ i7-7700HQ CPU @ 2.80GHz × 8, compared to the few milliseconds on the NVIDIA GTX 1080 Ti + Intel Core i7-7700K stated in your paper. I would like to ask if you have any insights about it. The full list of parameters I used is reported above.
Thanks in advance