Closed AmyWan closed 5 years ago
Hello,
If you check the bash script in scripts/demo_lm.sh, you will see that it runs metrics.py with the argument '--visualize'. You will be able to visualize the results using metrics.py itself if you set this flag.
Regarding your second question, the network currently predicts 2048 points, and we have provided ground truth data for it. Only during evaluation do we sample 1024 points from the predicted output so as to be comparable to other methods. We have provided ground truth data for 1024 points as well.

Yes, I have visualized the 2048 points, and your demo is for evaluation. I have another question: how do I use the Pix3D dataset with your network? Is the code only for the rendered CAD images? Hoping for a reply, thanks.
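The evaluation-time subsampling described above (keeping 1024 of the 2048 predicted points so results are comparable with other methods) can be sketched roughly as follows; `sample_points` is a hypothetical helper for illustration, not code from the repository:

```python
import numpy as np

def sample_points(pred, n=1024, seed=0):
    """Randomly subsample n points from a predicted point cloud.

    pred: (N, 3) array of predicted points (here N = 2048).
    Returns an (n, 3) array without replacement, matching the
    1024-point ground truth used during evaluation.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(pred.shape[0], size=n, replace=False)
    return pred[idx]

# Example: subsample a dummy 2048-point prediction.
pred = np.random.rand(2048, 3)
sub = sample_points(pred)
print(sub.shape)  # (1024, 3)
```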
When I use the real data in Pix3D, I crop the image to 128x128 and use img_mask to extract the object from the complex background. The results I produce are bad. Could you share how you preprocessed yours?
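The mask-based cropping described here (zeroing out the background with the Pix3D mask, then taking a 128x128 crop around the object) might be sketched like this; `crop_with_mask` is a hypothetical helper and the repository's actual preprocessing may differ:

```python
import numpy as np

def crop_with_mask(img, mask, size=128):
    """Keep only masked (foreground) pixels, then crop a size x size
    window centred on the object's mask centroid.

    img:  (H, W, 3) uint8 image, with H, W >= size.
    mask: (H, W) binary array, 1 on the object, 0 on background.
    """
    fg = img * mask[..., None]               # zero out the background
    ys, xs = np.nonzero(mask)                # foreground pixel coords
    cy, cx = int(ys.mean()), int(xs.mean())  # object centroid
    half = size // 2
    H, W = mask.shape
    # Clamp the window so the crop stays inside the image.
    y0 = int(np.clip(cy - half, 0, H - size))
    x0 = int(np.clip(cx - half, 0, W - size))
    return fg[y0:y0 + size, x0:x0 + size]
```

A usage example with a synthetic image: a white 256x256 frame with a 60x60 object mask yields a 128x128 crop containing only the masked pixels.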
Could you please share the details of how to apply the network to real-world images?
Original question has a response here: https://github.com/val-iisc/3d-lmnet/issues/3#issuecomment-444863530 Follow-up question is a duplicate of https://github.com/val-iisc/3d-lmnet/issues/4.