Closed · GraceJary closed this issue 5 years ago
They are ground truth points_2d.
Thank you, I understand it now. By the way, how can I get a .npy file like the 'cat_pose.npy' used in the demo?
pose = np.load(os.path.join(demo_dir_path, 'cat_pose.npy'))
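One way to produce such a file for your own object is simply to save a pose matrix with `np.save`, so it can be loaded the same way the demo loads `cat_pose.npy`. This is a minimal sketch: the pose below is an illustrative 3x4 [R|t] matrix and the filename `my_object_pose.npy` is made up, not part of the repo.

```python
import numpy as np

# Illustrative 3x4 [R|t] pose: identity rotation, 0.5 m translation in z.
# Replace this with your object's real ground-truth pose.
pose = np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.5]])])

# Save it in the same format demo.py expects.
np.save('my_object_pose.npy', pose)

# Later, load it exactly as the demo does:
loaded = np.load('my_object_pose.npy')
assert loaded.shape == (3, 4)
```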
@GraceJary You can dump the pose to an .npy file, and the program can be changed so that you only need to input a picture and supply the point3d and bb8_3d data to get the output shown below.
When I use a new 3D model to render images, it seems to need /media/srt/dataset/pvnet-rendering/data/LINEMOD/redbox/training_range.txt, because the sample-pose function requires it. I don't know how to solve this problem.
@GraceJary I have not used my own model, but I have rendered the LINEMOD dataset to get pictures.
I have also rendered with the LINEMOD dataset and got results like yours. But I don't know how to render a new model when I lack the training_range.txt file.
You need to read the code and revise it.
If I only have the rendered images, I am not able to generate training_range.txt, so this confuses me.
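If the file really is just a list of image indices used for training (which is an assumption — check how the sampling code in pvnet-rendering reads it), one workaround is to generate it yourself from the number of rendered images. The function name `write_training_range` and the one-integer-per-line format below are assumptions for illustration:

```python
# Sketch: generate a training_range.txt listing image indices,
# assuming the file format is one integer index per line and that
# the rendered images are numbered 0..N-1. Verify this against the
# code in pvnet-rendering that consumes the file before relying on it.
def write_training_range(num_images, out_path='training_range.txt'):
    with open(out_path, 'w') as f:
        for idx in range(num_images):
            f.write('{}\n'.format(idx))

write_training_range(5)
print(open('training_range.txt').read().split())  # ['0', '1', '2', '3', '4']
```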
When I run demo.py, I am confused: why is point2d calculated from the pose file you loaded, and not from the pose the network predicts?
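For context, the 2D points come from projecting the 3D model points through the camera intrinsics and a pose, whether that pose is ground truth or predicted. A minimal sketch of that projection (the intrinsics `K`, the pose, and the points below are illustrative values, not the real LINEMOD calibration or cat model points):

```python
import numpy as np

def project(points_3d, K, pose):
    # pose is a 3x4 [R|t] matrix; points_3d is (N, 3).
    pts_cam = points_3d @ pose[:, :3].T + pose[:, 3]  # transform to camera frame
    pts_img = pts_cam @ K.T                           # apply intrinsics
    return pts_img[:, :2] / pts_img[:, 2:]            # perspective divide

# Illustrative intrinsics and pose (not the dataset's actual values).
K = np.array([[572.4, 0.0, 325.3],
              [0.0, 573.6, 242.0],
              [0.0, 0.0, 1.0]])
pose = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.0]])])
points_3d = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
points_2d = project(points_3d, K, pose)
```

Swapping the loaded ground-truth pose for the network's predicted pose in this projection is what would give the predicted 2D points instead.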