DC1991 / FS_Net

The code for the CVPR 2021 paper FS-Net
MIT License

Question about training process and evaluation #6

Open lolrudy opened 3 years ago

lolrudy commented 3 years ago

I ran the training code for all 6 objects in the NOCS dataset and trained a detection model on the CAMERA dataset. Except for the laptop, the other objects do not work well when evaluated with the code from Shape Prior Deformation. I'm wondering whether the hyperparameters or the training process should be changed for the other objects. Also, could you please release your evaluation code and trained model weights?

codewfun commented 3 years ago

Same problem. We trained the model on the preprocessed data following Shape Prior Deformation and evaluated it with the evaluation code from that repo, but we only got the following results, which are far from the paper's. I'm wondering if there are other tricks missing beyond the released model design and the shape-deformation data augmentation.

|                    | IoU 50 | IoU 75 | 5 deg, 5cm | 10 deg, 5cm |
|--------------------|--------|--------|------------|-------------|
| Paper result       | 92.2   | 63.5   | 28.2       | 64.6        |
| Our implementation | 47.7   | 33.3   | 19.3       | 41.8        |
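For reference, the degree/centimetre entries in this table follow the standard NOCS-style pose-error check: rotation error from the trace of the relative rotation, translation error as a Euclidean distance. A minimal sketch of that check (function names and thresholds here are illustrative; the real evaluation code also special-cases symmetric categories such as bottle, bowl, and can):

```python
import numpy as np

def pose_error(R_pred, t_pred, R_gt, t_gt):
    """Rotation error in degrees and translation error in the units of t."""
    cos_theta = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    trans_err = np.linalg.norm(t_pred - t_gt)
    return rot_err, trans_err

def within_threshold(R_pred, t_pred, R_gt, t_gt, deg=5.0, cm=5.0):
    """True if the pose is within both thresholds (translations in meters)."""
    rot_err, trans_err = pose_error(R_pred, t_pred, R_gt, t_gt)
    return bool(rot_err <= deg and trans_err <= cm / 100.0)
```

Note that this check is over predicted/ground-truth poses only; the symmetric-object handling in the actual evaluation scripts minimizes the rotation error over the symmetry axis before thresholding.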
codewfun commented 3 years ago

@lolrudy Did you get better results, and can you share them?

lolrudy commented 3 years ago

I did not get anything meaningful... The results are even worse than yours. I suspect there are bugs in the released code.

codewfun commented 3 years ago

@DC1991 Could you provide some advice? Thanks.

makangzhe commented 3 years ago

When I run gen_pts.py after converting the obj files to ply, I can't generate the labeled data. Can you help me? Can you share how you train this model? @codewfun

codewfun commented 3 years ago

I didn't use gen_pts.py or the ply objects. Instead, I used the labels and sampled model points from the Shape Prior Deformation repo. I don't think the pre-processing affects the performance much. The performance of our trained model is far from the paper results. I'm waiting for the author's reply.

lolrudy commented 3 years ago

We reimplemented FS-Net and found that it actually works. The results are similar to the paper's.

taeyeopl commented 3 years ago

@lolrudy Q1. May I ask what is different from the first version (with worse performance)? Q2. Can you share your reproduction code, including the training and evaluation code and pretrained weights? It would be really helpful.

lolrudy commented 3 years ago

Sorry, I can't share the code at the moment. We kept the network architecture and rewrote the training code. The evaluation code is from Shape Prior Deformation. There might be a problem in the computation of the rotation matrix from two vectors in the original code, but I'm not sure. Also, care is needed when recovering the translation and size from the network output.
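For context on the "rotation matrix from two vectors" step: the standard way to build the rotation that aligns one unit vector with another is Rodrigues' formula, and the (anti-)parallel edge cases are where such code typically goes wrong. A minimal sketch (this is not the repo's actual code; the function name and tolerances are my own):

```python
import numpy as np

def rotation_from_two_vectors(a, b):
    """Return R such that R @ (a/|a|) == b/|b| (Rodrigues' formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    c = float(np.dot(a, b))                    # cos(theta)
    if c > 1.0 - 1e-8:                         # already aligned
        return np.eye(3)
    if c < -1.0 + 1e-8:                        # anti-parallel: 180 deg about any axis orthogonal to a
        axis = np.cross(a, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-8:        # a was parallel to x; pick another axis
            axis = np.cross(a, np.array([0.0, 1.0, 0.0]))
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    v = np.cross(a, b)                         # rotation axis scaled by sin(theta)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])         # cross-product (skew-symmetric) matrix
    return np.eye(3) + K + K @ K / (1.0 + c)
```

If the degenerate branches are missing, the general formula divides by `1 + c`, which blows up for anti-parallel vectors; that is one plausible source of the kind of bug described above.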

codewfun commented 3 years ago

@lolrudy What do you mean by "there might be a problem in the computation of the rotation matrix from two vectors in the original code"? Could you share more details about this part?

codewfun commented 3 years ago

@lolrudy I tried fixing the rotation-matrix part and got better results, but there is still a gap to the paper result. Could you share your results? And did you use the detection results from Shape Prior Deformation or from YOLOv3 during evaluation?

dedoogong commented 2 years ago
@DC1991 , @lolrudy I also failed to reproduce the paper performance My train results are as below(I didn't use CAMERA, but used only REAL dataset) mAP
3D IoU at 25: 79.32
3D IoU at 50: 64.38
3D IoU at 75: 17.48
5 degree, 2cm: 0.04
5 degree, 5cm: 0.14
10 degree, 2cm: 1.36
10 degree, 5cm: 3.76

Please help me find the bug in the training code!

amaj17 commented 2 years ago

@dedoogong Can you share your evaluation code with me? I'm having trouble evaluating the results using the code from the Shape Prior Deformation repository, and it seems like you got it to work...