hzxie / Pix2Vox

The official implementation of "Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images". (Xie et al., ICCV 2019)
https://haozhexie.com/project/pix2vox
MIT License

One question on F-Score #76

Closed LiyingCV closed 3 years ago

LiyingCV commented 3 years ago

Greetings, in your paper there are two evaluation metrics, IoU and F-Score, but the F-Score is not included in your test code. Could you provide some details about how to compute the F-Score in the test code? Thank you in advance.

faridyagubbayli commented 3 years ago

Hi, I don't know the exact code used to report the results but in the Pix2Vox++ paper, a reference to this work was given when talking about F-score. The code for the referenced paper includes the F-score calculation.

LiyingCV commented 3 years ago

> Hi, I don't know the exact code used to report the results but in the Pix2Vox++ paper, a reference to this work was given when talking about F-score. The code for the referenced paper includes the F-score calculation.

Thanks for your help! I solved it.

YYTYTY commented 3 years ago

> Hi, I don't know the exact code used to report the results but in the Pix2Vox++ paper, a reference to this work was given when talking about F-score. The code for the referenced paper includes the F-score calculation.
>
> Thanks for your help! I solved it.

Hi, could you provide some details about how to compute the F-Score in the test code? Thank you in advance.

LiyingCV commented 3 years ago

> Hi, I don't know the exact code used to report the results but in the Pix2Vox++ paper, a reference to this work was given when talking about F-score. The code for the referenced paper includes the F-score calculation.
>
> Thanks for your help! I solved it.
>
> Hi, could you provide some details about how to compute the F-Score in the test code? Thank you in advance.

You can refer to this link https://github.com/lmb-freiburg/what3d and the issue https://github.com/lmb-freiburg/what3d/issues/1.
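For reference, the F-score used in what3d compares two sampled point clouds at a distance threshold τ: precision is the fraction of predicted points within τ of the ground truth, recall is the reverse, and F = 2PR/(P+R). Below is a minimal NumPy sketch of that metric; the function name `f_score` and the brute-force distance computation are my own illustration (what3d uses a KD-tree for speed), not the authors' exact code:

```python
import numpy as np

def f_score(pred, gt, tau=0.01):
    """F-score between point clouds pred (N, 3) and gt (M, 3) at threshold tau.

    Brute-force pairwise distances for clarity; a KD-tree query is the
    usual choice for large clouds, but the metric value is the same.
    """
    # (N, M) matrix of Euclidean distances between every pair of points
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()  # pred -> gt coverage
    recall = (d.min(axis=0) < tau).mean()     # gt -> pred coverage
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Identical clouds give F = 1 for any positive τ, and clouds farther apart than τ everywhere give F = 0, which is a quick sanity check for an implementation.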

YYTYTY commented 3 years ago

> Hi, could you provide some details about how to compute the F-Score in the test code? Thank you in advance.
>
> You can refer to this link https://github.com/lmb-freiburg/what3d and the issue https://github.com/lmb-freiburg/what3d/issues/1.

Thank you for your reply! In fact, I have tried that but failed, so I would like to know how you solved it.

LiyingCV commented 3 years ago

> Hi, could you provide some details about how to compute the F-Score in the test code? Thank you in advance.
>
> You can refer to this link https://github.com/lmb-freiburg/what3d and the issue https://github.com/lmb-freiburg/what3d/issues/1.
>
> Thank you for your reply! In fact, I have tried that but failed, so I would like to know how you solved it.

Could you tell me what difficulties you ran into?

YYTYTY commented 3 years ago

> Hi, could you provide some details about how to compute the F-Score in the test code? Thank you in advance.
>
> You can refer to this link https://github.com/lmb-freiburg/what3d and the issue https://github.com/lmb-freiburg/what3d/issues/1.
>
> Thank you for your reply! In fact, I have tried that but failed, so I would like to know how you solved it.
>
> Could you tell me what difficulties you ran into?

Thanks for your help! I solved it.

YYTYTY commented 3 years ago

I'm sorry to trouble you again. I want to know the F-score results in the test process; I can't reproduce the accuracy reported for Pix2Vox++. A little help from you would be greatly appreciated!

LiyingCV commented 3 years ago

> I'm sorry to trouble you again. I want to know the F-score results in the test process; I can't reproduce the accuracy reported for Pix2Vox++. A little help from you would be greatly appreciated!

Actually, we also get the wrong result: we tested the single-view setting and the value is 0.449, and we still have not found the reason. Besides, we find that the sampling is computed on the CPU, which wastes a lot of time; do you also run into this problem?

YYTYTY commented 3 years ago

> I'm sorry to trouble you again. I want to know the F-score results in the test process; I can't reproduce the accuracy reported for Pix2Vox++. A little help from you would be greatly appreciated!
>
> Actually, we also get the wrong result: we tested the single-view setting and the value is 0.449, and we still have not found the reason. Besides, we find that the sampling is computed on the CPU, which wastes a lot of time; do you also run into this problem?

Yes. NumPy cannot read a CUDA tensor, so it has to be converted to a CPU tensor first.
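For what it's worth, the CPU-bound step being discussed is turning the predicted occupancy grid into a point cloud before sampling; the device transfer itself is cheap. Here is a hedged NumPy sketch of that conversion — the function name `voxel_to_points`, the 0.5 threshold, and the fixed sample size are my own choices for illustration, not the evaluation code used in the paper:

```python
import numpy as np

def voxel_to_points(vol, threshold=0.5, n_points=8192, rng=None):
    """Convert a (D, H, W) occupancy grid into a sampled (n_points, 3) cloud.

    Voxel centres are normalised to the unit cube; occupied cells are
    sampled with replacement so the output size is fixed.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Coordinates of all occupied voxels, shape (K, 3)
    coords = np.argwhere(vol > threshold).astype(np.float64)
    if coords.shape[0] == 0:
        # Empty prediction: return a degenerate cloud at the origin
        return np.zeros((n_points, 3))
    # Shift to voxel centres and scale to [0, 1]^3
    coords = (coords + 0.5) / np.array(vol.shape)
    idx = rng.integers(0, coords.shape[0], size=n_points)
    return coords[idx]
```

This whole step is pure array indexing, so it is fast even on the CPU; the slow part in practice is usually the nearest-neighbour search of the F-score itself when done without a KD-tree.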

wangleishan commented 2 years ago

I also get the wrong result: I tested the single-view setting and the value is 0.415. Have you solved this?

LiyingCV commented 2 years ago

> I also get the wrong result: I tested the single-view setting and the value is 0.415. Have you solved this?

Yes, the single-view test value is different from the paper's.

wangleishan commented 2 years ago

> I also get the wrong result: I tested the single-view setting and the value is 0.415. Have you solved this?
>
> Yes, the single-view test value is different from the paper's.

I tested several other multi-view settings and the results are all different from those in the paper. In addition, do you know how to train the multi-view model? After training the whole network (without the context-aware fusion module) with single-view images for 250 epochs, what should I do to fix the encoder and decoder and train the rest of the network for 100 epochs? How do I fix the encoder and decoder?

LiyingCV commented 2 years ago

> I also get the wrong result: I tested the single-view setting and the value is 0.415. Have you solved this?
>
> Yes, the single-view test value is different from the paper's. I tested several other multi-view settings and the results are all different from those in the paper. In addition, do you know how to train the multi-view model? After training the whole network (without the context-aware fusion module) with single-view images for 250 epochs, what should I do to fix the encoder and decoder and train the rest of the network for 100 epochs? How do I fix the encoder and decoder?

Just follow the instructions in the paper: keep the merger on when training multi-view.
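On the "how to fix the encoder and decoder" question above: a common PyTorch pattern (a sketch of the general technique, not necessarily what the authors' training script does) is to set `requires_grad = False` on the frozen modules and hand only the remaining parameters to the optimizer. The `encoder`/`decoder`/`merger` modules below are small placeholders standing in for the real networks in the repo:

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the real encoder/decoder/merger.
encoder = nn.Linear(8, 8)
decoder = nn.Linear(8, 8)
merger = nn.Linear(8, 8)

# Freeze the encoder and decoder: no gradients are computed or applied
# to their weights during the second training stage.
for module in (encoder, decoder):
    for p in module.parameters():
        p.requires_grad = False

# The optimizer only receives the still-trainable parameters,
# so optimizer.step() cannot touch the frozen modules at all.
trainable = [p for m in (encoder, decoder, merger)
             for p in m.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Calling `module.eval()` on the frozen parts during this stage is also worth considering, since it stops batch-norm statistics from drifting while the weights themselves stay fixed.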