sshan-zhao / ACMNet

Adaptive Context-Aware Multi-Modal Network for Depth Completion

Are the released pre-trained models the best models? #7

Closed intellisense-team closed 3 years ago

intellisense-team commented 3 years ago

Hi! Thanks for the code and the pretrained model. I have tested the pretrained model on the validation set, but I couldn't get the results reported in the paper. The results are as follows: [image]

Besides, I trained the model for about 40 epochs and re-tested it on the validation set, but only got the following results: [image]

So, how can I get the results described in the paper? Thanks!

sshan-zhao commented 3 years ago


The RMSE is 1047? How did you evaluate the model? The results are wrong.
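For context, RMSE in KITTI depth completion is the root-mean-square error in millimetres, computed only over pixels that have valid ground truth. A minimal sketch (the helper name and toy values are hypothetical, not from this repository):

```python
import math

def rmse(pred, gt):
    # Root-mean-square error over pixels with valid ground truth (gt > 0);
    # invalid pixels (gt == 0) are excluded from the average.
    diffs = [(p - g) ** 2 for p, g in zip(pred, gt) if g > 0]
    return math.sqrt(sum(diffs) / len(diffs))

pred = [1000.0, 2000.0, 3000.0]   # predicted depths (mm)
gt   = [1100.0, 1900.0, 0.0]      # 0 marks a pixel with no ground truth
print(rmse(pred, gt))  # 100.0
```

An RMSE of 1047 mm on the KITTI validation set would be far above the paper's reported range, which is why the result looks wrong.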

intellisense-team commented 3 years ago


I just built the same environment and ran the script: `python val.py --model test --channels 64 --model_path model_64.pth --clip --knn 6 6 6 --nsamples 10000 5000 2500 --flip_input`.

sshan-zhao commented 3 years ago


Hi, I just evaluated the model again using the code on my machine (which should be the same as the repository) and got these results:
205.35173 762.745 0.8842766 2.1444786 0.997923 0.99935645 0.9997207 0.011058769
Did you modify the code?

sshan-zhao commented 3 years ago


In addition, which version of PyTorch do you use? Different PyTorch versions sometimes use different functions.

intellisense-team commented 3 years ago

Hi! I just re-ran the evaluation script. Nothing was changed, but this time I got this result: [image] This confuses me. BTW, I didn't modify the code, and the PyTorch version is 1.2. The only thing different from the README is that the Ubuntu version is 18.04.

intellisense-team commented 3 years ago


@sshan-zhao

sshan-zhao commented 3 years ago


The method randomly samples 10000, 5000, and 2500 points, which might cause this issue, I guess. I have never run into it myself.
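Random point sampling at evaluation time means the metrics can differ slightly between runs unless the random seeds are fixed. A minimal sketch of the idea (not ACMNet's actual code; `sample_points` is a hypothetical helper):

```python
import random

def set_eval_seed(seed=0):
    # Fix the RNG so the random subsampling is identical on every run.
    random.seed(seed)

def sample_points(valid_indices, nsamples):
    # Randomly pick `nsamples` sparse-depth points without replacement.
    return random.sample(valid_indices, nsamples)

valid = list(range(20000))        # pretend there are 20000 valid depth pixels
set_eval_seed(0)
a = sample_points(valid, 10000)
set_eval_seed(0)
b = sample_points(valid, 10000)
print(a == b)  # True: same seed -> same sample -> same metrics
```

In a real PyTorch evaluation one would also want to seed `numpy.random` and `torch.manual_seed`, since the sampling may draw from any of those RNGs.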

intellisense-team commented 3 years ago


Thanks!!!