[Open] XIAN-XIAN-X opened this issue 7 months ago
Hello, I'd like to ask: when reproducing, did you run into connection timeouts while downloading model files from the Hugging Face model hub (huggingface.co)? I suspect my server cannot reach that site, but even after manually downloading the relevant weights and placing them in /.cache/torch/hub/checkpoints/, I still get an error.
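A minimal sketch of two things worth checking in this situation. `HF_ENDPOINT` is an environment variable honored by `huggingface_hub` for redirecting downloads to a mirror; the mirror URL below is only an example. PyTorch resolves its checkpoint cache from `TORCH_HOME` (default `~/.cache/torch`), so manually downloaded weights need to land in `$TORCH_HOME/hub/checkpoints` for `torch.hub` to find them:

```shell
# If huggingface.co is unreachable, huggingface_hub honors HF_ENDPOINT,
# so downloads can be pointed at a mirror (example URL, not an endorsement)
export HF_ENDPOINT=https://hf-mirror.com

# torch.hub looks for cached checkpoints under $TORCH_HOME/hub/checkpoints;
# TORCH_HOME defaults to ~/.cache/torch, so the usual location is
# ~/.cache/torch/hub/checkpoints
CKPT_DIR="${TORCH_HOME:-$HOME/.cache/torch}/hub/checkpoints"
echo "$CKPT_DIR"
```

If the error persists with weights in the right directory, the filename may not match what `torch.hub` expects, so comparing against the URL in the stack trace is worth a try.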
Hi, I did not run into this problem.
Very interesting, thank you for your work.
I have briefly checked it and it is consistent with what you said.
Have you tried using other depth estimation models to train on this particular partition?
The author of this issue trained Monodepth2 (ResNet-34) on this particular dataset; you can check that.
Thanks to the authors for their outstanding contribution. I have successfully reproduced the results of the paper. However, I have encountered new confusion.
At first, I failed to reproduce the results of the paper. However, I discovered from the authors' comments that using the previous version could reproduce the results.
So I obtained the previous version by running `git checkout 6a1e997`.
Here are the experimental results of my ResNet50 model at a resolution of 192x640.
Compared to the new version, the difference in the previous version that affects the experimental results is the `train_files.txt` file in `eigen_zhou`, which contains about 71k images. However, the training set mentioned in the authors' paper contains only 26k images.
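For anyone wanting to verify the split sizes themselves, a small sketch. The `splits/eigen_zhou/train_files.txt` path assumes the repo follows the monodepth2-style layout; adjust if this repo places the split file elsewhere:

```shell
# Count non-empty entries in a split file (one training sample per line),
# so the two revisions' train splits can be compared directly
count_split() {
    # grep -c . counts lines containing at least one character
    grep -c . "$1"
}

# Usage (inside a checkout of the repo; path is an assumption):
#   git checkout 6a1e997 -- splits/eigen_zhou/train_files.txt
#   count_split splits/eigen_zhou/train_files.txt
```

`git checkout <commit> -- <path>` restores only that one file from the old revision, which makes it easy to diff the split against the current version without switching the whole tree.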
In addition, I trained Monodepth2 with the previous train split (71k images) and achieved results similar to SQLdepth's.
I am very confused by these experimental results. My questions are:
- Did the authors use a training set containing 71k images during training?
- How should we interpret Monodepth2 achieving similar results when trained on a dataset containing 71k images? @hisfog Looking forward to your reply. Thanks a lot!
I also want to know about this issue. Have you cleared up your doubts?
Not yet.