Closed ghost closed 4 years ago
Hi, there shouldn’t be any modifications needed across different datasets. You might want to confirm the resolution, though, as I provided two different versions for CityScapes. In the original paper, I used the one with the smaller resolution.
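A quick sanity check before training is to load one preprocessed sample and print its spatial size, so you know which resolution version you actually have. The helper below is just a sketch: the array layouts `(H, W, C)` / `(C, H, W)` and the synthetic sample are assumptions, so adapt it to however your dataset dump is stored.

```python
import numpy as np

def resolution_of(sample: np.ndarray) -> tuple:
    """Return (height, width) of an image array laid out as (H, W, C) or (C, H, W)."""
    if sample.ndim != 3:
        raise ValueError("expected a 3-D image array")
    # Heuristic: a leading axis of size <= 4 is treated as the channel axis.
    if sample.shape[0] <= 4:                     # (C, H, W)
        return sample.shape[1], sample.shape[2]
    return sample.shape[0], sample.shape[1]      # (H, W, C)

# Synthetic array standing in for one loaded sample (hypothetical size):
fake = np.zeros((128, 256, 3))
print(resolution_of(fake))  # (128, 256)
```

If the printed size doesn't match the resolution used in the paper, that alone can explain a gap in the reported numbers.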
The improved performance is foreseeable: as I explained in the README, this code has been further optimized for readability, and I also observed some improvements over the numbers in the original paper. However, reports from other authors all agree that the relative ranking across different approaches stays the same.
One suggestion I always give to authors trying to reproduce my results: you really don’t have to achieve exactly the same numbers as the paper. This is not a benchmark; just keep the experimental setting consistent, run all the models yourself, and you are good to go. In my experience, the numbers will always vary across hardware platforms, implementation details, and PyTorch versions.
Thanks for your reply.
Hello, I ran model_segnet_single.py on the provided CityScapes dataset (batch size set to 8). The relative error is around 24, which is much better than the value of around 34 reported in your paper. Are there any other modifications to the code that should be made when using CityScapes rather than NYUv2?