Hi and congratulations on your work. It is astonishing how much ground UDA has gained over the years.
Looking over the code, I noticed that you changed some parts of the last layers of DeepLabv2. In particular, you added group normalization, ReLU activations, and dropout in the last layers. What was the inspiration behind those changes, and how much did they contribute to the results? Similar methods have shown much improved results using DeepLabv3. Is it fair to compare your improved model with other DeepLabv2 approaches without explaining how much those changes contributed to the result?
Thank you for your time and consideration. I am looking forward to your response.