mtchiu2 closed this issue 2 years ago
Thanks for your interest. We are still preparing the code and will release it as soon as we can.
Thanks for the response.
I'm curious about the details of the speed and memory evaluation for the various models. Comparing your Table 1 with MagNet's Table 8, I noticed that for some models (e.g. MagNet-Fast) both tables report the same mIoU, memory usage, and speed, while for others (e.g. DeepLabv3+ and FCN-8s) the mIoUs are the same but the memory usages are vastly different. Am I missing some detail of the comparison?
I also tried testing MagNet's memory usage on a single 2080Ti in their pip environment, but got 2800 MB, which differs from both your reported numbers and the original paper's. Does this mean GPU usage can also depend on the machine?
1) DeepLabv3+ and FCN-8s are common methods, and there are two inference strategies for them (sketched in the code below): local inference, which takes small patches as input, and global inference, which takes down-scaled images as input. Since the patch size and the down-scaled image size differ, the memory usages differ as well (details can be found in GLNet's footnote).
2) Table 1 shows that different inputs lead to different mIoU; for example, DeepLabV3+ gets 63.5 with global inference and 69.69 with local inference.
3) We use the official results of MagNet. However, we did find that GPU usage depends on the machine. I guess the CPU and bus bandwidth can also affect the memory usage.
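For concreteness, here is a minimal PyTorch sketch of the two strategies. `model`, the 512-pixel patch size, and the 0.25 downscale factor are illustrative placeholders, not the exact settings from the paper (those follow GLNet's footnote):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def global_inference(model, image, scale=0.25):
    # Downscale the whole image, run one forward pass, upsample the logits.
    small = F.interpolate(image, scale_factor=scale,
                          mode="bilinear", align_corners=False)
    logits = model(small)
    return F.interpolate(logits, size=image.shape[-2:],
                         mode="bilinear", align_corners=False)

@torch.no_grad()
def local_inference(model, image, patch=512):
    # Slide over the full-resolution image in small patches; peak memory
    # scales with the patch size rather than the image size.
    _, _, h, w = image.shape
    out = None
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[:, :, y:y + patch, x:x + patch]
            logits = model(tile)  # assumes output spatial size == input size
            if out is None:
                out = image.new_zeros(image.shape[0], logits.shape[1], h, w)
            out[:, :, y:y + patch, x:x + patch] = logits
    return out
```

Because the global pass sees one small down-scaled tensor while the local pass sees fixed-size patches, their peak memory (and their mIoU, as in Table 1) naturally differ.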
Thank you very much for the quick response; it is much appreciated.
Just to clarify: do you mean you use the mIoU of all models (global, local, UHR) from MagNet? Do you also use their memory usage numbers for some of the models? I see that the GPU usage for the UHR methods matches their paper, but why do only some global/local models have different GPU usage despite the same mIoUs?
1) MagNet is a UHR method, so I only use its UHR result. 2) For the UHR methods, we use the official results. For some global/local methods, we use the models from the mmsegmentation repo to test their memory usage ourselves.
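For anyone trying to reproduce such measurements, here is a rough sketch of one way to record peak GPU memory for a forward pass with PyTorch. This is our illustration under an assumed input shape, not necessarily the exact protocol used for Table 1:

```python
import torch

@torch.no_grad()
def peak_memory_mb(model, input_shape=(1, 3, 1024, 1024), device="cuda"):
    # Measure the peak tensor memory allocated during one forward pass.
    model = model.to(device).eval()
    dummy = torch.randn(*input_shape, device=device)
    torch.cuda.reset_peak_memory_stats(device)
    model(dummy)
    torch.cuda.synchronize(device)
    # max_memory_allocated counts tensor allocations only; nvidia-smi also
    # includes the CUDA context and the allocator's cached blocks, which is
    # one reason reported numbers differ across machines and environments.
    return torch.cuda.max_memory_allocated(device) / 1024 ** 2
```

The gap between allocator statistics and nvidia-smi readings is consistent with the machine-dependent numbers discussed above.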
Hi,
Thank you for your interesting work. Will the code be released any time soon?