LikeLy-Journey / SegmenTron

Supports PointRend, Fast_SCNN, HRNet, DeepLabv3_plus (xception, resnet, mobilenet), ContextNet, FPENet, DABNet, EdaNet, ENet, ESPNetv2, RefineNet, UNet, DANet, DFANet, HardNet, LEDNet, OCNet, EncNet, DUNet, CGNet, CCNet, BiSeNet, PSPNet, ICNet, FCN, DeepLab

reproduce mIoU #37

Status: Closed (closed by changwenkai101 4 years ago)

changwenkai101 commented 4 years ago

Thank you for your project. We deployed it on a newly installed 8-GPU server, intending to run all the demos. Without changing the configs, we tested three of them: DeepLabv3_plus_resnet101_cityscape (≈78.28 mIoU), DFANet_xceptionA (≈59.10), and Fast_SCNN (≈60.08). The latter two are far from the original results, so we stopped the experiments.

For well-known reasons, results from AI papers are not easy to reproduce, but we noticed that in a previous reply you mentioned a demo's configuration needs to be modified to mode = 'testval'. Does each demo require some code changes in order to reach the mIoU listed in the paper or in this project? Or do the configs need no changes at all, and the low mIoU is simply because the original results are hard to reproduce?

LikeLy-Journey commented 4 years ago
  1. Fast_SCNN can easily reproduce the result reported in the paper using our configs; you just need to set the total batch size to 12. The batch size in the config is the number of images per single GPU (see the sketch after this list). #18
  2. I didn't train DFANet, and I also could not reproduce the speed the paper mentions. You may need to dive into the paper for a better result.
  3. You do not need to pay attention to mode if you train on the Cityscapes dataset, because the image size is always (1024, 2048).
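For concreteness, here is a minimal sketch of the batch-size arithmetic from point 1. The function name and the per-GPU default of 4 are illustrative assumptions, not values taken from the project; the point is only that the config value is multiplied by the GPU count.

```python
# Minimal sketch (not SegmenTron's actual API): relating the per-GPU
# batch size found in the yaml configs to the total batch size that
# determines training behavior. All names here are illustrative.

def total_batch_size(imgs_per_gpu: int, num_gpus: int) -> int:
    """Effective batch size = per-GPU images x number of GPUs."""
    return imgs_per_gpu * num_gpus

# On an 8-GPU server with a hypothetical per-GPU config value of 4,
# the effective batch size is far from the 12 that reproduces the
# Fast_SCNN paper result:
print(total_batch_size(imgs_per_gpu=4, num_gpus=8))  # 32

# Combinations that give the recommended total of 12:
print(total_batch_size(imgs_per_gpu=3, num_gpus=4))  # 12
print(total_batch_size(imgs_per_gpu=2, num_gpus=6))  # 12
print(total_batch_size(imgs_per_gpu=6, num_gpus=2))  # 12
```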
changwenkai101 commented 4 years ago

Thank you very much for your reply. Following your suggestion, we reduced the batch size for Fast_SCNN and achieved the original accuracy. Similarly, Deep_lab_mobile came out almost the same as the original.