Closed ydhongHIT closed 3 years ago
@ydhongHIT Please try the pytorch1.7 branch and you could achieve much better results on both PASCAL-Context and ADE20K.
Hi, I find that in the log of the pascal_context model which achieves 55.11% mIoU, the number of val images is about 1270. Shouldn't it be 5105? Or maybe I'm misunderstanding something.
@ydhongHIT 1270 is the number of batches; if you test 4 images per batch, it's correct.
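A quick sanity check of the arithmetic above: with the 5105 PASCAL-Context validation images and 4 images per batch, the batch count comes out close to the ~1270 shown in the log (the exact value can differ slightly depending on `drop_last` or per-GPU sharding in the data loader).

```python
import math

num_val_images = 5105  # PASCAL-Context validation set size
batch_size = 4

num_batches = math.ceil(num_val_images / batch_size)
print(num_batches)  # 1277, roughly the ~1270 reported in the log
```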
Thanks for your reply. I have updated to pytorch1.7 but still cannot reproduce the results. I get about 51 (single scale) and 53 (multi-scale and flip) on pascal_context, while the paper reports 54.8; the numbers are similar to what I got with pytorch1.3. My setup: resnet101-baseocr with multi-grid, batch size 16 on 8 GPUs with SyncBN, 120 epochs (about 37k iterations; the paper uses 30k), no OHEM, and no random brightness augmentation.
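For reference, the "120 epochs ≈ 37k iterations" conversion above can be sketched as follows. The PASCAL-Context training-set size of 4998 images is an assumption here (it is not stated in the thread); the result lands at ~37k, consistent with the comment.

```python
# Epochs-to-iterations conversion for the training schedule above.
# train_images = 4998 is an assumed PASCAL-Context train-set size.
train_images = 4998
batch_size = 16
epochs = 120

iters_per_epoch = train_images // batch_size  # drop the incomplete last batch
total_iters = iters_per_epoch * epochs
print(total_iters)  # 37440, i.e. about 37k
```

By the same arithmetic, the paper's 30k iterations at batch size 16 would correspond to roughly 96 epochs.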
@ydhongHIT Please try HRNet+OCR first to check whether you can reproduce that performance; we have not tested ResNet+OCR with PyTorch 1.7.
Which version did you use for the experiments in the paper? PyTorch 0.4?
All the results in the paper are based on PyTorch 0.4.1.
Hi, I want to reproduce the results of the OCR paper, specifically on PASCAL-Context and ADE20K. Should I use the HRNet-OCR repo or this repo? In fact, I followed the default settings of HRNet-OCR and just replaced HRNet with resnet101, but I cannot reproduce the results on PASCAL-Context (54.8% mIoU) or ADE20K (45.3% mIoU).