Okay, I saw that a similar issue (https://github.com/pmj110119/RenderOcc/issues/10) was already posted. I will try to reproduce this!
Have you been using an earlier commit?
There were bugs in the early versions, but the results should be reproducible with the latest commit.
Hi, has anyone been able to reproduce the results from the paper? I am using the same config and the same total batch size of 16 on 4x NVIDIA RTX A6000 (samples_per_gpu=4), with the same versions: torch==1.10.1+cu111, torchvision==0.10.1, and mmcv-full==1.6.0. The training loss reaches about 0.6 at epoch 1, then starts to fluctuate and never converges. The eval mIoU is around 3. Thanks!
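For reference, here is a minimal sketch of how a total batch size of 16 on 4 GPUs maps onto the per-GPU setting in an mmcv-style config. The field names follow the usual mmdet3d convention and are assumptions on my part, not copied from the RenderOcc config, so adjust them to match the actual config file:

```python
# Minimal sketch (assumed mmcv/mmdet3d-style config fields, not the exact RenderOcc layout).
# With 4 GPUs, samples_per_gpu=4 gives an effective total batch size of 4 x 4 = 16,
# matching the setting described in the paper.
data = dict(
    samples_per_gpu=4,   # per-GPU batch size
    workers_per_gpu=4,   # dataloader workers per GPU (value assumed, tune as needed)
)
```

Training is then launched on 4 GPUs with the repo's distributed training script so that the effective batch size stays at 16.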