We reimplemented the Deeplabv3+ model and ran it with the following configuration.
However, the final mIoU we obtain is only 78.9%, while the DecoupleNet paper reports 80.7% mIoU. How can we reproduce that result? Due to GPU memory limitations, we train with AMP (automatic mixed precision) to reduce memory usage. Could AMP be the main reason for the performance gap? We would appreciate help from the authors or anyone else who has reproduced this result.
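For context on why AMP could matter: in fp16, very small gradient values underflow to zero, which is why AMP frameworks (e.g. PyTorch's `torch.cuda.amp.GradScaler`) apply loss scaling. The snippet below is a minimal stdlib-only sketch of this effect, not code from either repo; the gradient value and scale factor are illustrative assumptions.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE half precision ('e' struct format)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

grad = 1e-8          # illustrative tiny gradient (assumption, not from the repo)
scale = 2 ** 16      # typical initial loss scale used by GradScaler

# Without loss scaling: the gradient underflows to zero in fp16.
print(to_fp16(grad))                 # 0.0 -- the update is silently lost

# With loss scaling: scale before casting, unscale in full precision.
scaled = to_fp16(grad * scale)       # representable in fp16
recovered = scaled / scale           # close to the original 1e-8
print(recovered)
```

If the reference training was done in fp32, small differences like this can shift the final mIoU slightly, though typically well under the ~1.8-point gap described above, so other configuration differences (crop size, batch size, schedule, backbone weights) are worth checking too.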