Closed. Turlan closed this issue 4 years ago.
I think the result might be normal; adaptation results have some perturbation. In my experiments I have obtained results both better and worse than 44.3. I think 43.6 is a little low but still makes sense. You can run the training again to see whether it stays the same.
Thanks, I will try again.
I tried again and the result is still not right, so I am now training a third time. For the second run, I plotted the mIoU curve on the Cityscapes val set and found that overfitting seems severe: the best result comes at around 40k iterations, not at the suggested 80k. Is this normal too? Is it OK to just take the 40k model as the best model for the subsequent SSL training?
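For reference, mIoU on the val set is the per-class intersection-over-union TP/(TP+FP+FN), averaged over classes. A minimal NumPy sketch (my own illustration, not this repository's evaluation code), assuming flattened integer label maps and an ignore index of 255:

```python
import numpy as np

def compute_miou(preds, labels, num_classes, ignore_index=255):
    """Mean intersection-over-union from flattened prediction/label arrays."""
    mask = labels != ignore_index
    preds, labels = preds[mask], labels[mask]
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(labels * num_classes + preds,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(np.float64)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)   # guard against div-by-zero for absent classes
    return iou[union > 0].mean()      # average only over classes that appear

# Toy example: 2 classes, 4 pixels.
preds = np.array([0, 1, 1, 0])
labels = np.array([0, 1, 0, 0])
print(compute_miou(preds, labels, num_classes=2))
```

Evaluating each saved checkpoint with a function like this every 10k iterations gives the curve you described.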
The overfitting problem does exist, but in my experiments it only appears after 80000 iterations. The results at every 10000 iterations are: 10000: 42.77, 20000: 43.13, 30000: 45.31, 40000: 44.51, 50000: 44.03, 60000: 44.54, 70000: 43.96, 80000: 44.22, 90000: 42.35, 100000: 41.69, 110000: 41.35. I repeated the experiments several times and found that 70000 or 80000 iterations give stable performance. I don't suggest picking the single best checkpoint; that is not what I did when I wrote the paper. But when you do self-training, you will find there is no overfitting problem.
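As a quick sanity check, the curve above can be summarized programmatically. This sketch just copies the numbers listed in the comment and compares the peak checkpoint against the suggested fixed stopping point:

```python
# Validation mIoU per iteration, copied from the numbers reported above.
miou = {10000: 42.77, 20000: 43.13, 30000: 45.31, 40000: 44.51,
        50000: 44.03, 60000: 44.54, 70000: 43.96, 80000: 44.22,
        90000: 42.35, 100000: 41.69, 110000: 41.35}

best_iter = max(miou, key=miou.get)   # checkpoint with the highest mIoU
print(best_iter, miou[best_iter])     # → 30000 45.31
print(80000, miou[80000])             # the suggested stable stopping point
```

The gap between the peak (45.31 at 30k) and the fixed 80k checkpoint (44.22) illustrates why the author reports the stable stopping point rather than the per-run maximum.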
I followed your training steps: I took the provided DeepLabV2 model as initial weights, used the provided translated images (GTA5 as CityScapes, DeepLab) as source data, and used the Cityscapes training set as target data. However, I only got 43.6 mIoU while I should get 44.3.