The official code for the paper: https://openreview.net/forum?id=_PHymLIxuI
MIT License
Thanks for the earlier answer. Do you have any test results from training directly on ADE20K without classification pre-training? #15
I'm using your model directly, with the configuration unchanged, only setting the pre-trained model to None. Do you think 80,000-iteration training is reasonable?
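For reference, the change described above might look like the following in an mmsegmentation-style Python config. All file and field names here are illustrative assumptions, not the repository's actual config:

```python
# Hypothetical mmsegmentation-style config fragment (names are illustrative).
_base_ = ['./base_segmentation_config.py']  # assumed base config

model = dict(
    backbone=dict(
        # Train from scratch: drop the classification pre-training that the
        # original config would normally load via an 'init_cfg' checkpoint.
        init_cfg=None,  # was e.g. dict(type='Pretrained', checkpoint=...)
    )
)

# The 80,000-iteration schedule mentioned in the question.
runner = dict(type='IterBasedRunner', max_iters=80000)
```

Whether 80k iterations is enough without pre-training is exactly the open question here; from-scratch training typically needs either more iterations or a different learning-rate schedule to match pre-trained results.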