hellochick / ICNet-tensorflow

TensorFlow-based implementation of "ICNet for Real-Time Semantic Segmentation on High-Resolution Images".

Optimizing training #34

Open manuel-88 opened 6 years ago

manuel-88 commented 6 years ago

Hey, during training the network's loss fluctuates a lot and does not converge. How can I improve this? Do you have any suggestions for the learning rate, the optimizer, or anything else I could try?

Thanks in advance
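For what it's worth, segmentation networks of this kind are commonly trained with a polynomial learning-rate decay rather than a fixed rate, which often smooths out a fluctuating loss. A minimal sketch of that schedule (the `base_lr`, `max_steps`, and `power` values here are illustrative, not the repo's actual settings):

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    """Polynomial learning-rate decay, common in semantic segmentation.

    Decays smoothly from base_lr at step 0 down to 0 at max_steps.
    """
    return base_lr * (1.0 - step / max_steps) ** power

# Illustrative values only: decay 1e-3 over 100 steps.
schedule = [poly_lr(1e-3, s, 100) for s in (0, 50, 100)]
```

In TensorFlow 1.x this corresponds to `tf.train.polynomial_decay`; pairing it with plain momentum SGD is a common baseline worth comparing against Adam.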

hellochick commented 6 years ago

@manuel-88, I think the most important thing is to train the un-pruned model first, and then do model compression based on the L1 norm (as described in the paper). The training code I provided trains directly on the pruned model (half-sized filters), so the performance is limited. I wonder if somebody else can release a handy API to compress the model.
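The L1-norm compression referred to here ranks each layer's output filters by the L1 norm of their weights and keeps only the strongest ones. A minimal NumPy sketch of that ranking step (the function name and `keep_ratio` are illustrative, not an API from this repo):

```python
import numpy as np

def prune_filters_l1(weights, keep_ratio=0.5):
    """Rank conv filters by L1 norm and keep the strongest fraction.

    weights: kernel of shape (kh, kw, in_ch, out_ch), TensorFlow layout.
    Returns the pruned kernel and the indices of the kept filters.
    """
    # L1 norm of each output filter: sum of absolute kernel values.
    norms = np.abs(weights).sum(axis=(0, 1, 2))
    n_keep = max(1, int(weights.shape[-1] * keep_ratio))
    # Indices of the n_keep largest norms, kept in original order.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[..., keep], keep
```

Note that after pruning a layer's output filters, the *input* channels of the next layer's kernel must be sliced with the same `keep` indices, which is what makes a general-purpose compression API fiddly to write.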

MikeyLev commented 6 years ago

Hi @hellochick, can you explain "The training code I provided is to directly train on the pruned model"? How do you know the original filter sizes? Secondly, you provide code for training the bnnomerge model, which is also the pre-merge version and should therefore have higher performance. I am a bit confused there...

hellochick commented 6 years ago

Hi @MikeyLev, take a look at the paper: the authors say they pruned the filters to half their size, so before pruning the filter sizes are double what you see here. Does that make sense? As for the bnnomerge model, please see issue #32; there is a problem there I have not solved yet either.

manuel-88 commented 6 years ago

Thank you for uploading the code to train the unpruned model. I trained it for 200,000 iterations, but the result on the validation set is only 59%. Regarding the unsolved problem you mentioned above: is it better to train the model without BN?

I'm also confused about what this bnnomerge model exactly is. Is it a model trained with BN and afterwards converted to a model without BN? @hellochick
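For context, "merging" BN usually refers to folding the batch-norm statistics into the preceding convolution's weights so the BN op disappears at inference time; a bnnomerge checkpoint would then be the weights before that folding. A minimal NumPy sketch of the folding itself (variable names are illustrative):

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv's weights.

    w: conv kernel of shape (kh, kw, in_ch, out_ch); b: bias (out_ch,).
    After folding, conv(x, w_f) + b_f equals BN(conv(x, w) + b).
    """
    scale = gamma / np.sqrt(var + eps)   # per-output-channel BN scale
    w_folded = w * scale                 # broadcasts over the last axis
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded
```

This changes nothing mathematically at inference, it only removes the separate BN op, so the merged and no-merge models should give identical outputs once folding is done correctly.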