wangbofei11 opened this issue 7 years ago
Hey @wangbofei11, there are a few possibilities. One is to train with `--update-mean-var` and freeze beta/gamma first. After some period, start to train the beta/gamma variables using `--train-beta-gamma`.
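The two-phase idea above amounts to choosing which variables the optimizer may update. A minimal sketch of that selection logic, assuming the batch-norm scale/offset variables have `beta`/`gamma` in their names (the repo's actual variable names may differ):

```python
def select_trainable(all_var_names, train_beta_gamma):
    """Return the variables to optimize.

    When train_beta_gamma is False, the batch-norm beta/gamma
    variables are filtered out, i.e. they stay frozen.
    """
    if train_beta_gamma:
        return list(all_var_names)
    return [v for v in all_var_names
            if 'beta' not in v and 'gamma' not in v]

names = ['conv1/weights', 'conv1/bn/beta', 'conv1/bn/gamma', 'fc/weights']
print(select_trainable(names, train_beta_gamma=False))
# -> ['conv1/weights', 'fc/weights']
```

With `--update-mean-var` the batch-norm moving mean/variance statistics are still updated each step even though beta/gamma are frozen; `--train-beta-gamma` then adds beta/gamma back into the trainable set.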
@hellochick Thank you. I will have a try. By the way, I see you got a good result on the NYU indoor images. Did you start your training from the pre-trained Cityscapes model provided by the author, or without any pre-trained model?
Hello @wangbofei11, I started my training from the pre-trained model of Cityscapes, and then trained on the ADE20k dataset (I use 27 classes instead of 150 classes).
Hello @hellochick, how can I train on another dataset from scratch, i.e. without starting from a pre-trained model? I checked train.py, and it seems that I need either something in the snapshots directory or an existing model to load. To my understanding, the snapshots directory is a place for snapshots of the model during training. So how can I train the model at the very beginning?
And by the way, I am a newbie in this area (both Python/TensorFlow and the theory of deep neural networks). The only experience I have comes from roughly going through Stanford's CS231n lectures online. So I have some basic questions, maybe a little stupid.
And thank you very much for your implementation.
Yes, @wangbofei11, of course you can train the model from the beginning, but I suggest you load the ImageNet pre-trained model first, or it cannot recognize anything.
For your questions, here are my opinions:

1. `tools.py` is used for visualization; it has no relationship with the training process.
2. The `max_keep` variable decides the number of checkpoints to keep, and `train.py` will automatically detect the latest checkpoint.
3. `IMG_MEAN` is calculated on the PASCAL dataset, and we use `IMG_MEAN` to shift pixel values from 0-255 to roughly -128-128, just like normalization.

If you have another question, feel free to ask me.
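The mean-subtraction step in point 3 can be sketched as below. The BGR values here are the widely used PASCAL/ImageNet means and are an assumption; check the `IMG_MEAN` constant in train.py for the exact values the repo uses:

```python
import numpy as np

# Assumed per-channel means (B, G, R) -- verify against train.py.
IMG_MEAN = np.array((103.939, 116.779, 123.68), dtype=np.float32)

def preprocess(img_bgr):
    """Shift pixel values from [0, 255] into roughly [-128, 128]
    by subtracting the per-channel dataset mean."""
    return img_bgr.astype(np.float32) - IMG_MEAN

img = np.zeros((2, 2, 3), dtype=np.uint8)  # all-black input
out = preprocess(img)                      # each channel is now -IMG_MEAN
```

Unlike full normalization, this only centers the data; it does not divide by the standard deviation.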
@hellochick Thank you for your reply. But according to the if-else branch in train.py, it will either continue the training from a snapshot or load an existing model. If I train from scratch, I have neither a snapshot nor a model. If I just comment out the if-else part (lines 180-186 in train.py), the loss is always NaN.

The reason I want to train from scratch is that I have some new labels to train (lane markings and the ego lane). For a classification network (which has fully-connected layers), I know it is possible to tune the network by re-training the fully-connected layers while keeping the conv layers almost unchanged. But for a segmentation network (which has no fully-connected layers), I don't know how to tune it when new labels are involved. Maybe keep the encoder and re-train the decoder? I'm not sure, and I don't know how to implement the tuning either. And if I continue training from a pre-trained model, I should first train with --update-mean-var and then --train-beta-gamma, am I right?

Additionally, what I want to do is detect the lane markings and the ego driving lane for a car. I think it is a segmentation problem, and since inference efficiency is very important in this case, I chose ICNet. I think both lane markings and the ego lane have relatively simple features compared with complicated objects like pedestrians or vehicles, so I expect a well-trained ICNet to perform well on both accuracy and efficiency. Am I right? And how many training examples do I need? Currently I'm really struggling with the lack of training data. It seems that the mainstream image datasets (I checked KITTI, COCO, Cityscapes, PASCAL and so on) do not have labels for lane markings, so I have to label them myself, and doing pixel-wise labeling is really inefficient. If you have any suggestions I would really appreciate them. And again, thank you for taking your time to answer my questions.
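One common way to fine-tune a segmentation network on new labels is to restore every pre-trained variable except the final classification layer, whose shape depends on `NUM_CLASSES` and so must be re-initialized. A minimal sketch of that filtering, where the scope name `conv6_cls` is a hypothetical stand-in for the repo's actual final-layer name:

```python
def restore_list(checkpoint_var_names, skip_scope='conv6_cls'):
    """Variables to restore from the checkpoint: everything except
    the final classification layer, which changes shape with
    NUM_CLASSES and must be trained from scratch."""
    return [v for v in checkpoint_var_names
            if not v.startswith(skip_scope)]

ckpt = ['conv1/weights', 'conv5/bn/gamma',
        'conv6_cls/weights', 'conv6_cls/biases']
print(restore_list(ckpt))
# -> ['conv1/weights', 'conv5/bn/gamma']
```

In TF1-style code this list would typically be passed as the `var_list` argument to `tf.train.Saver` before calling `restore`, so the kept layers start from the pre-trained weights while the new head is randomly initialized.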
@hellochick And for question 1, where can I specify the mapping between the label and the class? For example if I want to mark a pixel as class 1, which color should I use to do the labeling? Thank you.
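On the label-to-class mapping: one common convention (an assumption here; check the repo's data-reader code for what it actually expects) is that the label image is a single-channel PNG whose pixel value is the class index itself, not an RGB color. A color palette is only applied for visualization:

```python
import numpy as np

# Single-channel label map: pixel VALUE = class index.
label = np.zeros((4, 4), dtype=np.uint8)  # class 0 (background) everywhere
label[1:3, 1:3] = 1                       # mark these pixels as class 1

print(np.unique(label).tolist())  # -> [0, 1]
```

So to mark a pixel as class 1 you would store the value 1 in it, rather than choosing a color; if the dataset's labels are colored RGB images instead, they first need to be converted to index maps.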
Happy new year, everyone. I am trying to train ICNet with the VOC2012 and COCO2017 datasets. After training, the loss was about 0.05, but the inference and evaluation results were terribly wrong. I think you @wangbofei11 @hellochick may have tried this, so can you tell me what training results you got using VOC or COCO, and what parameters you used?
Dear @hellochick, thank you very much for open-sourcing your implementation! I have a few questions. Your response is very important to me!
1. Should I first train using `python train.py --update-mean-var` and freeze beta/gamma, and after some period start to train the beta/gamma variables using `python train.py --train-beta-gamma`? How can I know when to stop training? In which way can we see the loss function's value?
2. What do `--update-mean-var` and `--train-beta-gamma` mean? Which layers do they freeze in the training process?
I used the VOC2012 dataset (21 categories including background) for training without a pre-trained model, only changing `NUM_CLASSES` to 21 in your training code, but after about 200 steps the total loss cannot drop any further (about 0.5) and the result is completely wrong. Can you give some suggestions on training with other datasets? Thanks.