hellochick / ICNet-tensorflow

TensorFlow-based implementation of "ICNet for Real-Time Semantic Segmentation on High-Resolution Images".

Results are bad when training cityscapes on my own #100

Open harora opened 5 years ago

harora commented 5 years ago

Hi

1.) I'm using the model on Cityscapes. With the pre-trained model I get the expected mIoU, but when I train on my own custom list (75% of the data), I only get an mIoU of 2-3%. My loss decreases to 0.6-0.7, but the mIoU is still really bad. Can somebody help with this?

Train command I used:

python train.py --update-mean-var --train-beta-gamma --random-scale --random-mirror --dataset cityscapes --filter-scale 1

All other parameters are the same (LR: 5e-4, batch size: 8, etc.).

2.) Using the trained weights I get the reported mIoU (67), but when I use them to fine-tune my model, the initial loss is ~10-11. Shouldn't it be around 0.5-1, considering the model was already trained on the same data? I also notice that as the loss goes down, the test accuracy decreases too.
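A common cause of near-zero mIoU despite a decreasing loss is evaluating against the raw Cityscapes labelIds instead of the 19 trainIds (or vice versa). As a hedged diagnostic, here is a minimal NumPy sketch of the standard confusion-matrix mIoU, assuming 19 classes and the usual ignore label 255; the function name and signature are illustrative, not this repo's API:

```python
import numpy as np

def mean_iou(pred, label, num_classes=19, ignore=255):
    """Mean IoU over classes present in the labels, skipping the ignore id."""
    mask = label != ignore                     # drop void pixels (255)
    pred, label = pred[mask], label[mask]
    # Confusion matrix via bincount: rows = ground truth, cols = prediction.
    cm = np.bincount(num_classes * label + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(np.float64)
    union = cm.sum(0) + cm.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    return iou[union > 0].mean()               # average only classes that appear
```

If your ground-truth PNGs contain values above 18 (other than 255), they are labelIds and need the labelId-to-trainId conversion before training or evaluation.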

pkuqgg commented 5 years ago


Hi, I have met the same problems as you. Have you solved them? Waiting for your reply! Thank you.

harora commented 5 years ago

Hi. I couldn't solve the problem, so I moved on to this implementation: https://github.com/oandrienko/fast-semantic-segmentation. It works well.

LinRui9531 commented 5 years ago


Would you tell me your cityscapes dataset path, or the type of self.image_list and self.label_list? When I run python train.py, I find that in util/image_reader.py the value returned by from_tensor_slices is empty.

dataset = tf.data.Dataset.from_tensor_slices((self.image_list, self.label_list))
dataset = dataset.map(lambda x, y: _parse_function(x, y, cfg.IMG_MEAN), num_parallel_calls=cfg.N_WORKERS)