Asthestarsfalll opened this issue 2 years ago
Well, I found the max_eval setting, which might be the reason for these results.
In the yaml config, the following two fields specify the data used during training and during validation:

```yaml
train_split: trainval
val_split: val
```
The trainval config trains on train+val and is meant for submitting to the test server. During training, it evaluates on the val set (which is seen during training) only as a sanity check, which is why the accuracy is so high.
The cityscapes_1000epochs.yaml config trains on train only and evaluates on val, so it might suit your needs better. Pay attention to the fields train_crop_size and loss_type: cityscapes_trainval_1000epochs.yaml uses a larger crop size and a better loss function than cityscapes_1000epochs.yaml at the cost of longer training time, so you can copy the two fields over if you can afford the longer training time.
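For illustration, copying the two fields over might look like the snippet below. The field names come from the configs mentioned above, but the values shown are placeholders; use whatever cityscapes_trainval_1000epochs.yaml actually sets.

```yaml
# Hypothetical fragment of cityscapes_1000epochs.yaml after copying
# the two fields from cityscapes_trainval_1000epochs.yaml.
train_split: train        # train on train only
val_split: val            # evaluate on the unseen val set
train_crop_size: [1024, 1024]  # placeholder: the larger crop size
loss_type: bootstrapped        # placeholder: the better loss function
```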
Btw, it has nothing to do with max_eval. There are 500 images in the validation set and max_eval is set to 600, so no issues there.
Thank you very much! I had misunderstood trainval as meaning "evaluate the model during training" and train as "do not evaluate the model".
@RolandGao
Hi,
I wonder what class_uniform_sampling is; I can't find it on Google or GitHub.
Could you please point me to its source, or the paper?
Thank you a lot!
It was first introduced by Nvidia in this paper: https://arxiv.org/abs/1812.01593. The code is from here: https://github.com/NVIDIA/semantic-segmentation.
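The core idea of class-uniform sampling is to pick training crops by first choosing a class uniformly and then an image containing that class, so rare classes appear about as often as common ones. This is a minimal sketch of that idea, not the Nvidia repo's actual implementation (which additionally precomputes class centroids to center crops on):

```python
import random
from collections import defaultdict

def build_class_index(image_labels):
    """Map each class id to the list of images that contain it.

    image_labels: dict of image name -> set of class ids present
    in that image's ground-truth mask.
    """
    index = defaultdict(list)
    for img, classes in image_labels.items():
        for c in classes:
            index[c].append(img)
    return index

def class_uniform_sample(index, n):
    """Draw n images: pick a class uniformly at random, then an
    image containing that class. Rare classes are sampled as often
    as frequent ones, unlike plain uniform sampling over images.
    """
    classes = sorted(index)
    return [random.choice(index[random.choice(classes)]) for _ in range(n)]
```

In practice the repo mixes a fraction of class-uniform samples with ordinary uniform samples per epoch, so common classes are not undersampled.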
@RolandGao Hi, I have sent you an email about some research on RegSeg. Did you receive it?
When training, how are the mIoU and accuracy calculated: on the train dataset or the val dataset? I think they are calculated on the val dataset, based on https://github.com/RolandGao/RegSeg/blob/main/train.py#L238. I trained the base RegSeg model with the config cityscapes_trainval_1000epochs.yaml on Cityscapes and got these unbelievable results:

![840794c66f23deb33666dcffc4af5b5](https://user-images.githubusercontent.com/72954905/156297394-def6b1bf-2d8d-43f8-b8f6-ded970000a26.png)