VITA-Group / FasterSeg

[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
MIT License

TrainA and TrainB are overlapping according to your implementation #51

Closed jianyin2016 closed 3 years ago

jianyin2016 commented 3 years ago

Hi, according to your paper, the training dataset is split into two parts: one to update the model weights and the other to update the architecture parameters, and the two parts are supposed to be disjoint. However, they are not disjoint in your implementation; you can verify this by calling the _get_file_names method of Dataset. I think this may cause some of the training data to never be used during architecture search. Am I right? Correct me if I am wrong.

chenwydj commented 3 years ago

Hi @jianyin2016,

Thank you for your interest in our work!

  1. C.train_portion = 0.5 in config_search.py
  2. train_loader_model and train_loader_arch are built in train_search.py, splitting the training data half-and-half for updating model weights and architecture parameters, respectively.
  3. Each dataset object takes its split of the whole training data based on this portion in BaseDataset.py.
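The intended half-and-half split described in the list above can be sketched as follows (illustrative only; the file list and variable names are made up, and the real logic lives in config_search.py, train_search.py, and BaseDataset.py):

```python
# Minimal sketch of the intended disjoint split, assuming a
# Cityscapes-like training set of 2975 images.
train_portion = 0.5  # corresponds to C.train_portion in config_search.py

files = [f"img_{i:04d}.png" for i in range(2975)]  # hypothetical file list
split = int(len(files) * train_portion)

# One half updates the model weights, the other the architecture params.
files_model = files[:split]   # fed to train_loader_model
files_arch = files[split:]    # fed to train_loader_arch

# The two halves cover the whole set and do not intersect.
assert len(files_model) + len(files_arch) == len(files)
assert set(files_model).isdisjoint(files_arch)
```

With 2975 training images and a 0.5 portion this yields the 1487 / 1488 split mentioned below.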

I double-checked: train_loader_model and train_loader_arch get 1487 and 1488 images, respectively, which are disjoint.

Hope this helps!

jianyin2016 commented 3 years ago

Hi @chenwydj, did you look into the actual file name lists of the two dataloaders? I computed the intersection between the 1487 images and the 1488 images and ended up with an overlap of about 70 files. I think this is caused by shuffling the whole training list before taking the first and second halves: every time you initialize a dataloader there is a shuffle call, and the shuffled results are not the same even with a common random seed. What you want is

W -> shuffledW -> shuffledW[:len(W)//2] (train_loader_model) + shuffledW[len(W)//2:] (train_loader_arch)

but what you actually get is

W -> shuffledW1 -> shuffledW1[:len(W)//2] (train_loader_model)
W -> shuffledW2 -> shuffledW2[len(W)//2:] (train_loader_arch)
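The failure mode reported here is easy to reproduce in isolation (illustrative code, not the repository's; the real shuffling happens inside BaseDataset._get_file_names):

```python
import random

# A single global seed, as a training script typically sets once.
random.seed(12345)

files = list(range(2975))      # stand-in for the training file list
split = len(files) // 2

# First dataset object: shuffles the full list, keeps the front half.
shuffled1 = files[:]
random.shuffle(shuffled1)      # this call advances the global RNG state
files_model = set(shuffled1[:split])

# Second dataset object: shuffles again from a *different* RNG state,
# so its "back half" is a different permutation's back half.
shuffled2 = files[:]
random.shuffle(shuffled2)
files_arch = set(shuffled2[split:])

overlap = files_model & files_arch
print(len(overlap))            # non-zero: the two halves intersect
```

Because the two permutations are independent, roughly half of the second split lands inside the first one, so the overlap is large with near certainty.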

chenwydj commented 3 years ago

You are right. Thank you very much for pointing it out!

I have updated BaseDataset.py where shuffle(files) is applied after the dataset splitting.

jianyin2016 commented 3 years ago

I don't think just applying shuffle(files) after the dataset split is a good choice, because the files are sorted before splitting; this may result in severe class imbalance between trainA and trainB. From my limited engineering experience, I would suggest passing an already-shuffled file name list to the dataloader class as a parameter. I am also wondering how this will affect the final results; if you have any further results, please let me know. Thanks for your work.

chenwydj commented 3 years ago

Thanks again for the suggestion! You are right, and sorry for my rush update.

In the latest push I first build a shuffled index list, and then split the dataset into the two dataloaders based on this shared index list.
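The shared-index approach can be sketched as follows (hypothetical variable names; the actual change is in BaseDataset.py):

```python
import random

random.seed(12345)  # any fixed seed; shuffling happens exactly once

# Sorted file list, as produced before splitting.
files = sorted(f"img_{i:04d}.png" for i in range(2975))

# Build ONE shuffled index list and share it between both splits.
indices = list(range(len(files)))
random.shuffle(indices)        # shuffle before, not after, the split

split = len(indices) // 2
files_model = [files[i] for i in indices[:split]]   # updates weights
files_arch = [files[i] for i in indices[split:]]    # updates arch params

# Disjoint by construction, and each half is a uniform random sample
# of the sorted list, avoiding the imbalance of a sorted-order split.
assert set(files_model).isdisjoint(files_arch)
```

Since both halves slice the same permutation, disjointness no longer depends on RNG state, and the random order gives each half a balanced sample in expectation.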

Thank you again!