Closed jianyin2016 closed 3 years ago
Hi @jianyin2016,
Thank you for your interest in our work!
`C.train_portion = 0.5` in `config_search.py`. `train_loader_model` and `train_loader_arch` are built in `train_search.py`, with a half-half split of the training data for updating the model weights and the architecture parameters, respectively. I double-checked: `train_loader_model` and `train_loader_arch` get 1487 and 1488 images, which are disjoint.
Hope this helps!
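For reference, the intended half-half split can be sketched as follows (a minimal sketch, not the repo's actual code; the dataset size of 2975 files is an assumption chosen to match the 1487/1488 counts above):

```python
train_portion = 0.5  # mirrors C.train_portion in config_search.py
files = [f"img_{i:04d}.png" for i in range(2975)]  # assumed dataset size

split = int(len(files) * train_portion)
model_files = files[:split]   # fed to train_loader_model (1487 images)
arch_files = files[split:]    # fed to train_loader_arch (1488 images)

print(len(model_files), len(arch_files))  # 1487 1488
```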
Hi, @chenwydj
Did you look into the name lists of the two dataloaders? I computed the intersection between the 1487 images and the 1488 images and ended up with an overlap of about 70 files.
I think this is caused by the `shuffle` call on the whole training data before taking the former half and the latter half: every time you initialize a dataloader there is a `shuffle` call, and the shuffled results are not the same even with a common random seed.
What you want is `W -> shuffledW -> shuffledW[:len(W)/2]` for `train_loader_model` + `shuffledW[len(W)/2:]` for `train_loader_arch`, but what you actually get is `W -> shuffledW1 -> shuffledW1[:len(W)/2]` for `train_loader_model` + `W -> shuffledW2 -> shuffledW2[len(W)/2:]` for `train_loader_arch`.
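This effect can be reproduced with a standalone sketch (not the repo's code): even with a single shared seed, the first shuffle advances the RNG state, so the second dataloader sees a different ordering and the two halves overlap.

```python
import random

files = [f"img_{i:04d}.png" for i in range(3000)]  # stand-in file list
half = len(files) // 2

random.seed(0)             # a common seed is set once...
a = files[:]
random.shuffle(a)          # ...but this call advances the RNG state,
b = files[:]
random.shuffle(b)          # ...so the second shuffle yields a different order

train_model = set(a[:half])  # former half, for model weights
train_arch = set(b[half:])   # latter half, for arch params
print(len(train_model & train_arch))  # non-zero: the halves are not disjoint
```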
You are right. Thank you very much for pointing it out!
I have updated `BaseDataset.py` so that `shuffle(files)` is applied after the dataset splitting.
I don't think applying `shuffle(files)` only after the dataset splitting is a good choice, because the files are sorted before splitting; this may result in severe class imbalance between trainA and trainB. From my limited engineering experience, I would suggest providing the dataloader class with a pre-shuffled list of filenames as a parameter.
I am wondering how this will affect the final results. If you have any further results, please let me know.
Thanks for your work.
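To illustrate the concern with hypothetical class-prefixed filenames: shuffling each half after the split cannot move files across the two halves, so a sorted list splits along class boundaries.

```python
import random

# hypothetical filenames: sorting groups all 'cat' files before all 'dog' files
files = sorted([f"cat_{i:03d}.png" for i in range(100)]
               + [f"dog_{i:03d}.png" for i in range(100)])

half = len(files) // 2
train_a, train_b = files[:half], files[half:]
random.shuffle(train_a)  # shuffling each half afterwards...
random.shuffle(train_b)  # ...cannot exchange files between the halves

print(all(f.startswith("cat_") for f in train_a))  # True: trainA is all cats
print(all(f.startswith("dog_") for f in train_b))  # True: trainB is all dogs
```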
Thanks again for the suggestion! You are right, and sorry for my rush update.
In the latest push I first build a shuffled list of indices, and then split the dataset between the two dataloaders based on this shared index list.
Thank you again!
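The shared-index approach can be sketched like this (a sketch under assumed names and dataset size, not the repo's exact code): one shuffle of the indices is shared by both loaders, so the halves are disjoint and class order no longer leaks into the split.

```python
import random

files = sorted(f"img_{i:04d}.png" for i in range(2975))  # assumed dataset size

random.seed(12345)
indices = list(range(len(files)))
random.shuffle(indices)              # one shared shuffle of the indices

half = len(indices) // 2
model_files = [files[i] for i in indices[:half]]  # -> train_loader_model
arch_files = [files[i] for i in indices[half:]]   # -> train_loader_arch

print(set(model_files).isdisjoint(arch_files))  # True: disjoint halves
```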
Hi, according to your paper, the training dataset is split into two parts, one to update the model weights and the other to update the architecture parameters, and the two parts are supposed to be disjoint. But they are not disjoint in your implementation; you can verify this by calling the `_get_file_names` method of `Dataset`. I think this may cause some of the training data to never be used during architecture search. Am I right? Correct me if I'm wrong.
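One way to run that check (a generic sketch; the short name lists here are hypothetical stand-ins for whatever the two datasets' `_get_file_names` calls return):

```python
# hypothetical name lists returned by the two datasets' _get_file_names
names_model = ["aachen_01.png", "aachen_02.png", "bochum_01.png"]
names_arch = ["bochum_01.png", "bremen_01.png", "bremen_02.png"]

overlap = set(names_model) & set(names_arch)
print(sorted(overlap))  # ['bochum_01.png'] -> non-empty means not disjoint
```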