leoil closed this issue 1 year ago
Hey Leoil, In order to train ProCST with different datasets, you should implement dataloaders for both the source and target datasets. You are obligated to preserve the basic structure of the loaders. I suggest you use the original GTA5 dataloader and Cityscapes dataloader as examples for the source and target dataloaders, respectively. Implement them and then save them in the data_handlers folder. The last step is to update the data_handlers/__init__.py file and add your dataloaders to the wrapper functions CreateSrcDataLoader and CreateTrgDataLoader. After setting that up, you can select the desired source and target datasets via configuration flags. Shahaf
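As a rough sketch of the wrapper-function pattern described above: the wrapper picks a loader class based on the configured dataset name, and you extend it with a branch for your new dataset. The class names and flag values below (`GTA5Dataset`, `MySourceDataset`, `'my_source'`) are illustrative stand-ins, not the repo's actual identifiers; in the real code each class would be a torch `Dataset` wrapped in a `DataLoader`.

```python
class GTA5Dataset:
    """Stub standing in for the repo's existing GTA5 source loader."""
    def __init__(self, data_dir):
        self.data_dir = data_dir

class MySourceDataset:
    """Stub for a new source dataset you add under data_handlers/.
    It must preserve the basic structure of the original loader."""
    def __init__(self, data_dir):
        self.data_dir = data_dir

def CreateSrcDataLoader(source, src_data_dir):
    # Dispatch on the configuration flag, mirroring the existing
    # GTA5 branch with a new one for your dataset.
    loaders = {'gta5': GTA5Dataset, 'my_source': MySourceDataset}
    if source not in loaders:
        raise ValueError(f'unknown source dataset: {source}')
    return loaders[source](src_data_dir)
```

CreateTrgDataLoader would follow the same pattern for target datasets.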
Thanks for your timely reply.
Do I still need to train a segmentation model on the source domain (Cityscapes, for example) separately before I train ProCST?
I noticed that there is a train_semseg_on_source.py
file; if I want to use it to train the segmentation model, what arguments do I need to pass on the command line?
You're welcome :)
Yes, you'll still need to train a segmentation net on your source domain. Note that you can start training ProCST without it, since for scales n<N the training process does not incorporate the Label Loss.
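To illustrate the point about the Label Loss: only the finest scale N needs the pretrained segmentation net, so the earlier scales can train before it exists. This is a hedged sketch of that idea, not the repo's actual loss code; the loss names are placeholders.

```python
def total_loss(n, N, adv_loss, cycle_loss, label_loss=None):
    """Combine per-scale losses. adv_loss and cycle_loss are
    illustrative names for the scale-agnostic loss terms."""
    loss = adv_loss + cycle_loss
    if n == N and label_loss is not None:
        # Only the final scale incorporates the segmentation
        # net's Label Loss, so training can begin without it.
        loss += label_loss
    return loss
```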
In order to train a segmentation network on the source domain, you'll first need to create dataloaders as my first comment suggests. Then, you can use train_semseg_on_source.py
by using the following command line:
python ./train_semseg_on_source.py --source=your/source/name --src_data_dir=your/source/data/dir
Shahaf
I'm closing this issue because no further questions were asked :) Shahaf
Hello, I'm looking to train a semantic segmentation network on domains other than GTA5/Synthia and was wondering if you could provide me with a training script. Also, I'm curious to know approximately how long it would take to train a ProCST translation model from scratch. Thank you for your help!