Closed houjie8888 closed 10 months ago
I would like to ask a question about computing metrics during training. Training is usually interspersed with a validation step; do you compute the evaluation metrics during that validation step? That seems time-consuming, so I am wondering how you schedule evaluation during training.
Hi @houjie8888 ,
we used clean-fid for FID evaluation and tifa-score for measuring text controllability (see a script example here: tifa_eval.txt). For mIoU evaluation, following prior work, we used a pretrained DRN-D-105 for Cityscapes and a pretrained UperNet101 for ADE20K. The pretrained segmenters can be found in their respective repos, i.e., drn, upernet-encoder and upernet-decoder.
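Not the authors' evaluation code, but for anyone wondering what the FID number reported by clean-fid actually measures: it is the Fréchet distance between Gaussian fits to InceptionV3 feature statistics of the real and generated image sets. A minimal sketch of that distance (assuming only NumPy/SciPy; clean-fid handles the feature extraction and statistics for you):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two multivariate Gaussians:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Sanity check: identical statistics give a distance of 0.
mu, sigma = np.zeros(4), np.eye(4)
print(round(frechet_distance(mu, sigma, mu, sigma), 6))  # → 0.0
```

In practice you would just call `fid.compute_fid(dir_real, dir_fake)` from the `cleanfid` package on two image folders rather than computing this by hand.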
No quantitative evaluation was conducted during training; we only did visual logging here.
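For the mIoU side, the authors defer to the pretrained segmenters above; the metric itself is standard, and a rough sketch (my own illustration, not code from this repo) of computing mean IoU from predicted and ground-truth label maps via a confusion matrix looks like this:

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean intersection-over-union over classes present in the data.
    pred/gt: integer label maps of the same shape."""
    mask = gt != ignore_index          # drop unlabeled pixels
    pred, gt = pred[mask], gt[mask]
    # Confusion matrix via bincount on flattened (gt, pred) class pairs.
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return float(iou[union > 0].mean())  # average only over observed classes

gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(mean_iou(pred, gt, num_classes=2))  # (1/2 + 2/3) / 2
```

In the actual pipeline the `pred` maps would come from DRN-D-105 (Cityscapes) or UperNet101 (ADE20K) run on the generated images, with `gt` being the conditioning label maps.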
Hope this helps :)
@ct2034 Thank you very much for your reply!
So the quantitative evaluation is only done once, at the end of training.
Hi, could you please share the code for computing the evaluation metrics?