boschresearch / ALDM

Official implementation of "Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive" (ICLR 2024)
https://yumengli007.github.io/ALDM/
GNU Affero General Public License v3.0

Evaluation Metrics. #1

Closed houjie8888 closed 10 months ago

houjie8888 commented 10 months ago

Hi, could you please provide the code for computing the evaluation metrics?

houjie8888 commented 10 months ago

I would also like to ask about computing the metrics during training. Training is usually interspersed with validation steps; do you compute the evaluation metrics during validation? That seems time-consuming, so I'm wondering how you schedule the evaluation during training.

YumengLi007 commented 10 months ago

Hi @houjie8888 ,

We used clean-fid for FID evaluation and tifa-score for measuring text controllability (see an example script here: tifa_eval.txt). For mIoU evaluation, following prior works, we used the pretrained DRN-D-105 and UperNet101 segmenters for Cityscapes and ADE20K, respectively. The pretrained segmenters can be found in their repos, i.e., drn, upernet-encoder and upernet-decoder.
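For reference, a rough sketch of the FID and mIoU parts. The clean-fid call below follows that package's public API; the mIoU helper is a generic confusion-matrix implementation with placeholder names and paths, not the actual evaluation code from this repo:

```python
# FID with the clean-fid package (pip install clean-fid).
# The directory paths are placeholders.
from cleanfid import fid

fid_score = fid.compute_fid(
    "path/to/generated_images",  # samples from the layout-to-image model
    "path/to/real_images",       # real dataset images
    mode="clean",                # clean-fid's canonical resizing (the default)
)
print(f"FID: {fid_score:.2f}")


# Generic mIoU from an accumulated confusion matrix. `preds` are label maps
# predicted by the pretrained segmenter (DRN-D-105 for Cityscapes, UperNet101
# for ADE20K) on the generated images; `gts` are the dataset annotations.
# num_classes would be 19 for Cityscapes and 150 for ADE20K.
import numpy as np

def compute_miou(preds, gts, num_classes, ignore_index=255):
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != ignore_index
        conf += np.bincount(
            num_classes * gt[mask].astype(int) + pred[mask].astype(int),
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return iou[union > 0].mean()  # average over classes that actually occur
```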

No quantitative evaluation was conducted during training; we only did visual logging here.
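In case it is useful, a minimal illustration of that kind of step-based visual logging; `generate_fn`, `fixed_layouts`, and `log_every` are hypothetical names, not the identifiers used in this repo:

```python
# Sketch: every `log_every` steps, decode a fixed set of validation layouts
# and save an image grid for eyeballing, instead of running full metrics.
import os
import torch
import torchvision.utils as vutils

def maybe_log_samples(step, generate_fn, fixed_layouts,
                      log_every=1000, out_dir="logs"):
    # generate_fn: callable mapping layouts -> image tensor (B, C, H, W) in [0, 1]
    if step % log_every != 0:
        return
    os.makedirs(out_dir, exist_ok=True)
    with torch.no_grad():
        images = generate_fn(fixed_layouts)
    grid = vutils.make_grid(images.cpu(), nrow=4)
    vutils.save_image(grid, f"{out_dir}/samples_step{step:06d}.png")
```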

Hope this helps :)

houjie8888 commented 10 months ago

@YumengLi007 Thank you very much for your reply!

So the quantitative evaluation is only done once, at the end of training.