As far as I know, the evaluation metric in the Style-ERD paper is FMD, not FID. Is fid.py the feature extractor mentioned in the paper?
If yes, I have some questions about this metric.
What was your motivation for not using the FMD from [38]? Do you have results for the FMD from [38]? What are the differences/advantages of your "Denoise Autoencoder"? Why is it called a denoising autoencoder? I feel a bit confused about the metric in your paper.
[38] uses a content classification model as the feature extractor, but content classification is already part of the loss in our work. So we think it would be unfair to evaluate with features from the same content classification model.
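Whichever feature extractor is chosen, FMD/FID-style metrics share the same core computation: fit a Gaussian to each set of extracted features and take the squared Fréchet distance between the two Gaussians. Below is a minimal NumPy/SciPy sketch of that distance; the function name and the random toy features are illustrative, not code from this repository.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Squared Frechet distance between Gaussians fit to two feature sets.

    d^2 = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * (C_a C_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    # sqrtm can return tiny imaginary parts from numerical error
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy features: identical sets give ~0; a mean shift increases the distance.
rng = np.random.default_rng(0)
real = rng.normal(size=(500, 16))
fake = rng.normal(loc=0.5, size=(500, 16))
print(frechet_distance(real, real))  # ~0
print(frechet_distance(real, fake))
```

The metric's discriminative power therefore rests entirely on the feature extractor, which is why using the content classifier's features while also training against a content classification loss would bias the comparison.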
What are the differences/advantages of your "Denoise Autoencoder"? Why is it called a denoising autoencoder?
A denoising autoencoder is just an autoencoder trained to reconstruct the clean signal from a noisy input. It is a popular choice for learning robust latent features.
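To make the idea concrete, here is a minimal sketch of the denoising setup in plain NumPy: a linear encoder/decoder pair is fed a noise-corrupted input but its reconstruction loss is taken against the clean input. This is only an illustration of the training objective under assumed toy data, not the architecture or feature extractor used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean signals lying on a 3-D linear subspace of R^8.
n, d, k = 200, 8, 3
x_clean = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
x_noisy = x_clean + 0.1 * rng.normal(size=x_clean.shape)

# Linear denoising autoencoder: encode the NOISY input, decode,
# and penalise the reconstruction against the CLEAN target.
w_enc = 0.3 * rng.normal(size=(k, d))  # encoder weights
w_dec = 0.3 * rng.normal(size=(d, k))  # decoder weights
lr = 0.1
losses = []
for _ in range(500):
    z = x_noisy @ w_enc.T                    # latent codes, shape (n, k)
    x_hat = z @ w_dec.T                      # reconstructions, shape (n, d)
    err = x_hat - x_clean                    # compare against CLEAN data
    losses.append(float(np.mean(err ** 2)))
    grad_out = 2.0 * err / err.size          # dL/dx_hat
    g_dec = grad_out.T @ z                   # dL/dw_dec
    g_enc = (grad_out @ w_dec).T @ x_noisy   # dL/dw_enc
    w_dec -= lr * g_dec
    w_enc -= lr * g_enc

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the target is the clean signal rather than the corrupted input, the encoder cannot simply copy its input; it has to keep the structure shared across samples and discard the noise, which is what makes the latent codes useful as evaluation features.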