Thanks for your interest. Use real high-resolution images as the ground truth. From each high-resolution image, crop several patches; these patches represent the real data distribution and serve as the GT patch set. For each generated image, crop patches of the same size as those in the GT set. We use the FID implementation from https://github.com/chaofengc/IQA-Pytorch.
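In case it helps, here is a minimal sketch of the patch-cropping step described above. It assumes random crop positions, and the `crop_patches` helper and the directory names are placeholders rather than code from the UltraPixel repo:

```python
import os
import random
from PIL import Image

def crop_patches(src_dir, dst_dir, patch_size, patches_per_image, seed=0):
    """Randomly crop fixed-size patches from every image in src_dir and save them to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    rng = random.Random(seed)
    for name in sorted(os.listdir(src_dir)):
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        w, h = img.size
        for i in range(patches_per_image):
            left = rng.randint(0, w - patch_size)
            top = rng.randint(0, h - patch_size)
            patch = img.crop((left, top, left + patch_size, top + patch_size))
            patch.save(os.path.join(dst_dir, f"{os.path.splitext(name)[0]}_{i}.png"))
```

The patch size and the number of patches per image are clarified in the reply below.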
What's the patch size, and how many patches do you crop from each image?
The patch size corresponds to the input image size of the feature extractor used to calculate FID. For example, if the input size is 224, then the patch size would also be 224 (though I'm not entirely sure it is exactly 224). We crop three patches from each image.
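Putting the answers together, FID-patch can then be computed over the two patch folders with the `pyiqa` package from IQA-PyTorch. A hedged sketch follows; the folder names are placeholders, and I'm assuming pyiqa's FID metric accepts two image directories, which matches its documented usage:

```python
import pyiqa  # pip install pyiqa  (IQA-PyTorch)

# "gen_patches" and "gt_patches" are placeholder folders holding the 224x224
# crops (three per image), e.g. produced with the crop_patches sketch above.
fid_metric = pyiqa.create_metric("fid")
fid_patch = fid_metric("gen_patches", "gt_patches")
print(f"FID-patch: {fid_patch:.2f}")
```

With the numbers above, the folders would be prepared with something like `crop_patches("gt_highres", "gt_patches", patch_size=224, patches_per_image=3)` and `crop_patches("generated", "gen_patches", patch_size=224, patches_per_image=3)`.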
Thanks a lot!
Hello, thanks for your amazing work. Could you share the details of how FID-patch is calculated?