cure-lab / PnPInversion

[ICLR2024] Official repo for paper "PnP Inversion: Boosting Diffusion-based Editing with 3 Lines of Code"

question about benchmark #5

Closed wangzhen-ing closed 6 months ago

wangzhen-ing commented 7 months ago

Hello, thanks for your work. Could you share some code or more details on how the benchmark was built? For example, how do you generate captions in batches with BLIP-2, and how do you match the text and images to produce the JSON files? Also, how are the metrics in the "Comparison with Inversion-Based Editing" table computed? Is each value the average over all images in the benchmark for each method?

juxuan27 commented 6 months ago

Hi @wangzhen-ing. Unfortunately, we will not share the scripts used to generate the benchmark, but you can find how to caption images with BLIP-2 on Hugging Face. The metric calculation code has already been released here; please refer to the README for usage.
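
For reference, here is a minimal sketch of batch captioning with BLIP-2 via Hugging Face `transformers` and collecting the results into a JSON mapping. The folder name `benchmark_images`, the output file `mapping_file.json`, and the JSON fields are assumptions for illustration only, not the benchmark's actual schema:

```python
import json
from pathlib import Path

import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

image_dir = Path("benchmark_images")   # assumed local folder of source images
image_paths = sorted(image_dir.glob("*.jpg"))
batch_size = 8
mapping = {}

for i in range(0, len(image_paths), batch_size):
    batch_paths = image_paths[i:i + batch_size]
    images = [Image.open(p).convert("RGB") for p in batch_paths]
    inputs = processor(images=images, return_tensors="pt").to(device, dtype)
    out_ids = model.generate(**inputs, max_new_tokens=30)
    captions = processor.batch_decode(out_ids, skip_special_tokens=True)
    for path, caption in zip(batch_paths, captions):
        # Hypothetical entry layout; the real benchmark JSON may use different fields.
        mapping[path.stem] = {
            "image_path": str(path),
            "original_prompt": caption.strip(),
        }

with open("mapping_file.json", "w") as f:
    json.dump(mapping, f, indent=2)
```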

wangzhen-ing commented 6 months ago

OK, thank you for your reply!