LituRout / PSLD

Posterior Sampling using Latent Diffusion

Question about Table 1 #4

Closed HyoungwonCho closed 6 months ago

HyoungwonCho commented 6 months ago

Hello, thanks for the nice work! I have a question about the numbers in Table 1 of the paper. The DPS results in Table 1, specifically those for Inpaint (random) and Inpaint (box), differ from the corresponding "Ours" numbers in the DPS paper. All other numbers appear to match those in the DPS paper except these two entries. I'm curious about the reason for this discrepancy.

[Attached screenshots: Table 1 from the PSLD paper (top) and the corresponding table from the DPS paper (bottom)]

LituRout commented 6 months ago

Hi HyoungwonCho,

Thanks for the question. We used the official DPS codebase to reproduce the quantitative results from the DPS paper (bottom table above). These numbers did not match the results reported in the paper, so we emailed the DPS authors for clarification. They confirmed that we could report our reproduced DPS numbers (top table above), and mentioned that another group had independently obtained numbers similar to ours. According to the authors, the mismatch in the DPS numbers was due to randomness in posterior sampling.
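To illustrate why stochastic posterior sampling makes exact reproduction hard, here is a minimal sketch (not the actual DPS code; `run_solver` is a hypothetical stand-in that adds seed-dependent noise to a fixed metric value) showing how a reported metric varies across random seeds:

```python
import random
import statistics

def run_solver(seed: int) -> float:
    """Stand-in for one stochastic posterior-sampling run.

    Hypothetical: returns a metric (e.g. LPIPS) perturbed by
    seed-dependent noise around an underlying value.
    """
    rng = random.Random(seed)
    true_metric = 0.25
    return true_metric + rng.gauss(0.0, 0.02)

# Averaging over several seeds shows the run-to-run spread
# that can explain small mismatches between reproduced numbers.
scores = [run_solver(seed) for seed in range(10)]
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"metric over 10 seeds: {mean:.3f} +/- {std:.3f}")
```

Reporting a mean and standard deviation over several seeds, rather than a single run, makes such comparisons more robust.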

Since DPS was the strongest baseline at the time and there was no other latent-diffusion-based inverse problem solver (PSLD was the first framework to employ LDMs for inverse problems), we ran both DPS and PSLD in the same experimental setup. The follow-up paper P2L by Chung et al. (the lead author of DPS) also confirmed our hypothesis that SD-v1.4/1.5 (the LDM used in PSLD) is a stronger prior than the one used in DPS. Now that several LDM-based inverse problem solvers exist, we compare against LDM-based solvers in our recent work STSL.

Please let me know if this answers your question. I'd be happy to clarify any remaining concerns.