Hello @PoisonKiller,
In https://arxiv.org/pdf/2310.10325, Appendix A (Inference Speed), the authors state:
"We run benchmarking on A100 GPUs using 20 denoising steps, and 5 denoising steps for bitrates higher than 0.05 bits per pixel. We compute the runtime for all the Kodak images at resolution 512 × 768 and provide the average time in seconds with its standard deviation in Table 3."
However, it is not clear whether entropy coding is included in this measurement. It is also not clear which implementation is used for entropy coding (e.g., we use https://github.com/fab-jul/torchac). For an exact comparison, I recommend contacting the authors directly. Feel free to share the details here.
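For orientation, this is roughly how torchac's float-CDF interface is used; the uniform CDF below is made up purely for illustration and is not what PerCo (SD) uses:

```python
import torch
import torchac

# Toy example: 100 symbols drawn from {0, 1, 2, 3} (int16 is required).
sym = torch.randint(0, 4, (1, 100), dtype=torch.int16)
# CDF of shape (1, 100, L+1), non-decreasing from 0 to 1 at each position.
cdf = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]).repeat(1, 100, 1)

byte_stream = torchac.encode_float_cdf(cdf, sym, check_input_bounds=True)
decoded = torchac.decode_float_cdf(cdf, byte_stream)
assert torch.equal(decoded, sym)  # lossless round trip
```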
To get a rough idea, you can measure the inference speed using the compress and decompress functionality, for example along the lines of the sketch below. Please note that PerCo (SD) is a reference implementation that is not particularly optimized for inference speed.
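A minimal timing sketch; `model.compress` and `model.decompress` here are placeholders for whatever entry points the codebase actually exposes, not confirmed names:

```python
import time
import torch

def sync():
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # CUDA calls are async; flush before reading the clock

@torch.no_grad()
def timed_roundtrip(model, image):
    sync()
    t0 = time.perf_counter()
    bitstream = model.compress(image)    # placeholder encoder entry point
    sync()
    t_enc = time.perf_counter() - t0

    t0 = time.perf_counter()
    recon = model.decompress(bitstream)  # placeholder decoder entry point
    sync()
    t_dec = time.perf_counter() - t0
    return t_enc, t_dec, recon
```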
Regarding your other question: we do not offer batch functionality at this point, although it should generally be possible, along the lines of the sketch below.
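A hypothetical sketch of what batched evaluation could look like; it assumes a dataset yielding (3, H, W) tensors and a `model.compress` that accepts batched (B, 3, H, W) input, which the current reference implementation does not provide:

```python
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def batched_compress(model, dataset, batch_size=8):
    loader = DataLoader(dataset, batch_size=batch_size, num_workers=4)
    streams = []
    for batch in loader:                       # batch: (B, 3, H, W)
        batch = batch.to("cuda", non_blocking=True)
        streams.extend(model.compress(batch))  # assumes one stream per sample
    return streams
```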
Hope this helps! Nikolai
Thanks for your impressive and interesting work! I have some questions and look forward to your reply!
How should I track the Encoding Speed (in sec.) and Decoding Speed (in sec.) in the evaluate_ds function in compression_utils.py? My setup is:

encoding begin
(1) Read image
(2) Get image caption (BLIP-2)
(3) Compress caption (zlib), just to measure bpp (text)
(4) Run VAE encoder
(5) Run hyper-encoder
(6) Compress hyper-latent (AC), just to measure bpp (hyper-latent)
encoding end

decoding begin
(7) Generate reconstruction (skipping compress -> decompress for speed reasons; the logic remains correct)
decoding end

I want to know whether these settings for recording time are correct. A sketch of my timing harness follows.
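Concretely, this is how I record the two phases; `get_caption_blip2`, `vae_encode`, `hyper_encode`, `ac_encode`, and `generate` are placeholder names, not the actual functions in compression_utils.py:

```python
import time
import zlib
import torch

def sync():
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # flush async CUDA work before timing

@torch.no_grad()
def time_one_image(model, image):
    # --- encoding: steps (2)-(6) ---
    sync()
    t0 = time.perf_counter()
    caption = get_caption_blip2(image)               # (2) BLIP-2 caption
    caption_bytes = zlib.compress(caption.encode())  # (3) bpp (text) only
    latent = vae_encode(model, image)                # (4) VAE encoder
    hyper = hyper_encode(model, latent)              # (5) hyper-encoder
    hyper_bytes = ac_encode(hyper)                   # (6) bpp (hyper-latent) only
    sync()
    t_enc = time.perf_counter() - t0

    # --- decoding: step (7), feeding latents directly (no bitstream round trip) ---
    t0 = time.perf_counter()
    recon = generate(model, latent, caption)         # (7) reconstruction
    sync()
    t_dec = time.perf_counter() - t0
    return t_enc, t_dec, recon
```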
In compression_utils.py, when I use the evaluate_ds function to evaluate coco30k, I find that only one image can be compressed and decompressed at a time. So I want to know whether I can perform batch compression and decompression.
Thanks again!