-
The paper mentions a codebook size of 4096 for all models, with 128/64/32 tokens for 256x256 images and 128/64 tokens for 512x512 images.
I was wondering why the example configuration in `README.md` and `titok.py` …
-
In the paper you said that you reset the codebook every 20 iterations to prevent codebook collapse. However, in the training loop https://github.com/exitudio/MMM/blob/68d850095a0640405c0d1deb1fc101f11…
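In case it helps to pin down what I mean, here is a rough sketch of the kind of periodic dead-code reset I was looking for in the training loop. The function and attribute names (`reset_dead_codes`, `quantizer.usage`) are my own placeholders, not from the repo:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def reset_dead_codes(codebook: nn.Embedding, usage: torch.Tensor, z_e: torch.Tensor) -> None:
    """Reinitialize codebook entries that were never selected in the last window.

    codebook : the VQ embedding table, shape (num_codes, dim)
    usage    : per-code hit counts accumulated since the previous reset
    z_e      : recent encoder outputs flattened to (num_vectors, dim)
    """
    dead = usage == 0
    n_dead = int(dead.sum())
    if n_dead > 0:
        # Resample dead entries from recent encoder outputs so they move back
        # into the region of latent space the encoder actually occupies.
        idx = torch.randint(0, z_e.shape[0], (n_dead,), device=z_e.device)
        codebook.weight.data[dead] = z_e[idx]
    usage.zero_()

# e.g. called every 20 iterations:
#     if (iteration + 1) % 20 == 0:
#         reset_dead_codes(quantizer.codebook, quantizer.usage, z_e.detach())
```

Is something like this present somewhere else in the code, or was the reset only used in an earlier version?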
-
Credit for idea: @e-kotov
-
How can I merge two codebooks into one codebook?
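For example, is simply concatenating the two embedding tables and offsetting the indices of the second one the right approach? A minimal PyTorch sketch of what I have in mind, assuming each codebook is stored as an `nn.Embedding`:

```python
import torch
import torch.nn as nn

def merge_codebooks(cb_a: nn.Embedding, cb_b: nn.Embedding) -> nn.Embedding:
    """Concatenate two codebooks with the same embedding dim into one table.

    Indices from the first codebook keep their values; indices that refer to
    the second codebook must be offset by cb_a.num_embeddings.
    """
    assert cb_a.embedding_dim == cb_b.embedding_dim
    merged = nn.Embedding(cb_a.num_embeddings + cb_b.num_embeddings,
                          cb_a.embedding_dim)
    with torch.no_grad():
        merged.weight[:cb_a.num_embeddings] = cb_a.weight
        merged.weight[cb_a.num_embeddings:] = cb_b.weight
    return merged
```

Or is some deduplication / re-clustering of near-identical codes needed after merging?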
-
Thanks for your work; I'm very interested in it. I tried to reproduce the Dynamic Visual Tokenizer, but the reconstruction loss stays around 0.3. Could you give me some suggestions for training? Thanks.
-
Very nice work! Have you compared VQGAN-LC with other quantizers like [FSQ](https://arxiv.org/abs/2309.15505) or [LFQ](https://arxiv.org/abs/2310.05737), which can also achieve high codebook utilizati…
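For context, FSQ drops the learned codebook entirely and rounds each latent dimension onto a small fixed grid, so the implicit codebook is fully used by construction. A rough sketch of the idea as I understand it (my own simplification using odd level counts, not code from either paper's repo):

```python
import torch

def fsq_quantize(z: torch.Tensor, levels=(7, 5, 5, 5)) -> torch.Tensor:
    """Finite Scalar Quantization sketch: each latent dimension is bounded and
    rounded to a small fixed grid; the implicit codebook size is the product
    of the per-dimension level counts (7*5*5*5 = 875 here)."""
    lv = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (lv - 1) / 2                 # odd level counts keep the grid symmetric
    bounded = torch.tanh(z) * half      # squash each dim into (-half, half)
    quantized = torch.round(bounded)    # snap to the integer grid
    # straight-through estimator: quantized forward, identity gradient backward
    return bounded + (quantized - bounded).detach()
```

It would be interesting to see how that compares to the extended learned codebook in terms of rFID and utilization.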
-
Hello, based on the information you provided, I successfully replicated the model, and the results are excellent! In your article, you mentioned the function of the codebook 'to learn to cluster the a…
-
Thank you for conducting and sharing such good research!
I couldn't find anything in the current code that corresponds to the codebook. Is the codebook code available by any chance? I have additional que…
-
Currently, `codebook` uses the `flextable` package to create codebooks as a Word document. I would like to add the option of creating codebooks as HTML documents that can be hosted on, for example, Gi…
-
What is the codebook size / vocab size for the encoded SNAC data for the various models?