lucidrains / vector-quantize-pytorch

Vector (and Scalar) Quantization, in Pytorch
MIT License

FSQ: indices of codes out of range #107

Closed · wanghao14 closed this 4 months ago

wanghao14 commented 4 months ago

Hi, thanks for this high-quality codebase.

Recently, I've been using your FSQ implementation with the levels set to [8, 8, 8, 6, 5]. During testing, I encountered an unexpected issue where the computed indices exceed the range of the codebook: these levels give a codebook of 8 × 8 × 8 × 6 × 5 = 15360 codes, so indices should fall in 0 to 15359, yet I observed indices reaching 15360.

I'm reaching out for guidance on debugging this. One possible culprit is that my model, of which FSQ is a component, uses torch.bfloat16 as its data type.
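
For reference, a minimal sketch of how I'm calling it, in float32 (following the README usage; the assertion just encodes the range I expect):

```python
import torch
from vector_quantize_pytorch import FSQ

levels = [8, 8, 8, 6, 5]                  # codebook size = 8*8*8*6*5 = 15360
quantizer = FSQ(levels)

x = torch.randn(1, 1024, len(levels))     # feature dim must equal the number of levels
xhat, indices = quantizer(x)

# every index should fall inside [0, 15359] -- in my bfloat16 model it does not
assert indices.min() >= 0 and indices.max() <= 15359
```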

Any suggestions or assistance you could provide would be greatly appreciated.

lucidrains commented 4 months ago

could you share a script that reproduces the error?

wanghao14 commented 4 months ago

Sorry for the late response. Upon further investigation, I found that the issue was due to incorrect type casting of tensors in my own code, rather than a bug in your implementation.
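
Concretely, for anyone who hits the same symptom: bfloat16 keeps only 8 significant bits, so integers near 15360 are no longer exactly representable, and casting indices through it produces exactly this off-by-one. A minimal illustration (the exact cast in my code differed, but the effect was the same):

```python
import torch

idx = torch.tensor([15359])      # largest valid index for levels [8, 8, 8, 6, 5]
print(idx.bfloat16().long())     # tensor([15360]): 15359 needs 14 significant bits,
                                 # bfloat16 keeps only 8, so the value rounds up
```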

lucidrains commented 4 months ago

@wanghao14 ah great! had me worried for a bit

are you seeing the best results with FSQ? tried any of the other scalar quants?

wanghao14 commented 4 months ago

@lucidrains Hi, sorry for the false alarm. I have tried VQ-VAE along with some of the optimization strategies from the README, such as a lower codebook dimension and cosine-similarity codebooks, but none of these yielded performance comparable to FSQ on my task.
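
For reference, those two strategies correspond to constructor kwargs on this library's VectorQuantize; a sketch of roughly what I tried (the dim and codebook_size values here are illustrative):

```python
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,                # illustrative model dimension
    codebook_size = 15360,    # matched to the FSQ levels above, for comparison
    codebook_dim = 16,        # lower codebook dimension via projection
    use_cosine_sim = True     # cosine-similarity (normalized) codebook
)
```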

lucidrains commented 4 months ago

@wanghao14 that's amazing and what i wanted to hear! future of scalar quantization is bright! (and so much simpler and less janky than vector quantization)

JohnHerry commented 1 month ago

> (quoting @wanghao14's original report above: indices reaching 15360 with levels [8, 8, 8, 6, 5], against an expected range of 0 to 15359)

so, what is the designed value range of codebook indices in this project? [0, 15359] or [1, 15360]?
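
For context, the size implied by those levels, assuming zero-based flat indices over the product of the levels (as the original report above describes):

```python
import math

levels = [8, 8, 8, 6, 5]
codebook_size = math.prod(levels)   # 15360 distinct codes
# zero-based flat indices then span [0, codebook_size - 1] = [0, 15359]
```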