Open wzmsltw opened 11 months ago
I'm currently working on implementing the entropy penalty in the code. For reference, I suggest checking out this section from the vector-quantize-pytorch repository
I'm not entirely confident about this part either. Perhaps we can discuss it.
lucidrains's implementation seems good; I can try his code first.
Hi, I have successfully trained LFQ with my own implementation (similar to yours) without the entropy penalty, but I failed to implement the entropy penalty mentioned in the paper. Will you release this part? (Or do you have any ideas on how to implement it?)
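For reference, a minimal sketch of the entropy penalty as described in the MAGVIT-v2 paper: minimize the per-sample assignment entropy (so each latent commits to one code) while maximizing the entropy of the batch-averaged assignment distribution (so the full codebook gets used). This assumes you already have per-code logits (e.g. negative distances or similarities to each codebook entry); the function name, `temperature`, and `eps` are my own choices, not from the paper or this repo:

```python
import torch
import torch.nn.functional as F

def entropy_penalty(logits: torch.Tensor, temperature: float = 0.01, eps: float = 1e-8) -> torch.Tensor:
    """logits: (batch, num_codes) similarity of each latent to each codebook entry.

    Returns E[H(q(z|x))] - H(E[q(z|x)]): low when assignments are confident
    AND spread across the codebook. Hypothetical sketch, not the official code.
    """
    probs = F.softmax(logits / temperature, dim=-1)
    log_probs = F.log_softmax(logits / temperature, dim=-1)

    # Per-sample entropy: we want each sample's assignment to be near one-hot (low entropy).
    per_sample_entropy = -(probs * log_probs).sum(dim=-1).mean()

    # Codebook-usage entropy: average the assignment distribution over the batch,
    # then take its entropy; we want this HIGH (uniform codebook usage), so it is subtracted.
    avg_probs = probs.mean(dim=0)
    codebook_entropy = -(avg_probs * (avg_probs + eps).log()).sum()

    return per_sample_entropy - codebook_entropy
```

With confident, diverse assignments the penalty approaches `-log(num_codes)`; if every sample collapses onto the same code, the codebook-entropy term vanishes and the penalty rises toward zero, which is what pushes training away from codebook collapse.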