Closed: denadai2 closed this issue 4 months ago.
@denadai2 hey Marco! thanks for testing out this new quantizer
could you see if the latest version fixes your issue?
@denadai2 are you seeing a lot of success with this technique?
zero success until now ahah but I'll keep you updated! What about you?
@denadai2 i'm seeing better results with FSQ
but i haven't tried this spherical flavor on any real data just yet
I see! I'll try that one as well. I just have to better understand the paper :P
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "/tmp/ray/session_2024-06-30_09-41-50_254745_1/runtime_resources/pip/b1dd9d9db9545febf3d5ce2059c5b9fc44317bfb/virtualenv/lib/python3.10/site-packages/vector_quantize_pytorch/lookup_free_quantization.py", line 321, in forward
distance = -2 * einsum('... i d, j d -> ... i j', original_input, codebook)
File "/opt/conda/lib/python3.10/site-packages/torch/functional.py", line 380, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: expected scalar type Float but found Half
It kind of works better than before, but we still run into the error above.
Question: this https://github.com/lucidrains/vector-quantize-pytorch/blob/1bce1c3b80296f64612f808942460c3a955dec3f/vector_quantize_pytorch/lookup_free_quantization.py#L244 disables autocast, while the recent #145 enables it by default. Should we enable it only when autocast is already enabled in the surrounding context?
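Not the library's code, just a minimal sketch of what "only step out of autocast when the caller actually has it on" could look like; `codebook_distance` is a hypothetical helper, and the einsum equation is the one from the traceback above:

```python
import torch
from torch import einsum

def codebook_distance(original_input, codebook):
    # hypothetical helper, not vector_quantize_pytorch's actual code
    if torch.is_autocast_enabled():
        # the caller runs under autocast: step out of it and compute the
        # distance in float32, so Float and Half never meet inside the einsum
        with torch.autocast(device_type = original_input.device.type, enabled = False):
            return -2 * einsum('... i d, j d -> ... i j',
                               original_input.float(), codebook.float())
    # no autocast in the caller: nothing to disable, just unify the dtypes
    return -2 * einsum('... i d, j d -> ... i j',
                       original_input, codebook.to(original_input.dtype))
```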
@denadai2 oops, try one more time?
@denadai2 working?
@denadai2 also try latest version with this setting turned False
Thanks @lucidrains!! I'll test it tomorrow or the day after. I got caught up with bugs in the pipeline before this, eheh.
woooorkiiiinnggg! thx @lucidrains . I'll keep you updated with the exps :))
@denadai2 happy training Marco!
Sorry for opening this again, but while LFQ works with AMP, FSQ still doesn't for me due to this line. I still get: RuntimeError: mat1 and mat2 must have the same dtype, but got Float and BFloat16. Removing the line resolves the issue.
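For anyone hitting the same thing, this is the kind of minimal check I mean, assuming the `FSQ(levels = ...)` constructor from the repo's README (shapes are arbitrary); on the affected version the call under autocast raises the mat1/mat2 dtype error:

```python
import torch
from vector_quantize_pytorch import FSQ

# FSQ(levels = ...) follows the repo's README; the input's last dim matches len(levels)
quantizer = FSQ(levels = [8, 5, 5, 5]).cuda()
x = torch.randn(1, 1024, 4, device = 'cuda')

with torch.autocast(device_type = 'cuda', dtype = torch.bfloat16):
    xhat, indices = quantizer(x)  # raised the dtype mismatch on the affected version

print(xhat.shape, indices.shape)
```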
@hummat i think you are on an older version of the library
@lucidrains thanks for the quick reply. I'm on 1.17.1 from pip, but even the master branch has it, right?
@hummat oh oops, that should have been removed a long time ago
could you try 1.17.3?
@lucidrains aha, yes, in 1.17.3 it's fine :)
I believe there is a similar problem to #116.
thxxx
PS: I'd like to run it in float32 inside a bfloat16 module under FSDP, but I don't know how.
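In case it helps, one way I'd sketch it (a hypothetical wrapper, not something the library provides): keep the quantizer's parameters in float32 and run its forward outside of autocast, casting at the boundary. Keeping FSDP's bfloat16 mixed-precision policy from re-casting those parameters is a separate step that depends on the FSDP version (e.g. wrapping the quantizer as its own unit without a param_dtype override), so treat this as a starting point only.

```python
import torch
import torch.nn as nn

class Float32Quantizer(nn.Module):
    """Hypothetical wrapper: keeps the wrapped quantizer in float32 and runs it
    outside of autocast, so it can sit inside an otherwise bfloat16 module.
    Assumes the wrapped module returns (out, indices), as FSQ does."""

    def __init__(self, quantizer: nn.Module):
        super().__init__()
        self.quantizer = quantizer.float()

    def forward(self, x):
        with torch.autocast(device_type = x.device.type, enabled = False):
            out, indices = self.quantizer(x.float())
        # cast back so the surrounding bfloat16 layers see the dtype they expect
        return out.to(x.dtype), indices
```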