kyegomez / BitNet

Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch

Recommendation for the 1.58-bit algorithm implementation #46

Closed: princepride closed this issue 4 months ago

princepride commented 6 months ago

https://github.com/kyegomez/BitNet/blob/914bad9ba188dfc32e34a0a0a9ee042d7962e604/bitnet/bitbnet_b158.py#L52

I noticed that you're attempting to implement 1.58-bit quantization, but it seems you only quantize the values during the forward pass and then run model inference, using the original values for the backward pass. In 4-bit quantization, two quantized values are stored in one byte, and the computation and gradients for the new data type are implemented with CUDA kernels. You should consider this approach as well. Keep it up, I'm rooting for you.
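
For reference, a minimal sketch (not from this repo) of the two-values-per-byte packing idea mentioned above, using plain PyTorch tensor ops. `pack_int4` / `unpack_int4` are hypothetical helper names; a real implementation would also need dedicated CUDA kernels to compute directly on the packed format.

```python
import torch

def pack_int4(x: torch.Tensor) -> torch.Tensor:
    """Pack unsigned 4-bit values (range 0..15, even-length last dim) into uint8."""
    x = x.to(torch.uint8)
    lo, hi = x[..., 0::2], x[..., 1::2]   # low / high nibble candidates
    return lo | (hi << 4)                 # one byte now holds two values

def unpack_int4(packed: torch.Tensor) -> torch.Tensor:
    """Recover the original 4-bit values from the packed uint8 tensor."""
    lo = packed & 0x0F
    hi = (packed >> 4) & 0x0F
    return torch.stack((lo, hi), dim=-1).flatten(-2)

vals = torch.randint(0, 16, (2, 8))
packed = pack_int4(vals)                  # half the storage of one-value-per-byte
assert torch.equal(unpack_int4(packed), vals.to(torch.uint8))
```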

Upvote & Fund

Fund with Polar

AwaitFuture commented 6 months ago

I think what you're saying is that they should use the 1.58-bit quantization for the backward pass as well? It's not really discussed in the paper for whatever reason, but the 1.58-bit quantization destroys the gradient (the rounding step has zero gradient almost everywhere), making backprop through it impossible, so they keep the original full-precision weights for backprop while using the quantized weights for the forward pass.
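
For anyone reading along, a minimal PyTorch sketch of that trick (the straight-through estimator): quantized weights in the forward pass, full-precision weights receiving the gradients. `weight_quant` and `STELinear` are illustrative names, and the absmean quantizer here is an assumption, not necessarily the exact code in bitnet/bitbnet_b158.py.

```python
import torch

def weight_quant(w: torch.Tensor) -> torch.Tensor:
    """Ternary {-1, 0, +1} quantization with an absmean scale (illustrative)."""
    scale = w.abs().mean().clamp(min=1e-5)
    return (w / scale).round().clamp(-1, 1) * scale

class STELinear(torch.nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Forward uses the quantized weights; the detach() makes the backward
        # pass see an identity, so gradients flow to the full-precision weights.
        w_q = w + (weight_quant(w) - w).detach()
        return torch.nn.functional.linear(x, w_q, self.bias)

layer = STELinear(16, 16)
out = layer(torch.randn(4, 16))
out.sum().backward()
print(layer.weight.grad is not None)   # True: gradients reach the fp weights
```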

github-actions[bot] commented 4 months ago

Stale issue message