Closed princepride closed 4 months ago
I think what you're saying is that they should use the 1.58-bit quantization for the backward pass as well? It's not really discussed in the paper for whatever reason, but 1.58-bit quantization destroys the gradient (rounding has zero gradient almost everywhere), making backprop through it impossible. So they keep the original full-precision weights for the backward pass, while using the quantized weights for the forward pass.
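A minimal sketch of what this looks like, assuming the absmean ternary quantization described in the BitNet b1.58 paper (the function name and example values here are my own, not from the repo):

```python
import numpy as np

def absmean_quantize(w, eps=1e-6):
    # Scale by the mean absolute value, then round and clip to the
    # ternary levels {-1, 0, +1} ("1.58 bits" = log2(3)).
    scale = np.mean(np.abs(w)) + eps
    return np.clip(np.round(w / scale), -1, 1), scale

# round() has zero gradient almost everywhere, so backprop through
# this function is useless. The straight-through estimator keeps the
# full-precision w and treats d(w_q)/d(w) as identity in the backward
# pass -- e.g. in PyTorch: w + (w_q - w).detach().
w = np.array([0.9, -0.05, 0.4, -1.2])
w_q, scale = absmean_quantize(w)
```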
https://github.com/kyegomez/BitNet/blob/914bad9ba188dfc32e34a0a0a9ee042d7962e604/bitnet/bitbnet_b158.py#L52
I noticed that you're attempting to implement 1.58-bit quantization, but it seems you only quantize the values during the forward pass and then use the original full-precision values for the backward pass. In 4-bit quantization, two quantized values are stored in a single byte, and the computation and gradients for the new data type are implemented in CUDA. You should consider this approach as well. Keep it up, I'm rooting for you.
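For the storage side of that suggestion, packing two 4-bit values into one byte can be sketched like this (a toy illustration, not the repo's code; a real kernel would do this vectorized on the GPU):

```python
def pack_nibbles(a, b):
    # Pack two 4-bit values (0..15) into one byte:
    # 'a' goes in the high nibble, 'b' in the low nibble.
    return ((a & 0xF) << 4) | (b & 0xF)

def unpack_nibbles(byte):
    # Recover the two 4-bit values from a packed byte.
    return (byte >> 4) & 0xF, byte & 0xF
```

The same idea extends to ternary weights: since each 1.58-bit weight only needs 2 bits ({-1, 0, +1} plus one unused code), four of them fit per byte.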