kyegomez / BitNet

Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch

What is the purpose of detach here? #51

Closed Weitian-Wang-Bosch closed 5 months ago

Weitian-Wang-Bosch commented 5 months ago

Hi @kyegomez. Thanks for your work. When reading your code, I'm a bit confused about the purpose of these two lines: https://github.com/kyegomez/BitNet/blob/9c3e7dc5dcdc65e54b01b26113e17ac664bbfb5e/bitnet/bitlinear.py#L57-L58 Could you briefly explain them?


Mrw33554432 commented 5 months ago

Ask ChatGPT about STE

Weitian-Wang-Bosch commented 5 months ago

> Ask ChatGPT about STE

Thanks for the advice. It actually works much better than googling it. I'll paste the answer below for anyone wondering about the same question:

"STE" typically stands for "Straight-Through Estimator." STE is a technique used during the training of neural networks, particularly in scenarios where there are discrete inputs or outputs, such as in quantized neural networks.

When gradients are backpropagated through discrete operations, such as rounding or quantization, the gradients are usually undefined or zero, which can cause training instabilities. STE addresses this issue by using a straight-through approximation during backpropagation.

Essentially, during forward propagation the input is passed through the discrete operation, but during backpropagation the gradient is passed straight through without modification. This allows gradients to flow through the network properly, enabling effective training even with discrete operations.
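
To see how `detach` implements this concretely, here is a minimal, self-contained sketch of the STE trick in PyTorch. The `ste_round` name is illustrative, not the function in `bitlinear.py`, and plain rounding stands in for whatever quantizer the layer actually uses:

```python
import torch

def ste_round(x: torch.Tensor) -> torch.Tensor:
    # Forward: the x terms cancel, so this returns x.round().
    # Backward: the detached term is treated as a constant, so the
    # gradient w.r.t. x is that of the identity function; it flows
    # "straight through" the non-differentiable rounding.
    return x + (x.round() - x).detach()

x = torch.randn(4, requires_grad=True)
y = ste_round(x)   # forward value equals x.round()
y.sum().backward()
print(x.grad)      # tensor([1., 1., 1., 1.]): the identity gradient
```

Since the detached term contributes nothing to the gradient, autograd differentiates only the leading `x`, i.e. the identity, which is exactly the straight-through approximation described above.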