QData / TextAttack

TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
https://textattack.readthedocs.io/en/master/
MIT License

(Bug) Update sentence_encoder.py: clamping cos_sim between -1 and 1 to avoid floating point precision errors in torch.acos(cos_sim) #804

Open Aniloid2 opened 2 months ago

Aniloid2 commented 2 months ago

What does this PR do?

Summary

If we compare two equal embeddings, emb1 == emb2, the cosine similarity should be exactly 1. However, due to floating-point precision, we may end up with a value slightly greater than 1, such as 1.00004. Since torch.acos(cos_sim) is only defined on [-1, 1], it then returns NaN, causing get_angular_sim to return NaN instead of 1. By using cos_sim = torch.clamp(cos_sim, -1.0, 1.0), we ensure that cos_sim stays within the valid range expected by torch.acos(cos_sim).
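For reference, a minimal sketch of the patched helper. The exact body of get_angular_sim in sentence_encoder.py may differ slightly; the proposed change itself is only the single torch.clamp call before torch.acos.

```python
import math

import torch


def get_angular_sim(emb1, emb2):
    """Angular similarity between two batches of embeddings (sketch).

    Assumes the helper computes 1 - acos(cos_sim) / pi, as described above;
    only the torch.clamp line is the change proposed in this PR.
    """
    cos_sim = torch.nn.CosineSimilarity(dim=1)(emb1, emb2)
    # Floating-point error can push cos_sim slightly outside [-1, 1]
    # (e.g. 1.00004 for identical embeddings), making acos return NaN.
    cos_sim = torch.clamp(cos_sim, -1.0, 1.0)
    return 1 - (torch.acos(cos_sim) / math.pi)
```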

I am using TextAttack to perform attacks on LLMs. For testing, I mostly run custom attacks that produce different embeddings, emb1 and emb2. Occasionally, my attacks do not change any words, but due to the internal randomness of LLMs during the attack search, a second inference step results in a misclassification. Because the two samples are identical, the USE metric evaluation should yield a cosine similarity of 1, even though the model classifies them differently. Instead, I encounter NaN values after the USE evaluation, and I traced the issue to floating-point precision.
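A minimal, TextAttack-independent reproduction of the failure mode (the 1.00004 value stands in for the floating-point overshoot observed on identical embeddings):

```python
import math

import torch

# Identical embeddings should give a cosine similarity of exactly 1.0,
# but float32 accumulation can overshoot slightly, e.g. 1.00004.
cos_sim = torch.tensor(1.00004)

print(torch.acos(cos_sim))                # tensor(nan): acos is undefined above 1
print(1 - torch.acos(cos_sim) / math.pi)  # NaN instead of the expected 1.0

# With the proposed clamp the result is well defined again.
clamped = torch.clamp(cos_sim, -1.0, 1.0)
print(1 - torch.acos(clamped) / math.pi)  # tensor(1.)
```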

Additions

Changes

Clamp cos_sim with torch.clamp(cos_sim, -1.0, 1.0) before calling torch.acos(cos_sim) in get_angular_sim (sentence_encoder.py).

Deletions

Checklist