## What does this PR do?

### Summary

If we compare two equal embeddings, `emb1 == emb2`, the cosine similarity should be exactly 1. However, due to floating-point precision we can end up with a value slightly greater than 1, such as 1.00004. `torch.acos(cos_sim)` is undefined outside [-1, 1] and returns NaN, so `get_angular_sim` returns NaN instead of 1. Clamping with `cos_sim = torch.clamp(cos_sim, -1.0, 1.0)` keeps `cos_sim` within the valid input range of `torch.acos`.
I am using TextAttack to run attacks on LLMs. For testing, I mostly run custom attacks that produce different embeddings, `emb1` and `emb2`. Occasionally an attack does not change any words, yet because of the internal randomness of LLMs during the attack search, a second inference step results in a misclassification. Since the two samples are identical but classified differently, the USE metric evaluation should yield a cosine similarity of 1; instead it produces NaN values, which I traced back to floating-point precision.
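For context, here is a minimal sketch of the failure mode and the fix. The `get_angular_sim` below is a simplified stand-in that mirrors the shape of the helper this PR touches, not the exact TextAttack source:

```python
import math

import torch


def get_angular_sim(emb1, emb2):
    # Simplified stand-in for TextAttack's angular-similarity helper.
    cos_sim = torch.nn.CosineSimilarity(dim=0)(emb1, emb2)
    # Floating-point error can push cos_sim slightly outside [-1, 1]
    # (e.g. 1.00004); torch.acos returns NaN for such inputs, so we
    # clamp before taking the arccosine.
    cos_sim = torch.clamp(cos_sim, -1.0, 1.0)
    return 1 - (torch.acos(cos_sim) / math.pi)


emb = torch.randn(512)
print(get_angular_sim(emb, emb))  # tensor(1.); may be NaN without the clamp
```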
### Additions

- Added a `torch.clamp` call to avoid floating-point precision errors (see the sketch above).
### Changes

No changes.

### Deletions

No deletions made.
## Checklist

- [x] The title of your pull request should be a summary of its contribution.
- [x] Please write a detailed description of what parts have been newly added and what parts have been modified. Please also explain why certain changes were made.
- [x] If your pull request addresses an issue, please mention the issue number in the pull request description to make sure they are linked (and people consulting the issue know you are working on it).
- [x] To indicate a work in progress please mark it as a draft on GitHub.
- [x] Make sure existing tests pass.
- [x] Add relevant tests. No quality testing = no merge.
- [x] All public methods must have informative docstrings that work nicely with sphinx. For new modules/files, please add/modify the appropriate .rst file in TextAttack/docs/apidoc.