huggingface / peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
https://huggingface.co/docs/peft
Apache License 2.0

TST: Eva: Speed up consistency tests #2224

Closed BenjaminBossan closed 1 week ago

BenjaminBossan commented 1 week ago

The EVA consistency tests are currently among the slowest tests overall, taking roughly 4 × 50 sec out of an overall test runtime of 15-20 min, so they make up a significant fraction of that runtime.

With this PR, the number of iterations until convergence is reduced by passing a lower tau value. In local testing, the similarity threshold is still crossed and the tests pass, so this change should be good. On CI, the similarity threshold had to be lowered a bit for the tests to still pass.

Overall, this cuts the runtime to ~20 sec or less. They're still slow, but it's a noticeable difference.
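For illustration, the change boils down to passing a lower tau via EvaConfig, roughly like the following (a minimal sketch rather than the actual test code; the rho/tau values mirror the config quoted later in this thread):

```python
from peft import EvaConfig, LoraConfig

# A lower tau relaxes the convergence criterion for EVA's incremental SVD,
# so fewer iterations are needed before the components count as converged.
# rho=1, tau=0.9 mirror the EvaConfig quoted later in this thread.
eva_config = EvaConfig(rho=1, tau=0.9)

# EVA initialization is enabled through LoRA's init_lora_weights option.
lora_config = LoraConfig(init_lora_weights="eva", eva_config=eva_config)
```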

Besides this change, I made some smaller adjustments to EVA:

HuggingFaceDocBuilderDev commented 1 week ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

BenjaminBossan commented 1 week ago

@sirluk I wanted to speed up some of the slow EVA tests and this is what I came up with. Do these changes make sense to you? Do you have other suggestions for how to speed these tests up?

sirluk commented 1 week ago

@BenjaminBossan great, these changes definitely make sense. I think for testing purposes a lower tau value is reasonable. What we could maybe also do is set NUM_SEEDS to 2 instead of 3. In theory that should reduce the runtime of the affected tests to 2/3 of what it is now.

BenjaminBossan commented 1 week ago

> I think for testing purposes a lower tau value is reasonable. What we could maybe also do is set NUM_SEEDS to 2 instead of 3. In theory that should reduce the runtime of the affected tests to 2/3 of what it is now.

I was thinking about that but was unsure how important it would be for the validity of the tests. Unfortunately, going down to 2 seeds makes the tests fail because the similarity drops to 0.5720. This is surprising to me, as I thought that using 2 seeds would simply run a subset of the tests, but since the means are taken across seeds, that's not quite true.

When inspecting the generated cos sims, there appears to be quite a lot of variance. Do you think that's expected?

defaultdict(<class 'list'>,
            {'transformer.h.0.attn.c_attn': [0.990085647735472,
                                             0.9949693531521786,
                                             0.8864109886528887,
                                             0.4033661330781806,
                                             0.33826436771873547,
                                             0.8489517097148578,
                                             0.0921391729040838,
                                             0.0218558344106295,
                                             0.9903735585752069,
                                             0.9927730189958451,
                                             0.8528471954967656,
                                             0.9540084285193433,
                                             0.8420322307677242,
                                             0.7776223067186468,
                                             0.008977810376633059,
                                             0.0479187370264011,
                                             0.9953101063589287,
                                             0.9962984941144873,
                                             0.9183812804279174,
                                             0.4824854816180644,
                                             0.5407685750892638,
                                             0.8686988671309182,
                                             0.5281818067841444,
                                             0.40635619074373797],
             'transformer.h.1.attn.c_attn': [0.9801320362025273,
                                             0.9799141604045022,
                                             0.9896008624159045,
                                             0.9745239643651712,
                                             0.9574332264929228,
                                             0.9567110804101506,
                                             0.9361806918880903,
                                             0.15938577747671628,
                                             0.827231062270758,
                                             0.7823852328391434,
                                             0.9717784298487702,
                                             0.915248389435023,
                                             0.18227436640546596,
                                             0.13041979243320756,
                                             0.8459964120569107,
                                             0.1432219555510228,
                                             0.9122523895384267,
                                             0.8671367828874276,
                                             0.9844849159520411,
                                             0.8905986229210756,
                                             0.27301255955114545,
                                             0.2662485816177606,
                                             0.9100346301712139,
                                             0.08061927272559466]})

(this is for EvaConfig(rho=1, tau=0.9))
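To put a number on the spread, the per-layer values above can be summarized like this (cos_sims standing in for the defaultdict shown above; purely illustrative):

```python
import statistics

# cos_sims: the defaultdict shown above, mapping layer name to the cosine
# similarities collected across seed combinations.
for layer, sims in cos_sims.items():
    print(
        f"{layer}: mean={statistics.mean(sims):.4f}, "
        f"stdev={statistics.stdev(sims):.4f}, min={min(sims):.4f}"
    )
```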

BenjaminBossan commented 1 week ago

ping @sirluk

sirluk commented 1 week ago

@BenjaminBossan I will investigate this and get back to you

sirluk commented 1 week ago

@BenjaminBossan The reason is likely that singular vectors are only unique up to their sign between different runs. This does not affect performance, but it does affect the cosine similarity between runs. It is probably better for this test to compute the cosine similarity of the absolute values. That should fix the issue.
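A minimal sketch of the sign ambiguity and why absolute values make the comparison sign-invariant (illustrative only, not the actual test code):

```python
import torch
import torch.nn.functional as F

# If U, S, Vh is an SVD of X, flipping the sign of a column of U (together
# with the matching row of Vh) gives an equally valid decomposition. So the
# "same" singular vector from two runs may come out with opposite signs.
torch.manual_seed(0)
x = torch.randn(8, 8)
u, s, vh = torch.linalg.svd(x)

vec_run1 = u[:, 0]
vec_run2 = -u[:, 0]  # same singular vector, opposite sign

print(F.cosine_similarity(vec_run1, vec_run2, dim=0).item())              # -1.0
print(F.cosine_similarity(vec_run1.abs(), vec_run2.abs(), dim=0).item())  # 1.0
```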

BenjaminBossan commented 1 week ago

Thanks for figuring this out @sirluk. I made some adjustments; LMK if they make sense. I verified that with these changes, we get rid of the close-to-0 similarities we saw before.

sirluk commented 1 week ago

Thanks for improving the EVA tests, looks good!