Open hudson-ai opened 1 month ago
:warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.
Attention: Patch coverage is `31.25000%` with `22 lines` in your changes missing coverage. Please review.

Project coverage is 63.53%. Comparing base (`917fe35`) to head (`8d17370`).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| guidance/models/transformers/_transformers.py | 31.25% | 22 Missing :warning: |
All model-specific tests in the CI Tests workflow are passing. The general tests are failing due to Azure cloud auth issues -- I'm not sure I have the permissions to rerun them. But I believe all is fine and dandy with the PR.
Changes since submitting PR: now calling `Cache.reset` when possible in order to avoid reallocating the cache. This also prevents the cache-doubling from resetting. @paulbkoch any feedback?
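The reset-instead-of-reallocate idea can be sketched roughly as follows. This is not the PR's actual code -- the `DoublingCache` class and `fresh_cache` helper are toy stand-ins for illustration; only the control flow (prefer an in-place `reset` so the grown capacity survives) comes from the description above.

```python
# Hypothetical sketch: reuse an existing KV cache via reset() when the
# implementation supports it, instead of allocating a fresh one. Resetting
# in place means the capacity gained by "cache doubling" is not thrown away.

class DoublingCache:
    """Toy stand-in for a cache that grows its backing storage by doubling."""
    def __init__(self):
        self.capacity = 16
        self.length = 0

    def append(self, n_tokens):
        self.length += n_tokens
        while self.capacity < self.length:
            self.capacity *= 2  # doubling growth strategy

    def reset(self):
        # Clear the contents but keep the already-allocated capacity.
        self.length = 0

def fresh_cache(cache):
    # Prefer an in-place reset if the cache supports it;
    # otherwise fall back to reallocating.
    if hasattr(cache, "reset"):
        cache.reset()
        return cache
    return DoublingCache()

cache = DoublingCache()
cache.append(100)           # capacity grows 16 -> 32 -> 64 -> 128
cache = fresh_cache(cache)  # reset in place: length 0, capacity still 128
```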
Build currently fails due to gemma2's usage of a `HybridCache`, which doesn't support tuple slicing like the friendlier `DynamicCache`.
"Fixing" this issue (just throwing the cache away...) immediately uncovered another one -- the HybridCache has a maximum size. If we don't set this manually, it is set to the sequence length of the first token sequence the model is called with. Trying to do another forward pass with more tokens leads to exceptions deep down inside of gemma's implementation. Current "fix" is to again... throw the cache away.
Hoping for something more elegant. But I don't think this is too insane for now.
Note: now taking advantage of `Cache.crop` for cache implementations that support it. This should prevent conversion back and forth from the "legacy" cache format that we previously assumed. (Should fix #986.)
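The crop-when-available path can be sketched like this. `CroppableCache` and `trim_cache` are hypothetical illustrations; only the dispatch (use `crop` in place when the cache has one, otherwise fall back to the legacy tuple-slicing conversion) reflects the note above.

```python
# Illustrative sketch: prefer cropping the cache in place over converting to
# the legacy tuple-of-tuples format, slicing, and converting back.

class CroppableCache:
    """Toy stand-in for a cache implementation that supports crop()."""
    def __init__(self, length):
        self.length = length

    def crop(self, max_length):
        # Drop all cached entries past max_length, in place.
        self.length = min(self.length, max_length)

def trim_cache(cache, keep_len):
    crop = getattr(cache, "crop", None)
    if crop is not None:
        crop(keep_len)  # fast path: no format conversion needed
        return cache
    # Legacy path: convert to tuples, slice, convert back (elided here).
    raise NotImplementedError("legacy tuple-slicing fallback")

cache = trim_cache(CroppableCache(50), keep_len=20)
```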