AutoResearch / autodoc


Load models from huggingface instead of blob storage #22

Closed: carlosgjs closed this 8 months ago

carlosgjs commented 9 months ago

During testing, I found that loading models from the HuggingFace hub is faster, likely due to additional optimization and parallel downloads versus loading from blob storage mounted as a file system. This also simplifies testing different models, since they don't need to be copied over to blob storage first.

A few opportunistic small changes are included too.
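For illustration, a minimal sketch of the two loading paths. The model id `facebook/bart-large` and the mount point `/mnt/blob/models` are hypothetical placeholders, not taken from this PR:

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical model id and blob-storage mount point, for illustration only.
HUB_ID = "facebook/bart-large"
BLOB_PATH = "/mnt/blob/models/bart-large"

# Before: load from blob storage mounted as a file system.
# Every weight file is read through the file-system mount.
model = AutoModel.from_pretrained(BLOB_PATH)

# After: load directly from the HuggingFace hub.
# Files are downloaded (and cached locally) by the hub client,
# and trying a different model is just a matter of changing the id.
model = AutoModel.from_pretrained(HUB_ID)
tokenizer = AutoTokenizer.from_pretrained(HUB_ID)
```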

codecov[bot] commented 9 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparing base (45bd148, 95.83%) to head (8b029a1, 95.83%).

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main      #22   +/-   ##
=======================================
  Coverage   95.83%   95.83%
=======================================
  Files           3        3
  Lines         120      120
=======================================
  Hits          115      115
  Misses          5        5
```

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

uwcdc commented 8 months ago

I agree with @lsetiawan. Everything looks good; just take a look at the PyPI publishing.

codecov-commenter commented 8 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Comparing base (45bd148, 95.83%) to head (9f1bb61, 95.83%).

Additional details and impacted files

```diff
@@           Coverage Diff           @@
##             main      #22   +/-   ##
=======================================
  Coverage   95.83%   95.83%
=======================================
  Files           3        3
  Lines         120      120
=======================================
  Hits          115      115
  Misses          5        5
```

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.