huggingface / huggingface_hub

The official Python client for the Huggingface Hub.
https://huggingface.co/docs/huggingface_hub
Apache License 2.0

Make the fastai integration test more robust by checking that pushed and reloaded fastai Learners are the same #771

Open omarespejel opened 2 years ago

omarespejel commented 2 years ago

Description

Test that a fastai Learner pushed to the Hub is the same when we load it back from the Hub; that is, it keeps its properties after being pushed and loaded.

How

The change would be in the test_fastai_integration.py test. @muellerzr offered some good advice for the next steps:

... this is how I'd recommend checking Learners, because we need to ensure two things line up: the model weights and the transform pipeline. (The first is real code; the second is pseudocode off the top of my head.)

The model itself lives in learn.model. So save/load the weights and make sure that all the params in learn.model.parameters() align, etc. (Look at saving/loading in Transformers or Accelerate to see how we can do this.)

For the preprocessing, we should make sure all the transforms in learn.dls.after_batch and learn.dls.after_item are exactly the same. Not too sure how that can be pulled off easily programmatically, but if you can't quite get there, let me know, Omar, and we can pair program to figure it out.

_Originally posted by @muellerzr in https://github.com/huggingface/huggingface_hub/pull/678#issuecomment-1054321458_
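The two checks suggested above could be sketched roughly as follows. This is a minimal sketch, not the actual test: tensors are stood in for by plain lists of floats so the comparison logic is visible without fastai/torch installed, and names like `params_a` / `tfms_a` are hypothetical placeholders. In the real test, the weight check would iterate `learn.model.parameters()` and compare tensor pairs with `torch.allclose`, and the pipelines would come from `learn.dls.after_item` and `learn.dls.after_batch`.

```python
def weights_match(params_a, params_b, tol=1e-6):
    """Compare two iterables of flat parameter vectors elementwise.

    Stand-in for iterating learn.model.parameters() on the pushed and
    reloaded Learners and calling torch.allclose on each tensor pair.
    """
    params_a, params_b = list(params_a), list(params_b)
    if len(params_a) != len(params_b):
        # Different number of parameter groups -> different models.
        return False
    return all(
        len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))
        for a, b in zip(params_a, params_b)
    )


def transforms_match(tfms_a, tfms_b):
    """Compare two transform pipelines by order and repr.

    Comparing the repr of each transform is a pragmatic equality proxy,
    since fastai transforms don't generally define __eq__.
    """
    return [repr(t) for t in tfms_a] == [repr(t) for t in tfms_b]


# Usage sketch: identical weights pass, perturbed weights fail.
assert weights_match([[0.1, 0.2], [0.3]], [[0.1, 0.2], [0.3]])
assert not weights_match([[0.1, 0.2], [0.3]], [[0.1, 0.25], [0.3]])
```

The repr-based pipeline comparison is one easy way to answer the "programmatically" question above; a stricter variant could compare transform classes and their relevant attributes individually.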

adrinjalali commented 2 years ago

@osanseviero not sure if this is fixed yet or not.

osanseviero commented 2 years ago

This issue corresponds to the huggingface_hub integration, so we can keep the issue there