Description
Test that a `fastai.learner` pushed to the Hub is the same as when we load it back from the Hub. That is, it keeps its properties after being pushed and loaded.
How
The change would be in the `test_fastai_integration.py` test.
@muellerzr offered some good advice for the next steps:
...this is how I'd recommend checking learners, because we need to ensure two things line up: the model weights and the transform pipeline. (The first is real code; the second is pseudocode off the top of my head):
The model itself lives in `learn.model`. So save/load the weights and make sure that all the params in `learn.model.parameters()` align, etc. (Look at saving/loading in Transformers or Accelerate to see how we can do this.)
For the preprocessing, we should make sure all the transforms in `learn.dls.after_batch` and `learn.dls.after_item` are exactly the same. I'm not too sure how that can be pulled off easily programmatically, but if you can't quite get there, let me know, Omar, and we could pair program to figure it out.