Open jonrimmer opened 2 years ago
Maybe GPU processing was not enabled.
Same thing was happening to me when I used Kaggle to run it, but then I enabled GPU processing and I got the fine-tuning to happen within a minute.
This happened to me too. On Colab, the first time I ran it it took ~45 minutes. Then when I saw this post I went back and enabled GPU and it ran in ~3 minutes. Maybe it's worth updating the instructions in the book to mention enabling GPU?
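For anyone hitting this, a quick way to confirm whether the runtime actually has a GPU before kicking off the fine-tuning is to query PyTorch (which fastai is built on, and which is preinstalled on Colab and Kaggle). This is just a diagnostic sketch; the guard handles environments where PyTorch isn't installed:

```python
# Check whether the current runtime exposes a CUDA GPU.
# On Colab: Runtime > Change runtime type > Hardware accelerator > GPU,
# then re-run this cell; it should print True.
try:
    import torch
    has_gpu = torch.cuda.is_available()
    if has_gpu:
        # Name of the first visible GPU, e.g. "Tesla T4" on free Colab.
        print("GPU available:", torch.cuda.get_device_name(0))
    else:
        print("No GPU detected — fine-tuning will fall back to CPU and be very slow.")
except ImportError:
    # PyTorch not installed in this environment.
    has_gpu = False
    print("PyTorch not found; cannot check for a GPU.")
```

If this prints that no GPU was detected, fastai will silently train on the CPU, which matches the ~30–45 minute runtimes reported above.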
Chapter one of the book asks you to run a cell that builds a model that can recognise cat or dog photos:
https://colab.research.google.com/github/fastai/fastbook/blob/master/01_intro.ipynb#scrollTo=yutkpiHJ0rA5
The text of the chapter suggests that building and fine-tuning this model will be a quick operation, a minute or less. This is not correct. The fine-tuning step is much slower: running it in Colab takes over 30 minutes. Perhaps something has changed in Colab or in the fastai library since the book was written, but it's a very bad experience to hit a roadblock like this so early on in the book.