Thank you for your interest in my package. I tested the code in a Colab notebook (installing UPIT in the first cell), and I don't get any error. What version of fastai are you using (check fastai.__version__)?
>>> fastai.__version__
'2.0.16'
@henry090 Very odd... I have the same version of fastai and it works.
Please send over a full traceback of the error.
Also, check this Colab notebook and see if you can find any meaningful differences between what you do and what I did. I created the notebook using the same code you provided.
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/schedule.py", line 228, in lr_find
with self.no_logging(): self.fit(n_epoch, cbs=cb)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/logargs.py", line 56, in _f
return inst if to_return else f(*args, **kwargs)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 207, in fit
self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
try: self(f'before_{event_type}') ;f()
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 197, in _do_fit
self._with_events(self._do_epoch, 'epoch', CancelEpochException)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
try: self(f'before_{event_type}') ;f()
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 191, in _do_epoch
self._do_epoch_train()
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 183, in _do_epoch_train
self._with_events(self.all_batches, 'train', CancelTrainException)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 155, in _with_events
try: self(f'before_{event_type}') ;f()
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 161, in all_batches
for o in enumerate(self.dl): self.one_batch(*o)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 179, in one_batch
self._with_events(self._do_one_batch, 'batch', CancelBatchException)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 157, in _with_events
finally: self(f'after_{event_type}') ;final()
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 133, in __call__
def __call__(self, event_name): L(event_name).map(self._call_one)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 342, in map
def map(self, f, *args, gen=False, **kwargs): return self._new(map_ex(self, f, *args, gen=gen, **kwargs))
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 202, in map_ex
return list(res)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastcore/foundation.py", line 185, in __call__
return self.fn(*fargs, **kwargs)
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in _call_one
[cb(event_name) for cb in sort_by_run(self.cbs)]
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/learner.py", line 137, in <listcomp>
[cb(event_name) for cb in sort_by_run(self.cbs)]
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/fastai/callback/core.py", line 44, in __call__
if self.run and _run: res = getattr(self, event_name, noop)()
File "/home/turgut/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/upit/train/cyclegan.py", line 122, in after_batch
fake_A, fake_B = self.learn.pred[0].detach(), self.learn.pred[1].detach()
AttributeError: 'Learner' object has no attribute 'pred'
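For context, the last frame of the traceback is UPIT's CycleGAN training callback reading self.learn.pred in after_batch. fastai only assigns pred once the forward pass has run, and after_batch still fires (via the finally clause in _with_events) even when the batch fails earlier, so in that case the attribute simply does not exist yet. Below is a minimal sketch of that access pattern with a defensive guard; the callback name and the guard are my assumptions, not UPIT's actual code.

from fastai.vision.all import *

class PredReader(Callback):
    "Sketch of a callback that, like UPIT's CycleGAN trainer, reads learn.pred in after_batch."
    def after_batch(self):
        # fastai sets learn.pred during the forward pass; if the batch crashed
        # before that point (e.g. CUDA out of memory), after_batch still runs
        # and learn.pred has never been assigned.
        pred = getattr(self.learn, 'pred', None)  # hypothetical guard, not present in UPIT
        if pred is None: return
        fake_A, fake_B = pred[0].detach(), pred[1].detach()  # same access as the failing line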
A new environment with a fresh installation still throws this error. Interestingly, it works in Colab.
@henry090 Have you tried running regular fastai code to check whether you get the same error? For example, fastai's metric functionality also uses learn.pred
(see here), so I would expect a similar error with simple metrics and training as well.
Works fine:
>>> from fastai.vision.all import *
>>> from fastai.vision.gan import *
>>>
>>> path = 'oxford-iiit-pet'
>>> path_anno = 'oxford-iiit-pet/annotations'
>>> path_img = 'oxford-iiit-pet/images'
>>> fnames = get_image_files(path_img)
>>> fnames
(#7390) [Path('oxford-iiit-pet/images/newfoundland_90.jpg'),Path('oxford-iiit-pet/images/american_pit_bull_terrier_129.jpg'),Path('oxford-iiit-pet/images/wheaten_terrier_140.jpg'),Path('oxford-iiit-pet/images/havanese_1.jpg'),Path('oxford-iiit-pet/images/Birman_182.jpg'),Path('oxford-iiit-pet/images/english_setter_174.jpg'),Path('oxford-iiit-pet/images/pug_17.jpg'),Path('oxford-iiit-pet/images/scottish_terrier_185.jpg'),Path('oxford-iiit-pet/images/boxer_191.jpg'),Path('oxford-iiit-pet/images/english_cocker_spaniel_40.jpg')...]
>>>
>>> dls = ImageDataLoaders.from_name_re(
... path, fnames, pat=r'(.+)_\d+.jpg$', item_tfms=Resize(460), bs=20,
... batch_tfms=[*aug_transforms(size=224, min_scale=0.75), Normalize.from_stats(*imagenet_stats)],
... device='cuda')
>>>
>>> cnn = cnn_learner(dls, resnet18, metrics=[accuracy,error_rate])
>>> cnn.fit_one_cycle(2)
epoch train_loss valid_loss accuracy error_rate time
0 0.869473 0.373598 0.878214 0.121786 00:23
1 0.542021 0.312157 0.895805 0.104195 00:22
I found out why. It is weird, but reducing the load size (and crop size) fixed the problem:
get_dls(trainA_path, trainB_path, num_A=100, load_size=100, crop_size=100, bs=4)  # reduced load size and crop size
epoch train_loss id_loss_A id_loss_B gen_loss_A gen_loss_B cyc_loss_A cyc_loss_B D_A_loss D_B_loss time
------ ----------- ---------- ---------- ----------- ----------- ----------- ----------- --------- --------- ------
0 11.219095 1.689572 1.781705 0.425382 0.400838 3.470019 3.671166 0.367610 0.367610 00:08
1 10.067577 1.352973 1.465599 0.354002 0.349632 2.821623 3.051174 0.254647 0.254647 00:08
2 9.473677 1.291322 1.370914 0.364927 0.374129 2.676564 2.848548 0.236779 0.236779 00:08
3 9.007480 1.182500 1.285101 0.384088 0.418259 2.485179 2.705824 0.222265 0.222265 00:08
4 8.644550 1.099371 1.273532 0.385339 0.413444 2.304044 2.691824 0.217611 0.217611 00:08
5 8.281004 1.058014 1.214808 0.372960 0.412336 2.192252 2.531816 0.239461 0.239461 00:08
6 7.982881 0.992101 1.175483 0.412752 0.418220 2.028669 2.515495 0.220087 0.220087 00:08
7 7.642418 0.916930 1.150054 0.372994 0.443032 1.892262 2.369661 0.205539 0.205539 00:08
8 7.336694 0.873163 1.128663 0.369428 0.436099 1.799056 2.310677 0.203636 0.203636 00:08
9 7.096091 0.841627 1.099711 0.394596 0.417606 1.700073 2.295223 0.205812 0.205812 00:08
Instead of a CUDA out-of-memory error, it threw this other, rather strange error. Thanks for your time! I am closing the issue.
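For anyone landing here with the same symptom: the AttributeError was a secondary effect of the batch failing (here, running out of GPU memory) before learn.pred was ever set, so shrinking the images is what actually resolved it. Below is a hedged end-to-end sketch of the workaround, assuming UPIT's get_dls, CycleGAN, and cycle_learner helpers as used in its examples; module paths, helper names, and signatures may differ between UPIT versions, and the folder paths are placeholders.

from pathlib import Path
from upit.data.unpaired import get_dls         # assumed module path
from upit.models.cyclegan import CycleGAN      # assumed module path
from upit.train.cyclegan import cycle_learner  # module confirmed by the traceback above

trainA_path = Path('trainA')  # hypothetical folder of domain-A images
trainB_path = Path('trainB')  # hypothetical folder of domain-B images

# Smaller load/crop sizes keep the CycleGAN batch within GPU memory,
# which is what avoided the failure reported in this issue.
dls = get_dls(trainA_path, trainB_path, num_A=100,
              load_size=100, crop_size=100, bs=4)
learn = cycle_learner(dls, CycleGAN())
learn.lr_find()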
Hi. I am getting the following error: