utterworks / fast-bert

Super easy library for BERT based NLP models
Apache License 2.0

AttributeError: module 'torch.nn' has no attribute 'backends' after updating pytorch #89

Open DanyalAndriano opened 5 years ago

DanyalAndriano commented 5 years ago

After updating PyTorch to 1.3, I'm getting the following error when fitting the learner:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-16-70eeb9444499> in <module>
      3             validate=True, # Evaluate the model after each epoch
      4             schedule_type="warmup_cosine",
----> 5             optimizer_type="adamw")

~\Anaconda3\envs\d-learn\lib\site-packages\fast_bert\learner_cls.py in fit(self, epochs, lr, validate, schedule_type, optimizer_type)
    182             except ImportError:
    183                 raise ImportError('Please install apex to use fp16 training')
--> 184             self.model, optimizer = amp.initialize(self.model, optimizer, opt_level=self.fp16_opt_level)
    185 
    186         # Get scheduler

~\Anaconda3\envs\d-learn\lib\site-packages\apex\amp\frontend.py in initialize(models, optimizers, enabled, opt_level, cast_model_type, patch_torch_functions, keep_batchnorm_fp32, master_weights, loss_scale, cast_model_outputs, num_losses, verbosity, min_loss_scale, max_loss_scale)
    355         maybe_print("{:22} : {}".format(k, v), True)
    356 
--> 357     return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
    358 
    359 

~\Anaconda3\envs\d-learn\lib\site-packages\apex\amp\_initialize.py in _initialize(models, optimizers, properties, num_losses, cast_model_outputs)
    239     if properties.patch_torch_functions:
    240         # handle is unused here. It's accessible later through a global value anyway.
--> 241         handle = amp_init(loss_scale=properties.loss_scale, verbose=(_amp_state.verbosity == 2))
    242         for optimizer in optimizers:
    243             # Disable Amp casting for the optimizer step, because it should only be

~\Anaconda3\envs\d-learn\lib\site-packages\apex\amp\amp.py in init(enabled, loss_scale, enable_caching, verbose, allow_banned)
     99             try_caching = (cast_fn == utils.maybe_half)
    100             wrap.cached_cast(module.MODULE, fn, cast_fn, handle,
--> 101                              try_caching, verbose)
    102 
    103     # 1.5) Pre-0.4, put the blacklist methods on HalfTensor and whitelist

~\Anaconda3\envs\d-learn\lib\site-packages\apex\amp\wrap.py in cached_cast(mod, fn, cast_fn, handle, try_caching, verbose)
     31 def cached_cast(mod, fn, cast_fn, handle,
     32                 try_caching=False, verbose=False):
---> 33     if not utils.has_func(mod, fn):
     34         return
     35 

~\Anaconda3\envs\d-learn\lib\site-packages\apex\amp\utils.py in has_func(mod, fn)
    130 
    131 def has_func(mod, fn):
--> 132     if isinstance(mod, torch.nn.backends.backend.FunctionBackend):
    133         return fn in mod.function_classes
    134     elif isinstance(mod, dict):

AttributeError: module 'torch.nn' has no attribute 'backends'
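
For context: PyTorch 1.3 removed the internal torch.nn.backends module, but the apex build installed here still references it in utils.has_func, hence the AttributeError. A minimal sketch of the kind of guard that avoids the crash (an assumption on my part; the actual upstream apex patch may differ):

import torch

def has_func(mod, fn):
    # torch.nn.backends was removed in PyTorch 1.3; probe for it
    # instead of assuming it exists (the short-circuit keeps the old
    # isinstance check from ever touching a missing attribute).
    if hasattr(torch.nn, 'backends') and isinstance(
            mod, torch.nn.backends.backend.FunctionBackend):
        return fn in mod.function_classes
    elif isinstance(mod, dict):
        return fn in mod
    else:
        # Hypothetical fallback for plain modules/objects; the real
        # apex helper may handle this case differently.
        return hasattr(mod, fn)
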
kaushaltrivedi commented 5 years ago

Just reinstall Apex from https://github.com/NVIDIA/apex. This should be fixed after that.
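
A typical from-source reinstall, following the apex README of that era (the build flags below are the README's recommended extension options, not something fast-bert itself requires):

pip uninstall -y apex
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

If the CUDA extensions fail to build, a plain pip install ./ from the apex checkout installs the Python-only fallback.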

DanyalAndriano commented 5 years ago

Thanks. I did, and now I'm getting this error instead: https://github.com/kaushaltrivedi/fast-bert/issues/87. It only runs when I downgrade to PyTorch 1.2.
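
For anyone else stuck here, pinning the older release in the conda environment shown in the traceback would look roughly like this (the package spec is an assumption; match it to your CUDA version):

conda install pytorch=1.2 -c pytorch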