OpenNMT / OpenNMT-py

Open Source Neural Machine Translation and (Large) Language Models in PyTorch
https://opennmt.net/
MIT License

AssertionError: Torch not compiled with CUDA enabled #47

Closed · azureskyL closed this issue 7 years ago

azureskyL commented 7 years ago

When I run

python train.py -data data/multi30k.atok.low.train.pt -save_model multi30k_model -gpus 0

the following error occurs. Could anyone help me?

Namespace(batch_size=64, brnn=False, brnn_merge='concat', curriculum=False, data='data/multi30k.atok.low.train.pt', dropout=0.3, epochs=13, extra_shuffle=False, gpus=[0], input_feed=1, layers=2, learning_rate=1.0, learning_rate_decay=0.5, log_interval=50, max_generator_batches=32, max_grad_norm=5, optim='sgd', param_init=0.1, pre_word_vecs_dec=None, pre_word_vecs_enc=None, rnn_size=500, save_model='multi30k_model', start_decay_at=8, start_epoch=1, train_from='', train_from_state_dict='', word_vec_size=500)
Loading data from 'data/multi30k.atok.low.train.pt'
 * vocabulary size. source = 9799; target = 18006
 * number of training sentences. 29000
 * maximum batch size. 64
Building model...
Traceback (most recent call last):
  File "train.py", line 356, in <module>
    main()
  File "train.py", line 315, in main
    model.cuda()
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 147, in cuda
    return self._apply(lambda t: t.cuda(device_id))
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 118, in _apply
    module._apply(fn)
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 118, in _apply
    module._apply(fn)
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 124, in _apply
    param.data = fn(param.data)
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 147, in <lambda>
    return self._apply(lambda t: t.cuda(device_id))
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/_utils.py", line 65, in _cuda
    return new_type(self.size()).copy_(self, async)
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/cuda/__init__.py", line 272, in __new__
    _lazy_init()
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
    _check_driver()
  File "/home/ljy/anaconda2/lib/python2.7/site-packages/torch/cuda/__init__.py", line 51, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

guillaumekln commented 7 years ago

Did you install PyTorch with CUDA support? If not, you should not use the -gpus option.
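For reference, a quick sanity check (a minimal sketch, not part of the original comment) to see whether the installed PyTorch build was compiled with CUDA and can actually reach a GPU:

import torch

# None on a CPU-only build, otherwise the CUDA version the wheel was built against
print(torch.version.cuda)

# False if the build has no CUDA support or no usable driver/GPU is present
print(torch.cuda.is_available())

If this prints False, either reinstall a CUDA-enabled PyTorch build or run train.py without the -gpus option to train on CPU.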

albertotonon commented 7 years ago

The problem also arises when translating with one of the provided models (I tried with onmt_model_en_fr_b1M). Setting -gpu -1 doesn't help.

albertotonon commented 7 years ago

The following workaround worked in my case: replace the line that loads the checkpoint (the torch.load call on opt.model) with

checkpoint = torch.load(opt.model, map_location=lambda storage, loc: storage)
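This works because the released checkpoints contain tensors saved on GPU; the map_location argument remaps every stored tensor onto CPU at load time, so a CUDA-enabled build is no longer required. A minimal standalone sketch, where model.pt stands in for the actual checkpoint path:

import torch

# Load a checkpoint that was saved from GPU tensors onto a CPU-only machine.
# The map_location callable receives each deserialized storage and the device
# tag it was saved with; returning the storage unchanged keeps it on the CPU.
checkpoint = torch.load('model.pt', map_location=lambda storage, loc: storage)

On more recent PyTorch versions, map_location='cpu' achieves the same effect.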