facebookresearch / ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
https://parl.ai
MIT License

seq2seq example fails with torch (v0.3.1) #1013

Closed · dykang closed this issue 6 years ago

dykang commented 6 years ago

I got the error message below when running the seq2seq example:

```
python examples/train_model.py -t babi:task10k:1 -m seq2seq -mf /tmp/model_s2s -bs 32 -vtim 30 -vcut 0.95
```

I pulled the latest ParlAI and am using pytorch 0.3.1. Do I need to downgrade torch?

```
Dictionary: loading dictionary from /tmp/model_s2s.dict
[ num words =  25 ]
[creating task(s): babi:task10k:1]
[loading fbdialog data:/home/dongyeopk/work/ParlAI/data/bAbI/tasks_1-20_v1-2/en-valid-10k-nosf/qa1_train.txt]
[ training... ]
Traceback (most recent call last):
  File "examples/train_model.py", line 28, in <module>
    TrainLoop(opt).train()
  File "/home/dongyeopk/work/ParlAI/parlai/scripts/train_model.py", line 292, in train
    world.parley()
  File "/home/dongyeopk/work/ParlAI/parlai/core/worlds.py", line 638, in parley
    batch_act = self.batch_act(agent_idx, batch_observations[agent_idx])
  File "/home/dongyeopk/work/ParlAI/parlai/core/worlds.py", line 611, in batch_act
    batch_actions = a.batch_act(batch_observation)
  File "/home/dongyeopk/work/ParlAI/parlai/agents/seq2seq/seq2seq.py", line 618, in batch_act
    predictions, cand_preds = self.predict(xs, ys, cands, cand_inds, is_training)
  File "/home/dongyeopk/work/ParlAI/parlai/agents/seq2seq/seq2seq.py", line 524, in predict
    raise e
  File "/home/dongyeopk/work/ParlAI/parlai/agents/seq2seq/seq2seq.py", line 500, in predict
    out = self.model(xs, ys, rank_during_training=cands is not None)
  File "/home/dongyeopk/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dongyeopk/work/ParlAI/parlai/agents/seq2seq/modules.py", line 130, in forward
    enc_out, hidden = self.encoder(xs)
  File "/home/dongyeopk/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dongyeopk/work/ParlAI/parlai/agents/seq2seq/modules.py", line 298, in forward
    xes = self.dropout(self.lt(xs))
  File "/home/dongyeopk/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dongyeopk/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 103, in forward
    self.scale_grad_by_freq, self.sparse
RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition
```
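For reference, a quick way to confirm which torch the script actually imports (assuming the same interpreter that runs train_model.py):

```
import torch

# In the failing environment this prints 0.3.1, per the report above.
print(torch.__version__)
```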

stephenroller commented 6 years ago

Looks like you need to upgrade to pytorch 0.4. You can follow the instructions at http://pytorch.org/. I'm actually not sure if we intended to make seq2seq depend on pytorch 0.4. I'll check into that.
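If it helps, here is a minimal sketch (toy sizes and names, not the actual agent code) of the kind of mismatch that can trigger this: 0.4-style code passes raw tensors straight into nn layers, which works once Tensor and Variable are merged, but 0.3.x still expects Variable inputs and can fail inside the embedding's autograd function with exactly this save_for_backward error.

```
import torch
import torch.nn as nn
from torch.autograd import Variable  # required on 0.3.x; effectively a no-op wrapper on 0.4

lt = nn.Embedding(25, 16)            # toy lookup table; sizes are arbitrary
xs = torch.LongTensor([[1, 2, 3]])   # a raw batch of token ids

# 0.3.x style: module inputs must be wrapped in Variable.
out_old = lt(Variable(xs))

# 0.4 style: the raw tensor is passed straight in. On 0.4 this works;
# on 0.3.1 the same call can raise
# "save_for_backward can only save input or output tensors ...".
out_new = lt(xs)
```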

stephenroller commented 6 years ago

Official position is we only support pytorch 0.4. Please upgrade your pytorch. If you still have trouble, please file another task.
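(On a typical pip setup the upgrade can be as simple as the line below, but http://pytorch.org/ has the authoritative per-platform instructions, so treat this as one possible path rather than the official one.)

```
pip install torch==0.4.0
```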