stanfordnlp / cocoa

Framework for learning dialogue agents in a two-player game setting.
MIT License

TypeError: where() takes at most 2 arguments (3 given) #41

Closed SeekPoint closed 7 years ago

SeekPoint commented 7 years ago

```
PYTHONPATH=. python src/main.py --schema-path data/schema.json --scenarios-path data/scenarios.json --train-examples-paths data/train.json --test-examples-paths data/dev.json --stop-words data/common_words.txt --min-epochs 10 --checkpoint checkpoint --rnn-type lstm --learning-rate 0.5 --optimizer adagrad --print-every 50 --model attn-copy-encdec --gpu 1 --rnn-size 100 --grad-clip 0 --num-items 12 --batch-size 32 --stats-file stats.json --entity-encoding-form type --entity-decoding-form type --node-embed-in-rnn-inputs --msg-aggregation max --word-embed-size 100 --node-embed-size 50 --entity-hist-len -1 --learned-utterance-decay
```

```
/ve_tf0.11_py2/venv/lib/python2.7/site-packages/fuzzywuzzy/fuzz.py:35: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
read_examples: data/train.json
read_examples: data/dev.json
Building lexicon...
Created lexicon: 522092 phrases mapping to 1314 entities, 3.291269 entities per phrase
Using rule-based lexicon... 3.96 s
test: 0 dialogues out of 0 examples
train: 7257 dialogues out of 8967 examples
dev: 878 dialogues out of 1083 examples
Vocabulary size: 8435
Traceback (most recent call last):
  File "src/main.py", line 110, in <module>
    model = build_model(schema, mappings, model_args)
  File "//cocoa/src/model/encdec.py", line 69, in build_model
    model = GraphEncoderDecoder(encoder_word_embedder, decoder_word_embedder, graph_embedder, encoder, decoder, pad, select)
  File "//cocoa/src/model/encdec.py", line 760, in __init__
    super(GraphEncoderDecoder, self).__init__(encoder_word_embedder, decoder_word_embedder, encoder, decoder, pad, select, scope)
  File "//cocoa/src/model/encdec.py", line 639, in __init__
    self.build_model(encoder_word_embedder, decoder_word_embedder, encoder, decoder, scope)
  File "//cocoa/src/model/encdec.py", line 659, in build_model
    encoder.build_model(encoder_word_embedder, encoder_input_dict, time_major=False)
  File "//cocoa/src/model/encdec.py", line 283, in build_model
    super(GraphEncoder, self).build_model(word_embedder, input_dict, time_major=time_major, scope=scope)
  File "//cocoa/src/model/encdec.py", line 193, in build_model
    inputs = self._build_rnn_inputs(word_embedder, time_major)
  File "//cocoa/src/model/encdec.py", line 267, in _build_rnn_inputs
    word_embeddings = word_embedder.embed(self.inputs, zero_pad=True)
  File "//cocoa/src/model/word_embedder.py", line 17, in embed
    embeddings = tf.where(inputs == self.pad, tf.zeros_like(embeddings), embeddings)
TypeError: where() takes at most 2 arguments (3 given)
```
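For context, the failing line is doing element-wise masking: it zeros out the embedding vector at every padding position. The same operation can be sketched with NumPy's three-argument `np.where` (a stand-in for the TF op; the pad id, token ids, and shapes below are made up for illustration):

```python
import numpy as np

PAD = 0  # hypothetical pad token id (illustration only)
inputs = np.array([[3, 5, PAD],
                   [7, PAD, PAD]])                  # batch of token ids
embeddings = np.arange(1.0, 25.0).reshape(2, 3, 4)  # fake looked-up vectors

# Zero the embedding vector wherever the input token is the pad symbol,
# mirroring what word_embedder.embed does with the 3-arg tf.where.
mask = (inputs == PAD)[..., None]                   # broadcast over embed dim
masked = np.where(mask, np.zeros_like(embeddings), embeddings)
```

After this, `masked` equals `embeddings` everywhere except the pad slots, which are all-zero vectors.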

SeekPoint commented 7 years ago

It was my fault.

I mistakenly used TF 0.11; the code requires TF 0.12.
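The version mismatch explains the error: in TF 0.11, `tf.where` accepted only a condition tensor (returning indices), and element-wise selection was done with `tf.select`; TF 0.12 added the three-argument `tf.where(condition, x, y)`. A hypothetical compatibility shim (not part of cocoa) could dispatch on whichever op is available, shown here with stub modules standing in for the two API generations:

```python
from types import SimpleNamespace

def pick_select(tf_module):
    """Return an element-wise select op that works across TF versions.

    TF 0.11 only has tf.select(cond, x, y); its tf.where takes a single
    condition. TF 0.12 adds the three-argument tf.where(cond, x, y),
    and TF 1.0 removes tf.select entirely.
    """
    if hasattr(tf_module, 'select'):
        return tf_module.select
    return tf_module.where

# Hypothetical stubs standing in for the two API generations:
tf_011 = SimpleNamespace(select=lambda c, x, y: ('select', c, x, y),
                         where=lambda c: ('indices', c))
tf_100 = SimpleNamespace(where=lambda c, x, y: ('where', c, x, y))

old_op = pick_select(tf_011)
new_op = pick_select(tf_100)
```

Calling `old_op(cond, x, y)` dispatches to `select` and `new_op(cond, x, y)` to the three-argument `where`, so the same masking code runs under either API.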