Closed: SeekPoint closed this issue 7 years ago.
This issue came with the latest TensorFlow version. I will be working on it in a couple of weeks. Feel free to modify the code and send a pull request.
Thanks
Would you tell me how long a full training run takes? Just share your case, thanks.
I ran: python neural_conversation_model.py --train_dir ubuntu/ --en_vocab_size 60000 --size 512 --data_path ubuntu/train.tsv --dev_data ubuntu/valid.tsv --vocab_path ubuntu/60k_vocan.en --attention
I also ran into this problem and hope to get your help.
Set the state_is_tuple parameter in seq2seq_model.py at line 87 to solve this problem:
cell = rnn.MultiRNNCell([single_cell] * num_layers, state_is_tuple=False)
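To keep the flag consistent, the inner cells most likely need state_is_tuple=False as well; otherwise MultiRNNCell raises "ValueError: Some cells return tuples of states, but the flag state_is_tuple is not set." A minimal sketch, assuming TensorFlow 1.x's tf.contrib.rnn module and hypothetical size/num_layers values (adjust to your own configuration and to whichever cell type seq2seq_model.py actually builds):

from tensorflow.contrib import rnn

size = 512        # hidden units, matching --size 512
num_layers = 2    # hypothetical value; use the one from your FLAGS

# Build the inner LSTM cell with a concatenated (non-tuple) state so that
# MultiRNNCell(state_is_tuple=False) does not see tuple-shaped cell states.
single_cell = rnn.BasicLSTMCell(size, state_is_tuple=False)
cell = single_cell
if num_layers > 1:
    cell = rnn.MultiRNNCell([single_cell] * num_layers, state_is_tuple=False)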
@gsh199449 I have set the parameter as you suggested, but I still get an error: ValueError: Some cells return tuples of states, but the flag state_is_tuple is not set. Have you run into this problem? Is there a solution? Thanks in advance!
Updated with the latest version.
Hi @pbhatia243, I have used the updated version but still get an error: ValueError: Some cells return tuples of states, but the flag state_is_tuple is not set. It seems that in TensorFlow 1.0 the LSTM cell state is returned as a tuple by default. Could you give me some hints on how to fix this problem? Thanks in advance!
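For reference, a minimal illustrative sketch, assuming TensorFlow 1.x and a hypothetical flatten_state helper (this is not code from the repository): with the default state_is_tuple=True, the encoder state reaches beam_attention_decoder as an LSTMStateTuple (or a tuple of them for a multi-layer cell), which has no get_shape() method; flattening it into a single rank-2 tensor is one way to get past the failing line, although setting state_is_tuple=False on the cells, as suggested above, sidesteps the problem.

import tensorflow as tf

def flatten_state(state):
    # Hypothetical helper: collapse a (possibly nested) tuple state into one
    # rank-2 tensor of shape [batch_size, total_state_size].
    if isinstance(state, tf.contrib.rnn.LSTMStateTuple):
        return tf.concat([state.c, state.h], axis=1)
    if isinstance(state, tuple):
        return tf.concat([flatten_state(s) for s in state], axis=1)
    return state

# The failing line in beam_attention_decoder could then read, for example:
# state_size = int(flatten_state(initial_state).get_shape().with_rank(2)[1])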
rzai@rzai00:~/prj/Neural_Conversation_Models$ python neural_conversation_model.py --train_dir ubuntu/ --en_vocab_size 60000 --size 512 --data_path ubuntu/train.tsv --dev_data ubuntu/valid.tsv --vocab_path ubuntu/60k_vocan.en --attention --decode --beam_search --beam_size 25
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x30e25d0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 1 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.33GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
Attention Model
Symbols 60000 60000
Tensor("model_with_buckets/embedding_attention_seq2seq/concat:0", shape=(?, 5, 512), dtype=float32)
Check number of symbols 60000
Initial_state
Traceback (most recent call last):
File "neural_conversation_model.py", line 323, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "neural_conversation_model.py", line 318, in main
decode()
File "neural_conversation_model.py", line 230, in decode
model = create_model(sess, True, beam_search=beam_search, beam_size=beam_size, attention=attention)
File "neural_conversation_model.py", line 104, in create_model
forward_only=forward_only, beam_search=beam_search, beam_size=beam_size, attention=attention)
File "/home/rzai/prj/Neural_Conversation_Models/seq2seq_model.py", line 138, in init
softmax_loss_function=softmax_loss_function)
File "/home/rzai/prj/Neural_Conversation_Models/my_seq2seq.py", line 1044, in decode_model_with_buckets
decoder_inputs[:bucket[1]])
File "/home/rzai/prj/Neural_Conversation_Models/seq2seq_model.py", line 137, in
self.target_weights, buckets, lambda x, y: seq2seq_f(x, y, True),
File "/home/rzai/prj/Neural_Conversation_Models/seq2seq_model.py", line 101, in seq2seq_f
beam_size=beam_size )
File "/home/rzai/prj/Neural_Conversation_Models/my_seq2seq.py", line 839, in embedding_attention_seq2seq
initial_state_attention=initial_state_attention, beam_search=beam_search, beam_size=beam_size)
File "/home/rzai/prj/Neural_Conversation_Models/my_seq2seq.py", line 756, in embedding_attention_decoder
initial_state_attention=initial_state_attention, output_projection=output_projection, beam_size=beam_size)
File "/home/rzai/prj/Neural_Conversation_Models/my_seq2seq.py", line 600, in beam_attention_decoder
state_size = int(initial_state.get_shape().with_rank(2)[1])
AttributeError: 'tuple' object has no attribute 'get_shape'
rzai@rzai00:~/prj/Neural_Conversation_Models$