wfmonster closed this issue 6 years ago.
In your code, mask = Lambda(lambda x: GetPadMask(emb, emb))(src_seq), the wrapped function never uses its argument x (i.e. src_seq), which is strange. The last error is also strange, because K is a module, not a Tensor. Is K keras.backend in your code?
I can't say more without additional information.
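If the intent was to build the padding mask from src_seq itself, the wrapped function should at least use the Lambda's own argument. Something like the following (just a sketch; adapt it to your actual mask function):

# let the Lambda use its input instead of the outer emb variable
mask = Lambda(lambda x: GetPadMask(x, x))(src_seq)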
Yes, K is the tf.keras.backend. I went ahead and cloned a new version of the repository, and the fresh code is running fine now. I must have made an error somewhere while working on modifications. Thank you for getting back to me.
I was also curious about the new QANet blocks you've added. Do you have more info on these, or are they for personal research (if you don't mind elaborating)? I currently do NMT research, so I would be interested in any additional materials or thoughts.
The QANet blocks can be used for performing CNN+self-attention for encoding a sequence.
from keras.layers import Input, Embedding, Dropout, Lambda
from keras import backend as K
input = Input(shape=(None,), dtype='int32')  # token-id sequence; words is the vocabulary object
x = Embedding(words.num(), 64)(input)
x = Dropout(0.5)(x)
mask = Lambda(lambda x: K.cast(K.greater(x, 0), 'float32'))(input)  # 1.0 for real tokens, 0.0 for padding
x = QANet_Encoder(64, n_head=4, n_conv=2, n_block=3, kernel_size=5, dropout=0.5, add_pos=False)(x, mask)  # defined in this repo
Moreover, they are basic elements for my implementation of the QANet model. I may release the implementation when I have time to organize the code.
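To give a rough sense of how the encoder output could be used end to end, here is a minimal sketch added for illustration only (the pooling layer and classification head are not part of the repository, and it assumes the encoder returns a (batch, length, channels) tensor):

from keras.layers import GlobalAveragePooling1D, Dense
from keras.models import Model

pooled = GlobalAveragePooling1D()(x)             # collapse the encoded sequence
output = Dense(2, activation='softmax')(pooled)  # illustrative two-class head
model = Model(inputs=input, outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy')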
I'm running into a lot of errors attempting to run the Transformer.py file for testing purposes.
The issue begins with:
What versions of Keras and TensorFlow are you using for development? Could you add that info to a requirements.txt file, or possibly to the README? I am wondering if this is a version conflict. I am using:
tensorflow 1.8.0 Keras 2.2.0
I've tried wrapping the operations in Lambda layers, which works for the first two lines in the GetPadMask function, but I'm running into issues again with the K.batch_dot operation. Any ideas? I'm relatively new to the Keras framework.
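For context, the mask code I'm trying to wrap looks roughly like this (sketched from memory of the repo's GetPadMask, so it may not be verbatim):

from keras.layers import Lambda
from keras import backend as K

def get_pad_mask(q, k):
    # (batch, len_q, 1) of ones, times (batch, 1, len_k) padding indicator -> (batch, len_q, len_k)
    ones = K.expand_dims(K.ones_like(q, 'float32'), -1)
    kmask = K.cast(K.expand_dims(K.not_equal(k, 0), 1), 'float32')
    return K.batch_dot(ones, kmask, axes=[2, 1])

# wrapping the whole computation in one Lambda keeps the backend ops inside a layer
mask = Lambda(lambda x: get_pad_mask(x, x))(src_seq)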