westbrrokk opened 10 months ago
I encountered the same issue as you. The problem is that the data is missing the feature dimension that a Conv1D layer requires: its input shape must be (timesteps, features). You can add the extra dimension with tf.expand_dims: x = tf.expand_dims(x, axis=-1). That worked for me.
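A minimal sketch of that fix (the batch size, sequence length, and Conv1D settings below are illustrative, not taken from the repo):

```python
import tensorflow as tf

# A batch of sequences shaped (batch, timesteps) -- no feature axis yet,
# which is what triggers "expected min_ndim=3, found ndim=2" in Conv1D.
x = tf.zeros([4, 120])

# Append a features axis of size 1: (batch, timesteps) -> (batch, timesteps, 1).
x = tf.expand_dims(x, axis=-1)

# Conv1D now accepts the input; filters/kernel_size are placeholder values.
conv = tf.keras.layers.Conv1D(filters=8, kernel_size=3)
y = conv(x)
print(x.shape, y.shape)  # (4, 120, 1) (4, 118, 8)
```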
Hello, I applied this fix a few days ago, but the bug reappeared during later training. Have you run into the error again since then?
I haven't run into that issue again since. Are you still getting the same error?
I've run into other bugs while reproducing the results, and the program isn't running at the moment. Could you share an email address so we can communicate more easily?
Did you run into this problem later? InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: indices[0,0] = 160001 is not in [0, 160001) [[node functional_1/embed_ngram_1/embedding_lookup (defined at C:\Users\Zhao\Desktop\phishing detection\GramBeddings-main\train.py:315) ]] (1) Invalid argument: indices[0,0] = 160001 is not in [0, 160001) [[node functional_1/embed_ngram_1/embedding_lookup (defined at C:\Users\Zhao\Desktop\phishing detection\GramBeddings-main\train.py:315) ]]
I also encountered this issue. It's caused by a mismatch in the input dimension of the embedding layer. You just need to change it to input_dim = self.vocab_size + 1, and it should run smoothly.
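A minimal sketch of why the off-by-one matters (the vocab size mirrors the index in the error above; the embedding width and token ids are illustrative):

```python
import tensorflow as tf

vocab_size = 160001  # illustrative; matches the out-of-range index reported above

# An Embedding with input_dim=vocab_size only accepts ids in [0, vocab_size),
# so id 160001 fails the lookup. input_dim=vocab_size + 1 makes it valid.
embed = tf.keras.layers.Embedding(input_dim=vocab_size + 1, output_dim=16)

ids = tf.constant([[160001, 0, 5]])  # the largest id now resolves fine
out = embed(ids)
print(out.shape)  # (1, 3, 16)
```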
Thank you so much, training works now. I expect it will run all the way through.
Hi, did you run into any other problems while testing? I'm a beginner and am confused about a few things after training. Could I ask you for some advice?
My model's test results are very poor. How did your model's predictions turn out? Could I add you on QQ or something to discuss?
My program runs without any issues, and the results are quite promising. Sure, you can add my QQ: 2671093345, and we can discuss further.
Hello, could I add your contact info too? I'm also a beginner and would like to discuss.
Hello author, I downloaded your program and the dataset today to study them in Spyder, but the source code throws an error. I'm using the pdrcnn dataset with no changes to the code, and the error is: ValueError: Input 0 of layer conv1d_48 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [None, 120] I'm looking for an answer.