Closed xwz-123 closed 2 years ago
Hi, thanks for the question! The problem in this case is that (by default in TensorFlow) an embedding layer produces a 3-dimensional output, as you can see in the printed summary of the model mod:
> mod
Model: "model"
Layer (type)                              Output Shape                  Param #  Connected to
============================================================================================
input_d_Bare_nuclei__1 (InputLayer)       [(None, 1)]                   0        []
embedding (Embedding)                     (None, 1, 3)                  30       ['input_d_Bare_nuclei__1[0][0]']
dense_1 (Dense)                           (None, 1, 3)                  12       ['embedding[0][0]']
input__Intercept__1 (InputLayer)          [(None, 1)]                   0        []
dense (Dense)                             (None, 1, 1)                  4        ['dense_1[0][0]']
1_1 (Dense)                               (None, 1)                     1        ['input__Intercept__1[0][0]']
add (Add)                                 (None, 1, 1)                  0        ['dense[0][0]', '1_1[0][0]']
distribution_lambda (DistributionLambda)  ((None, 1, 1), (None, 1, 1))  0        ['add[0][0]']
============================================================================================
As a consequence, deepregression does not know how to add together the intercept (the 1_1 layer in the print above, with shape (None, 1)) and your final deep neural network layer (here dense (Dense), with shape (None, 1, 1)). The solution is to cast the 3-dimensional tensor to a matrix at some point, e.g., by flattening the last layer's output from (None, 1, 1) to (None, 1) like this:
d_emb <- function(x) {
  x %>%
    layer_embedding(input_dim = length(unique(dat$Bare.nuclei)), output_dim = 3L) %>%
    layer_dense(units = 3, activation = "relu") %>%
    layer_dense(units = 1, activation = "linear") %>%
    layer_flatten()
}
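For reference, the same shape progression can be reproduced directly against the underlying Keras API. This is a Python sketch; the vocabulary size (10) and layer sizes are placeholder values, not taken from the original model:

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(1,))                                   # (None, 1)
emb = tf.keras.layers.Embedding(input_dim=10, output_dim=3)(inp)   # (None, 1, 3): embedding adds an axis
out = tf.keras.layers.Dense(1)(emb)                                # (None, 1, 1): Dense keeps it 3-dim
flat = tf.keras.layers.Flatten()(out)                              # (None, 1): back to a matrix
```

The key point is that Dense only transforms the last axis, so the extra middle axis introduced by the embedding survives until it is explicitly flattened.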
Let me know if this solves your problem.
Best, David
Hi David,
thanks a lot for your reply! My problem is solved now :)
Dear All,
when I use layer_embedding to include categorical features in the DNN part of a binary classification problem, I get the following warning. I don't know where I have used the wrong dimensions. Could anyone please help me? Thanks a lot in advance!
My code is: