model %>% compile(
optimizer = "rmsprop",
loss = c("mean_squared_error", "categorical_crossentropy"),
metrics = c("mean_absolute_error", "accuracy")
)
model %>% fit(
x = list(title_data, text_body_data, tags_data),
y = list(priority_data, department_data),
epochs = 1
)`
https://github.com/t-kalinowski/deep-learning-with-R-2nd-edition-code/blob/5d666f93d52446511a8a8e4eb739eba1c0ffd199/ch07.R#L146C1-L150C2
Executed code:
`install.packages("remotes")
remotes::install_github("rstudio/tensorflow")
reticulate::install_python()
install.packages("keras3")
keras3::install_keras(envname = "r-reticulate")
library(keras3)

model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")

# Not built yet, no weights
model$weights

model$build(input_shape = shape(NA, 3))
str(model$weights)
model

model <- keras_model_sequential(input_shape = c(3), name = "my model") %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")
model

# Functional API
inputs <- layer_input(shape = c(3), name = "my_input")
features <- inputs %>% layer_dense(64, activation = "relu")
outputs <- features %>% layer_dense(10, activation = "softmax")
model <- keras_model(inputs = inputs, outputs = outputs)
model

# Multi-input, multi-output
vocabulary_size <- 10000
num_tags <- 100
num_departments <- 4

title <- layer_input(shape = c(vocabulary_size), name = "title")
text_body <- layer_input(shape = c(vocabulary_size), name = "text_body")
tags <- layer_input(shape = c(vocabulary_size), name = "tags")

features <- layer_concatenate(list(title, text_body, tags)) %>%
  layer_dense(64, activation = "relu")

priority <- features %>%
  layer_dense(1, activation = "sigmoid", name = "priority")
department <- features %>%
  layer_dense(num_departments, activation = "softmax", name = "departments")

model <- keras_model(
  inputs = list(title, text_body, tags),
  outputs = list(priority, department)
)

num_samples <- 1280
random_uniform_array <- function(dim) array(runif(prod(dim)), dim)
random_vectorized_array <- function(dim) array(sample(0:1, prod(dim), replace = TRUE), dim)

title_data <- random_vectorized_array(c(num_samples, vocabulary_size))
text_body_data <- random_vectorized_array(c(num_samples, vocabulary_size))
tags_data <- random_vectorized_array(c(num_samples, num_tags))

priority_data <- random_vectorized_array(c(num_samples, 1))
department_data <- random_vectorized_array(c(num_samples, num_departments))

model %>% compile(
  optimizer = "rmsprop",
  loss = c("mean_squared_error", "categorical_crossentropy"),
  metrics = c("mean_absolute_error", "accuracy")
)

model %>% fit(
  x = list(title_data, text_body_data, tags_data),
  y = list(priority_data, department_data),
  epochs = 1
)`
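As an aside, the widths the three `layer_input()` calls declare can be compared directly with the arrays the script hands to `fit()`. A minimal sketch in plain Python (no Keras required; the `declared`/`passed` dict names are mine, with the constants copied from the script above):

```python
# Constants from the script above
vocabulary_size = 10000
num_tags = 100
num_samples = 1280

# Widths each layer_input() call declares (all are vocabulary_size wide)
declared = {"title": vocabulary_size,
            "text_body": vocabulary_size,
            "tags": vocabulary_size}

# Shapes of the arrays actually passed to fit()
passed = {"title": (num_samples, vocabulary_size),
          "text_body": (num_samples, vocabulary_size),
          "tags": (num_samples, num_tags)}

for i, name in enumerate(declared):
    want, got = declared[name], passed[name][1]
    status = "ok" if want == got else f"MISMATCH (declares {want}, data is {got} wide)"
    print(f"input {i} ({name}): {status}")
```

The third input (index 2) comes up short: `tags` is declared `vocabulary_size` (10000) wide, but `tags_data` is only `num_tags` (100) wide, which lines up with the `expected shape=(None, 10000), found shape=(32, 100)` in the error below (32 is just the default batch size `fit()` uses).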
Error message:
Error in py_call_impl(callable, call_args$unnamed, call_args$named) : ValueError: Input 2 of layer "functional_4" is incompatible with the layer: expected shape=(None, 10000), found shape=(32, 100). Run `reticulate::py_last_error()` for details.
Output of detailed log:
`── R Traceback ─────────────────────────────────────────────────────────────
▆
├─model %>% ...
├─generics::fit(...)
└─keras3:::fit.keras.src.models.model.Model(...)
  ├─base::do.call(object$fit, args)
  └─reticulate (local) <python.builtin.method>(...)
    └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
See reticulate::py_last_error()$r_trace$full_call for more details.`

If I understand the issue correctly, this is related to the move to Keras 3 mentioned here: https://github.com/rstudio/keras3/issues/1427#issuecomment-2041491532
Looking at keras3 (https://keras.io/api/layers/core_layers/embedding/), there is indeed no `input_length` argument any longer, but what I still do not understand is where an embedding is used in the above code. We are using the Functional API, and nothing in the code calls an Embedding layer.
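For context on why an Embedding layer should not be involved here at all: the script feeds the model multi-hot vectors (one slot per vocabulary entry, consumed directly by Dense layers), whereas an Embedding layer consumes raw integer token indices. A small illustrative sketch in plain Python (variable names are mine):

```python
vocabulary_size = 10

# What the script above feeds the model: a multi-hot vector, one slot per
# vocabulary entry, consumed directly by Dense layers -- no Embedding needed.
multi_hot = [0.0] * vocabulary_size
for token_id in (2, 5):          # tokens 2 and 5 occur in the document
    multi_hot[token_id] = 1.0
print(len(multi_hot))            # width == vocabulary_size, here 10

# What an Embedding layer would consume instead: the raw integer indices.
token_ids = [2, 5]
print(len(token_ids))            # one entry per token, not per vocabulary slot
```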