rstudio / keras3

R Interface to Keras
https://keras3.posit.co/

Verbose doesn't work for callback_reduce_lr_on_plateau #892

Open dfalbel opened 4 years ago

dfalbel commented 4 years ago
library(keras)

x <- matrix(0, nrow = 10, ncol = 1)
y <- runif(10)

lr <- callback_reduce_lr_on_plateau(monitor = "loss", patience = 1, min_delta = 1, verbose = 1)

model <- keras_model_sequential() %>% 
  layer_dense(units = 1, input_shape = c(1))

model %>% 
  compile(loss = "mse", optimizer = "adam")

model %>% 
  fit(x = x, y = y, callbacks = list(lr))

lb <- callback_lambda(on_epoch_end = function(epoch, logs) {
  print(model$optimizer$learning_rate)
})

model %>% 
  fit(x = x, y = y, callbacks = list(lr, lb))
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.0000001e-12>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.0000001e-13>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.00000015e-14>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.0000001e-15>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.0000001e-16>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.0000001e-17>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1e-18>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.00000003e-19>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.00000005e-20>
#> <tf.Variable 'Adam/learning_rate:0' shape=() dtype=float32, numpy=1.0000001e-21>

Created on 2019-10-10 by the reprex package (v0.3.0)

When the learning rate is changed, Python prints a message like this:

Epoch 00005: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.

In R, no such message is printed even with verbose = 1.
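As a workaround until the verbose message works, a lambda callback can print a similar line whenever the learning rate drops. This is only a sketch, reusing the `model$optimizer$learning_rate` access from the reprex above and assuming the TF 2.x `$numpy()` method is available to read the variable's value:

```r
# Hypothetical workaround: print a ReduceLROnPlateau-style message ourselves.
# Assumes `model` and `lr` are defined as in the reprex above.
last_lr <- NULL
print_lr <- callback_lambda(on_epoch_end = function(epoch, logs) {
  lr_now <- model$optimizer$learning_rate$numpy()
  if (!is.null(last_lr) && lr_now < last_lr) {
    cat(sprintf("Epoch %05d: learning rate reduced to %g.\n", epoch + 1, lr_now))
  }
  last_lr <<- lr_now
})

model %>%
  fit(x = x, y = y, callbacks = list(lr, print_lr))
```

This only mimics the message after the fact; it does not fix the underlying issue that the callback's own verbose output is swallowed.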

skeydan commented 4 years ago

You mean in general, not in an .Rmd, right? (our other topic).

For me, in plain R, the above code works fine, output looks like

Epoch 00010: ReduceLROnPlateau reducing learning rate to 1.000000082740371e-12.
10/10 [==============================] - 0s 268us/sample - loss: 0.4463

This is with TF 2.0, Python 3.7.4.

dfalbel commented 4 years ago

In general for me! OK, I think I see this with Python 3.6. Let me verify that.

dfalbel commented 3 years ago

Doesn't work in TF > 2.1.