t-kalinowski / deep-learning-with-R-2nd-edition-code

Code from the book "Deep Learning with R, 2nd Edition"
https://blogs.rstudio.com/ai/posts/2022-05-31-deep-learning-with-r-2e/

Wrong order of arguments in square_loss? #18

Open yuriy-babenko opened 3 months ago

yuriy-babenko commented 3 months ago

https://github.com/t-kalinowski/deep-learning-with-R-2nd-edition-code/blob/5d666f93d52446511a8a8e4eb739eba1c0ffd199/ch03.R#L266C1-L270C5

Can it be that the order of arguments in the function is wrong?

loss <- square_loss(predictions, targets)

We defined square_loss previously as:

square_loss <- function(targets, predictions) {
  per_sample_losses <- (targets - predictions)^2
  mean(per_sample_losses)
}

In the text it says "the training loss .... stabilized around 0.025", which I only get once I change the order of the arguments: loss <- square_loss(targets, predictions)

t-kalinowski commented 3 months ago

Good catch! The order of the arguments in the call is different from the order expected by the function signature. For some loss functions, this matters! But it does not for MSE, since the result is unchanged regardless of which order you supply the arguments. You can work through it on paper to confirm, or do a quick check at the R REPL:

x <- runif(100)
y <- runif(100)
# TRUE: squaring discards the sign, so (x - y)^2 and (y - x)^2 are identical
all(((x - y)^2) == ((y - x)^2))
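To make the "for some loss functions, this matters" point concrete, here is a short sketch (not from the book or this thread) contrasting the symmetric squared loss with an asymmetric loss. MAPE is used purely as an illustrative example: it divides by its first argument, so swapping targets and predictions changes the result.

```r
# Squared loss: symmetric in its two arguments
square_loss <- function(targets, predictions) {
  mean((targets - predictions)^2)
}

# Mean absolute percentage error: divides by `targets`,
# so it is NOT symmetric in its arguments
mape_loss <- function(targets, predictions) {
  mean(abs((targets - predictions) / targets))
}

set.seed(1)
targets <- runif(100, min = 1, max = 2)
predictions <- runif(100, min = 1, max = 2)

# Identical either way for the squared loss:
square_loss(targets, predictions) == square_loss(predictions, targets)  # TRUE

# Almost surely different for MAPE, because the denominator changes:
mape_loss(targets, predictions) == mape_loss(predictions, targets)
```

So the swapped call in ch03.R happens to be harmless only because MSE is symmetric; with a loss like MAPE (or many classification losses), the same mistake would silently change the training signal.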
yuriy-babenko commented 3 months ago

Ha-ha. Indeed, since the difference is squared! Thanks for the explanation!