yuriy-babenko opened this issue 3 months ago

https://github.com/t-kalinowski/deep-learning-with-R-2nd-edition-code/blob/5d666f93d52446511a8a8e4eb739eba1c0ffd199/ch03.R#L266C1-L270C5

Can it be that the order of arguments in this call is wrong?

We defined square_loss previously as:
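Presumably something like this (a sketch reconstructed from context; the exact body in ch03.R may differ):

square_loss <- function(targets, predictions) {
  # assumed body: mean squared error, with targets expected first
  mean((targets - predictions)^2)
}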
In the text they say "the training loss .... stabilized around 0.025", which I only get once I change the order of arguments:

loss <- square_loss(targets, predictions)
Good catch! The order of the arguments in the call is different from the order expected by the function signature. For some loss functions this matters, but it does not for MSE: the value is unchanged whichever order you supply the arguments in, since (x - y)^2 == (y - x)^2. You can work through it on paper to confirm, or do a quick check in the R REPL:

x <- runif(100)
y <- runif(100)
all(((x - y)^2) == ((y - x)^2))  # TRUE: squaring discards the sign of the difference
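For contrast, a quick sketch (not from this thread) of a loss where the argument order does matter: cross-entropy is not symmetric in its arguments.

bce <- function(targets, predictions) {
  # binary cross-entropy; both inputs kept strictly inside (0, 1) below
  -mean(targets * log(predictions) + (1 - targets) * log(1 - predictions))
}
t <- runif(100)  # soft targets in (0, 1), so both argument orders are defined
p <- runif(100)
bce(t, p) == bce(p, t)  # FALSE (almost surely): swapping the arguments changes the value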
Ha-ha. Indeed, since the difference is squared! Thanks for the explanation!