adampl opened this issue 8 years ago
Good point. You're right, the layer implementation actually computes half of the squared difference (and the gradients accordingly), so `SquaredDifference` is a misnomer. We should do something about this. (CC @Qwlouse)
As to the reason: this is basically because we wanted to use this layer for regression, similar to how some libraries do it (e.g. Caffe's `EuclideanLossLayer`). This makes the results match for someone coming from another library. Libraries such as Chainer use the 'correct' error though, so maybe we should switch to that.
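For reference, a small numpy sketch (not Brainstorm code) of the two conventions being discussed, the Caffe-style halved error versus the plain squared difference:

```python
import numpy as np

predictions = np.array([0.5, 1.0, 2.0])
targets = np.array([0.0, 1.0, 1.0])

# Plain element-wise squared difference: (x - y)^2
squared_difference = (predictions - targets) ** 2

# Caffe-style EuclideanLoss convention: 1/2 * (x - y)^2,
# which is what the current SquaredDifference layer effectively computes
halved = 0.5 * (predictions - targets) ** 2

print(squared_difference)  # 0.25, 0.0, 1.0
print(halved)              # 0.125, 0.0, 0.5
```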
We should have a regression example. Any suggestions for the task?
This is probably the only way to make gradient checking work correctly, though. If we didn't add the constant 1/2, we'd have to multiply errors by 2 during backprop to make everything "mathematically correct" -- in practice we could of course just say that the constant gets absorbed into the learning rate, but it's technically correct to add the 1/2, IMO.
The backward pass implementation could simply multiply the deltas by 2, so the gradient check would work fine.
Edit: I meant to say, sure, we'll have to modify the backward pass, but this is not a reason not to compute the 'correct' squared difference.
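To make the point concrete, here is a small numpy sketch (not the actual layer code) showing that the 'correct' squared difference just needs a factor of 2 in the backward pass, and that a finite-difference gradient check then still passes:

```python
import numpy as np

def forward(x, y):
    # 'correct' squared difference, no 1/2 factor
    return np.sum((x - y) ** 2)

def backward(x, y):
    # analytic gradient w.r.t. x: d/dx (x - y)^2 = 2 * (x - y)
    return 2.0 * (x - y)

x = np.random.randn(5)
y = np.random.randn(5)

# finite-difference gradient check
eps = 1e-6
numeric = np.zeros_like(x)
for i in range(len(x)):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    numeric[i] = (forward(xp, y) - forward(xm, y)) / (2 * eps)

print(np.allclose(numeric, backward(x, y), atol=1e-4))  # True
```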
Or just warn about this somewhere in the docs, though it seems less elegant.
> We should have a regression example. Any suggestions for the task?
I suggest time series prediction using LSTM :)
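For what it's worth, a minimal, framework-agnostic sketch of how the data for such an example could be generated (a noisy sine wave, predicting the next value at each step); the array layout would of course have to be adapted to whatever sequence format Brainstorm expects:

```python
import numpy as np

def make_sine_dataset(n_sequences=100, seq_len=50, noise=0.05, seed=0):
    """Toy time-series regression data: input is a noisy sine wave,
    target is the same wave shifted one step into the future."""
    rng = np.random.RandomState(seed)
    phases = rng.uniform(0, 2 * np.pi, size=n_sequences)
    t = np.arange(seq_len + 1)
    waves = np.sin(0.2 * t[None, :] + phases[:, None])
    waves += noise * rng.randn(*waves.shape)
    inputs = waves[:, :-1, None]   # (n_sequences, seq_len, 1)
    targets = waves[:, 1:, None]   # next-step values
    return inputs, targets

inputs, targets = make_sine_dataset()
print(inputs.shape, targets.shape)  # (100, 50, 1) (100, 50, 1)
```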
Here's a plan for this issue. We'll change the `SquaredDifference` layer so that it computes the correct squared difference. We'll add a new layer (let's call it `SquaredLoss`, subject to change) which will be used similarly to the `SoftmaxCE` layer, but for regression. Differences:

- `SquaredLoss` will compute half of the squared error, to be consistent with implementations like UFLDL, Caffe etc. This will be specified in the docs.
- `SquaredLoss` will have inputs named `default` and `targets`, while `SquaredDifference` has inputs named `inputs_1` and `inputs_2`.
- `SquaredLoss` will have outputs named `predictions` and `loss`, while `SquaredDifference` has only `default`.
- `SquaredLoss` will only compute gradients w.r.t. the `default` inputs, while `SquaredDifference` computes them for both inputs.

I'm done with making the above change in a private branch. I've named the new layer `SquaredLoss`, but perhaps `SquaredError` or something else would be better? (Caffe calls it `EuclideanLoss`.)
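To make the intended semantics concrete, here is a rough numpy sketch of the forward/backward behaviour described above. This is illustrative code, not Brainstorm's layer API; the function names are made up, and only the input/output names `default`, `targets`, `predictions` and `loss` are taken from the plan:

```python
import numpy as np

def squared_loss_forward(default, targets):
    """Forward pass of the proposed SquaredLoss layer (sketch).

    Returns two outputs, mirroring SoftmaxCE: 'predictions' (here simply
    the inputs passed through) and 'loss' (half the squared error, the
    UFLDL/Caffe convention)."""
    predictions = default
    loss = 0.5 * (default - targets) ** 2
    return predictions, loss

def squared_loss_backward(default, targets):
    """Backward pass: gradient w.r.t. the 'default' input only.

    Because of the 1/2 factor in the forward pass the gradient is simply
    (default - targets); no gradient is propagated into 'targets'."""
    return default - targets

default = np.array([0.5, 1.0, 2.0])
targets = np.array([0.0, 1.0, 1.0])
predictions, loss = squared_loss_forward(default, targets)
print(loss)                                     # 0.125, 0.0, 0.5
print(squared_loss_backward(default, targets))  # 0.5, 0.0, 1.0
```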
How about `EuclideanRE` (compatible with `SoftmaxCE`)?
I don't really like `Euclidean` because it doesn't really mean anything in the neural networks community. But `CE` is a good suffix to stay consistent. Maybe `GaussianCE` or `RegressionCE` would be better?
I agree, Euclidean loss is not really a name commonly used in the NN literature; `MSE` is. `CE` is a good suffix, but also not commonly used in a regression context.
A proposal that tries to respect both convention and accuracy: the new layer is called `SquaredError` or `SE` (since it doesn't compute the mean), and the older layer is renamed to `EuclideanDistance`. We may later have a `CosineDistance` layer (for siamese nets etc.).
I now realize that `EuclideanDistance` would clearly not be a correct name either.
`HalvedSquaredDifference`? :)
:D Good point. However, I think that `SquaredError` is probably the best name for the new layer, even though it halves the error. The `Error` suffix can act as a signal that it is special (like the other `CE` layers, it does not compute gradients for the targets). It would be easily recognizable, and we would note in the docs that it computes half of the squared error, as defined in some textbooks.
I have already changed the older `SquaredDifference` layer such that it computes the actual squared difference.
I looked around a bit for an LSTM regression dataset that would be as recognizable as MNIST/CIFAR, but didn't really find one. Open to suggestions.
Hi everyone!
I'm using brainstorm for multivariable regression with an LSTM network, and I've just noticed that the `SquaredDifference.outputs.default` values are exactly half of the real squared errors.
FYI: using Python 3.3.3 with the latest version of brainstorm from master.
By the way, a regression example would be welcome :)