f-dangel / backpack

BackPACK - a backpropagation package built on top of PyTorch which efficiently computes quantities other than the gradient.
https://backpack.pt/
MIT License

Support for single output networks (BCELoss and MSELoss) #96

Closed: aleximmer closed this issue 1 year ago

aleximmer commented 4 years ago

Currently, BCELoss (where the network maps a single example to a scalar and a batch to a vector) is not supported, if I am not mistaken. Therefore, for simple binary classification, one has to replace BCELoss with the standard (multiclass) cross-entropy loss and use a network with two outputs where only one would be needed. Since you initialize with the square root of the loss Hessian, BCELoss would probably be better/more exact for binary classification, because in the multiclass case the Hessian is not full rank.

Is there a problem with scalar-output networks? It seems that for MSELoss, a [Batch, 1] shape is required even for scalar observations, right?
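To make the two formulations concrete, here is a minimal PyTorch sketch (the network, dimensions, and variable names are made up for illustration): the two-output workaround with CrossEntropyLoss, the single-output BCELoss version it stands in for, and the MSELoss target reshape to [N, 1].

```python
import torch

torch.manual_seed(0)

# Hypothetical toy setup: N examples, D features, binary labels.
N, D = 8, 3
X = torch.randn(N, D)
y = torch.randint(0, 2, (N,))

# Workaround described above: a two-output network with CrossEntropyLoss
# instead of a single-output network with BCELoss.
two_out = torch.nn.Linear(D, 2)
loss_ce = torch.nn.CrossEntropyLoss()(two_out(X), y)

# The single-output formulation that is not supported:
one_out = torch.nn.Linear(D, 1)
probs = torch.sigmoid(one_out(X)).squeeze(1)
loss_bce = torch.nn.BCELoss()(probs, y.float())

# MSELoss: targets reshaped to [N, 1] to match the [N, 1] model output.
loss_mse = torch.nn.MSELoss()(one_out(X), y.float().unsqueeze(1))
```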

fKunstner commented 4 years ago

Hi Alex, thanks for the report!

MSELoss not supporting vectors is annoying if it clashes with the standard PyTorch API. I'll try to look into it.

There's no technical reason for BCELoss to be missing. It's just not high on the priority list, as CrossEntropyLoss is more general. The difference in running time would not be noticeable (compared to the rest of the model), and they would give the same output, since one is a reparametrization of the other (up to floating-point precision).
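The reparametrization claim can be checked directly in plain Python (no PyTorch needed): binary cross-entropy on a single logit z agrees with two-class softmax cross-entropy on the logit pair [0, z].

```python
import math

def bce_with_logit(z, y):
    """Binary cross-entropy on logit z with label y in {0, 1}."""
    p = 1.0 / (1.0 + math.exp(-z))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def two_class_ce(logits, y):
    """Softmax cross-entropy with two logits and label y in {0, 1}."""
    m = max(logits)  # stabilize the log-sum-exp
    log_norm = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_norm - logits[y]

# BCE on logit z matches two-class cross-entropy on logits [0, z]:
for z in (-3.0, -0.5, 0.0, 1.2, 4.0):
    for y in (0, 1):
        assert abs(bce_with_logit(z, y) - two_class_ce([0.0, z], y)) < 1e-12
```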

aleximmer commented 4 years ago

To include BCELoss, one would need to extend the loss function and provide the second derivatives and some other quantities, I guess? Maybe I'll give it a shot sometime. I understand why it doesn't make sense to assign it high priority, though, since it is indeed almost the same as softmax with two classes.
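For the logit parametrization, the main second-order quantity is simple: the Hessian of the loss with respect to a single logit z is p(1 - p) with p = sigmoid(z), regardless of the label, so a symmetric square-root factor is sqrt(p(1 - p)). A quick plain-Python sanity check against a finite difference (the step size h is an arbitrary choice):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_with_logit(z, y):
    # -log p(y | z) for a Bernoulli with logit z
    p = sigmoid(z)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def hessian_fd(z, y, h=1e-4):
    # Central finite difference for d^2/dz^2 of the loss
    return (bce_with_logit(z + h, y) - 2 * bce_with_logit(z, y)
            + bce_with_logit(z - h, y)) / h**2

for z in (-2.0, 0.3, 1.7):
    p = sigmoid(z)
    closed_form = p * (1 - p)             # Hessian w.r.t. the logit
    sqrt_factor = math.sqrt(closed_form)  # square-root factor S with S * S = H
    for y in (0, 1):
        assert abs(hessian_fd(z, y) - closed_form) < 1e-5
        assert abs(sqrt_factor**2 - closed_form) < 1e-12
```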

The MSELoss problem is not so bad and can simply be avoided by reshaping the labels from an N-dimensional vector to an [N, 1] matrix.
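The reshape in question is a one-liner in PyTorch (toy model and data for illustration):

```python
import torch

torch.manual_seed(0)

N = 5
model = torch.nn.Linear(3, 1)  # scalar-output network: predictions are [N, 1]
X = torch.randn(N, 3)
y = torch.randn(N)             # labels as an N-dimensional vector

# Reshape the labels to [N, 1] so they match the model output shape.
loss = torch.nn.MSELoss()(model(X), y.unsqueeze(1))
```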

f-dangel commented 1 year ago

Hi Alex,

I rolled out support for BCEWithLogitsLoss, the binary equivalent of CrossEntropyLoss, in most extensions (#279, #280, #282, #283, #284). I favored BCEWithLogitsLoss over BCELoss because Fisher = GGN for the former.
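The Fisher = GGN property can be verified numerically for a single logit: the GGN uses the loss Hessian with respect to the logit, p(1 - p), while the Fisher is the expected squared score E_y[(p - y)^2] under y ~ Bernoulli(p). A plain-Python sketch of the check:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for z in (-1.5, 0.0, 0.8, 2.5):
    p = sigmoid(z)

    # GGN block: Hessian of -log p(y | z) w.r.t. the logit z (label-independent)
    ggn = p * (1 - p)

    # Fisher: E_y[(d/dz -log p(y | z))^2], score is p - y, y ~ Bernoulli(p)
    fisher = p * (p - 1) ** 2 + (1 - p) * (p - 0) ** 2

    assert abs(ggn - fisher) < 1e-12
```

Algebraically, p(p - 1)^2 + (1 - p)p^2 = p(1 - p), which is why the two quantities coincide for the logit parametrization.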

Feel free to close this issue if you believe it to be solved.