adalca / neurite

Neural networks toolbox focused on medical image analysis
Apache License 2.0

Explaining HyperConvFromDense #57

Open jackyko1991 opened 2 years ago

jackyko1991 commented 2 years ago

In HyperMorph, they combine a hypernetwork with VoxelMorph for automatic hyperparameter tuning.

Tracing their source code, I found that they use the "HyperConvFromDense" layer from neurite. Can you explain the concept behind this layer? Thanks.

adalca commented 2 years ago

HyperConvFromDense performs a convolution whose weights come from a dense layer, rather than being learned directly as parameters of the convolution itself. This enables us to use convolutions whose weights are determined by a hypernetwork. Does that make sense?

ahoopes commented 2 years ago

You can think of the HyperConvFromDense layer as a functional equivalent to a regular Conv layer, with the main exception that its internal weights are predicted based on the values of a secondary input (we'll call it 'h'). For example, a normal convolution layer, with input x and output y might be configured like:

y = Conv(x)

while the corresponding hyper-convolution would be configured like:

y = HyperConvFromDense([x, h])

This enables the layer to perform a convolutional operation dependent on h. In HyperMorph, h is the output of a small linear network, conditioned on some input hyperparameter, like lambda.
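
As a rough sketch of the full wiring (a hedged example assuming TensorFlow/Keras; the HyperConvFromDense constructor arguments and layer sizes here are my assumptions, so check the neurite source for the exact signature):

```python
import tensorflow as tf
import neurite as ne

# hyperparameter input, e.g. the regularization weight lambda
lam = tf.keras.Input(shape=(1,), name='lambda')

# small linear network producing the embedding h
h = tf.keras.layers.Dense(32, activation='relu')(lam)
h = tf.keras.layers.Dense(32, activation='relu')(h)

# image input; the hyper-convolution consumes both x and h
x = tf.keras.Input(shape=(160, 192, 1), name='image')
y = ne.layers.HyperConvFromDense(16, kernel_size=3)([x, h])  # assumed args

model = tf.keras.Model(inputs=[x, lam], outputs=y)
```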

Another way to think about the HyperConvFromDense layer is that it's really doing two steps under the hood - something like:

conv = predict_conv_weights(h)
y = conv(x)

First it predicts the convolutional kernel and bias weights given h, then it uses those weights to convolve x.
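
For intuition, a from-scratch sketch of those two steps (assuming TensorFlow; `hyper_conv`, `kernel_dense`, and `bias_dense` are illustrative names, not the neurite implementation) might look like:

```python
import tensorflow as tf

in_ch, out_ch, k = 4, 8, 3

# step 1 machinery: dense layers mapping h to flattened kernel and bias weights
kernel_dense = tf.keras.layers.Dense(k * k * in_ch * out_ch)
bias_dense = tf.keras.layers.Dense(out_ch)

def hyper_conv(x, h):
    # step 1: predict the convolution kernel and bias from h
    kernel = tf.reshape(kernel_dense(h), (k, k, in_ch, out_ch))
    bias = tf.reshape(bias_dense(h), (out_ch,))
    # step 2: convolve x with the predicted weights
    return tf.nn.conv2d(x, kernel, strides=1, padding='SAME') + bias

x = tf.random.normal((1, 32, 32, in_ch))  # image input
h = tf.random.normal((1, 16))             # embedding from the hypernetwork
y = hyper_conv(x, h)                      # shape (1, 32, 32, out_ch)
```

Note that tf.nn.conv2d shares its filters across the batch, so this toy version handles one sample at a time; per-sample weights across a batch take extra work (e.g. tf.map_fn or grouped convolutions).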

jackyko1991 commented 2 years ago

@ahoopes I have checked the hypernetwork in this notebook.

From my understanding, a hypernetwork is a network that generates the weights and biases of the inference network. Only the hypernetwork's own weights are updated during backpropagation; at inference time, the inference-network weights are generated from the user input (in HyperMorph, the input is the regularization lambda).
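
To make my understanding concrete, here is a toy sketch (assuming TensorFlow; this is not the neurite code, and all names are illustrative) showing that gradients only touch the weight-predicting layer:

```python
import tensorflow as tf

# toy "hypernetwork": a dense layer predicting a flattened 3x3x1x1 kernel
hyper = tf.keras.layers.Dense(3 * 3 * 1 * 1)

x = tf.random.normal((1, 8, 8, 1))  # inference-network input
lam = tf.constant([[0.5]])          # hyperparameter input

with tf.GradientTape() as tape:
    kernel = tf.reshape(hyper(lam), (3, 3, 1, 1))
    y = tf.nn.conv2d(x, kernel, strides=1, padding='SAME')
    loss = tf.reduce_mean(y ** 2)

# the convolution itself owns no variables, so the only trainable
# weights (and gradients) belong to the hypernetwork's dense layer
grads = tape.gradient(loss, hyper.trainable_variables)
print([g.shape for g in grads])  # dense kernel (1, 9) and bias (9,)
```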

Theoretically, the hypernetwork could be an RNN or CNN to cut the number of trainable parameters and thus the model size. Would it be possible to produce the hyper-conv weights without a dense layer as the final output? My concern is that for deeper inference networks, the number of hypernetwork outputs would also grow.

adalca commented 2 years ago

I don't have a full grasp of overfitting in hypernetworks (@ahoopes might know more?), but @jackyko1991, if your concern is overfitting, I don't think there is an overfitting issue here (regardless of size), since you won't validate on hyperparameter values outside the range used during training.