Closed rasmuspjohansson closed 3 months ago
Adding the import `from fastai.layers import _get_norm` so that the following code can run:
```python
def BatchNormZero(nf, ndim=2, **kwargs):
    "BatchNorm layer with `nf` features and `ndim` initialized depending on `norm_type`. Weights initialized to zero."
    return _get_norm('BatchNorm', nf, ndim, zero=True, **kwargs)
```
After this fix it becomes possible to train a timm-model-based unet with `attention` or `double_attention` as the bottleneck.
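For context, the zero-init behaviour that `_get_norm('BatchNorm', nf, ndim, zero=True)` gives `BatchNormZero` can be sketched in plain PyTorch. This is a minimal sketch, not fastai's exact implementation (the real `_get_norm` also initializes the bias and validates `ndim`); the helper name `batchnorm_zero_sketch` is made up for illustration:

```python
import torch.nn as nn

def batchnorm_zero_sketch(nf, ndim=2, **kwargs):
    # Look up nn.BatchNorm1d / BatchNorm2d / BatchNorm3d by ndim,
    # mirroring how fastai's _get_norm builds the layer name.
    bn = getattr(nn, f"BatchNorm{ndim}d")(nf, **kwargs)
    if bn.affine:
        # zero=True: start the learnable scale at 0, so the layer
        # initially outputs zeros (useful e.g. in residual branches).
        bn.weight.data.zero_()
    return bn

bn = batchnorm_zero_sketch(64)
print(type(bn).__name__)          # BatchNorm2d
print(float(bn.weight.abs().sum()))  # 0.0
```

Zero-initializing the scale makes a residual branch start as an identity mapping, which is why this variant is handy in attention bottlenecks.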