walkwithfastai / walkwithfastai.github.io

Host for https://walkwithfastai.com

Fix: adding import necessary for using attention layers as bottlenecks #70

Closed rasmuspjohansson closed 3 months ago

rasmuspjohansson commented 3 months ago

This PR adds the import `from fastai.layers import _get_norm`, which is needed for the following code to run:

```python
from fastai.layers import _get_norm

def BatchNormZero(nf, ndim=2, **kwargs):
    "BatchNorm layer with `nf` features and `ndim` initialized depending on norm_type. Weights initialized to zero."
    return _get_norm('BatchNorm', nf, ndim, zero=True, **kwargs)
```

With this fix, it becomes possible to train a timm-model-based UNet with `attention` or `double_attention` as the bottleneck.
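For context, here is a minimal sanity check of the fixed definition (not part of the PR; it assumes fastai v2's `_get_norm`, which zero-fills the affine weight and sets the bias to a small epsilon when `zero=True`):

```python
import torch
from fastai.layers import _get_norm

def BatchNormZero(nf, ndim=2, **kwargs):
    "BatchNorm layer with `nf` features. Weights initialized to zero."
    return _get_norm('BatchNorm', nf, ndim, zero=True, **kwargs)

# With zero=True, _get_norm fills the affine weight with 0 (and the bias
# with 1e-3), so the layer's output starts near zero. This is the usual
# trick for letting a newly added residual/attention branch begin as an
# approximate no-op, keeping early training stable.
bn = BatchNormZero(64)
assert isinstance(bn, torch.nn.BatchNorm2d)
assert torch.equal(bn.weight.data, torch.zeros(64))
```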

review-notebook-app[bot] commented 3 months ago

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



muellerzr commented 3 months ago

Thanks!