ViliamVadocz closed this issue 10 months ago.
Right, batch normalization is not available yet. We started by focusing on language models, where group-norm is far more frequent than batch-norm. We've just started adding the vision bits, e.g. convolutions, so as to get stable-diffusion to run. We would like to add some actual vision models now, so batch norm is likely to be added soonish (a week or two, I would say).
Not sure if it will be enough for your use case, but I've just merged #508, which adds a batch normalization layer. It can be used in a similar way to nn::batch_norm_2d, but with the limitation that it's only designed for inference and would not work for training (it doesn't keep track of or learn the running stats). I've tested it on some examples against the PyTorch implementation and it seems reasonable, but let me know if you see anything weird with it.
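Not from the PR itself, but roughly how I would expect such an inference-only layer to be exercised. This is a sketch assuming candle-nn exposes `batch_norm`, `BatchNormConfig`, `VarMap`/`VarBuilder` and `ModuleT::forward_t`; treat the exact names as assumptions rather than the merged API:

```rust
use candle_core::{DType, Device, ModuleT, Result, Tensor};
use candle_nn::{batch_norm, BatchNormConfig, VarBuilder, VarMap};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // Fresh variables for the sketch; in practice the weights and running
    // statistics would come from a checkpoint.
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &device);

    // 16-channel batch norm with the default eps.
    let bn = batch_norm(16, BatchNormConfig::default(), vb.pp("bn"))?;

    // NCHW input, normalized per channel using the (frozen) running statistics.
    let xs = Tensor::randn(0f32, 1f32, (8, 16, 32, 32), &device)?;
    let ys = bn.forward_t(&xs, /* train = */ false)?;
    println!("{:?}", ys.dims());
    Ok(())
}
```

Training-mode usage would additionally need the running statistics to be updated, which is the part that isn't supported here.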
I am training networks, so unfortunately this is not enough for my use case.
Interesting, what models do you actually care about? I had the feeling that most recent architectures use some form of group/layer norm instead of batch-norm (e.g. dinov2, the unet/vae from stable diffusion), and so I was thinking that we would only have batch-norm for inference, as it's a mess to get right for training, unlike group/layer norms. That said, I'm certainly happy to reconsider if there is much demand for it.
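To make the "mess" concrete: at training time the statistics come from the current batch, the layer has to update its running statistics as a side effect, and the backward pass has to differentiate through the batch mean and variance. A minimal, framework-free sketch of the extra work (all names are made up, single channel, biased variance for simplicity):

```rust
// Illustration only: what a *training* batch-norm step has to do on top of the
// inference path, written against plain slices rather than any candle API.
fn batch_norm_train_step(
    x: &mut [f32],          // activations for one channel across the batch
    running_mean: &mut f32, // state that inference later relies on
    running_var: &mut f32,
    momentum: f32,          // e.g. 0.1
    eps: f32,               // e.g. 1e-5
) {
    // 1. Statistics are computed over the *batch*, so every sample's output
    //    depends on every other sample (unlike layer/group norm).
    let n = x.len() as f32;
    let mean = x.iter().sum::<f32>() / n;
    let var = x.iter().map(|v| (v - mean).powi(2)).sum::<f32>() / n;

    // 2. The running statistics must be updated as a side effect; this is the
    //    state an inference-only layer reads but never learns.
    *running_mean = (1.0 - momentum) * *running_mean + momentum * mean;
    *running_var = (1.0 - momentum) * *running_var + momentum * var;

    // 3. Normalize with the batch statistics (the backward pass then has to
    //    go through `mean` and `var` as well).
    for v in x.iter_mut() {
        *v = (*v - mean) / (var + eps).sqrt();
    }
}
```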
I am working with ResNets for AlphaZero / MuZero.
Has there been any progress on this front?
> Interesting, what models do you actually care about? I had the feeling that most recent architectures use some form of group/layer norm instead of batch-norm (e.g. dinov2, the unet/vae from stable diffusion), and so I was thinking that we would only have batch-norm for inference, as it's a mess to get right for training, unlike group/layer norms. That said, I'm certainly happy to reconsider if there is much demand for it.
I'm using MobileNetV3, which needs trainable batchnorms, as well as other mobile-scale realtime classification convnets.
Not much progress, I'm afraid. @Awpteamoose do you have some MobileNetV3 or other model code that you could share? It would be very interesting to point at it as an external resource that uses candle. If I understand correctly, you're training these models? I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.
I was porting my implementation from dfdx (https://github.com/coreylowman/dfdx/pull/794) and halfway through noticed that batchnorms aren't trainable, so I don't really have any code to share.
> I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.
I'm probably just out of date, as the field moves very fast, but the transformers I have looked at also require an order of magnitude more FLOPS. I'm doing inference on tiny single-core CPUs as part of massively parallelised video analysis, so even real-time is too slow for me.
@LaurentMazare This should be closed due to the merge of #1504
I am trying to translate some code I wrote with tch-rs into candle as an experiment to see what the library is like. It looks like I stumbled into a roadblock almost immediately. I have a convolutional neural network made up of many residual blocks, and each residual block internally uses batch normalization. In tch-rs, I could use nn::batch_norm_2d. Is batch normalization not implemented by candle yet?
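For reference, the tch-rs side of what I'm porting looks roughly like the sketch below (illustrative names, channel counts and configs, assuming tch-rs's `nn::conv2d`, `nn::batch_norm2d` and `nn::func_t`; not my actual code):

```rust
use tch::nn;

// A residual block roughly as it might look in tch-rs; names, channel counts
// and configs are illustrative.
fn res_block(p: &nn::Path, channels: i64) -> impl nn::ModuleT {
    let conv_cfg = nn::ConvConfig { padding: 1, ..Default::default() };
    let conv1 = nn::conv2d(p / "conv1", channels, channels, 3, conv_cfg);
    let bn1 = nn::batch_norm2d(p / "bn1", channels, Default::default());
    let conv2 = nn::conv2d(p / "conv2", channels, channels, 3, conv_cfg);
    let bn2 = nn::batch_norm2d(p / "bn2", channels, Default::default());
    // The `train` flag passed to the batch norms is what needs a trainable
    // batch norm equivalent on the candle side.
    nn::func_t(move |xs, train| {
        let ys = xs.apply(&conv1).apply_t(&bn1, train).relu();
        ys.apply(&conv2).apply_t(&bn2, train) + xs
    })
}
```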