huggingface / candle

Minimalist ML framework for Rust

Trainable batch normalization #467

Closed · ViliamVadocz closed this 10 months ago

ViliamVadocz commented 1 year ago

I am trying to translate some code I wrote with tch-rs into candle as an experiment to see what the library is like. It looks like I stumbled into a roadblock almost immediately. I have a convolutional neural network made up of many residual blocks, and each residual block internally uses batch normalization.

In tch-rs, I could use nn::batch_norm_2d. Is batch normalization not implemented in candle yet?
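
For reference, here is a rough sketch of the kind of residual block I mean, written with tch-rs (the `res_block` helper and the layer names are illustrative, not my actual code):

```rust
use tch::nn::{self, ModuleT};

// Rough sketch of a residual block that uses batch norm in tch-rs.
fn res_block(vs: &nn::Path, channels: i64) -> impl ModuleT {
    let cfg = nn::ConvConfig { padding: 1, bias: false, ..Default::default() };
    let conv1 = nn::conv2d(vs / "conv1", channels, channels, 3, cfg);
    let bn1 = nn::batch_norm2d(vs / "bn1", channels, Default::default());
    let conv2 = nn::conv2d(vs / "conv2", channels, channels, 3, cfg);
    let bn2 = nn::batch_norm2d(vs / "bn2", channels, Default::default());
    nn::func_t(move |xs, train| {
        // Batch norm behaves differently in train vs eval mode, hence apply_t.
        let ys = xs
            .apply(&conv1)
            .apply_t(&bn1, train)
            .relu()
            .apply(&conv2)
            .apply_t(&bn2, train);
        // Skip connection followed by the final activation.
        (xs + ys).relu()
    })
}
```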

LaurentMazare commented 1 year ago

Right, batch normalization is not available yet. We started by focusing on language models, where group norm is far more frequent than batch norm. We've just started adding the vision bits (e.g. convolutions, so as to get stable-diffusion to run), and we would like to add some actual vision models now, so batch norm is likely to be added soonish (a week or two, I would say).

LaurentMazare commented 1 year ago

Not sure if it will be enough for your use case, but I've just merged #508, which adds a batch normalization layer. It can be used in a similar way to nn::batch_norm_2d, but with the limitation that it's only designed for inference and will not work for training (it doesn't keep track of or learn the running stats). I've tested it on some examples against the PyTorch implementation and it seems reasonable, but let me know if you see anything weird with it.
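
Usage looks roughly like this (a minimal sketch; the `batch_norm`/`BatchNormConfig` names follow current candle_nn and may differ slightly from the API as merged in #508):

```rust
use candle_core::{DType, Device, ModuleT, Result, Tensor};
use candle_nn::{batch_norm, BatchNormConfig, VarBuilder, VarMap};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &device);
    // A batch-norm layer over 16 channels; at this point it only supports
    // inference, i.e. it normalizes with fixed (loaded) running stats.
    let bn = batch_norm(16, BatchNormConfig::default(), vb.pp("bn"))?;
    // NCHW input: batch of 8, 16 channels, 32x32 feature maps.
    let xs = Tensor::randn(0f32, 1f32, (8, 16, 32, 32), &device)?;
    let ys = bn.forward_t(&xs, /* train = */ false)?;
    println!("{:?}", ys.dims());
    Ok(())
}
```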

ViliamVadocz commented 1 year ago

I am training networks, so unfortunately this is not enough for my use case.

LaurentMazare commented 1 year ago

Interesting, what models do you actually care about? I had the feeling that most recent architectures use some form of group/layer norm instead of batch norm (e.g. dinov2, the unet/vae from stable diffusion), and so I was thinking that we would only have batch norm for inference, as it's a mess to get right for training, unlike group/layer norms. That said, I'm certainly happy to reconsider if there is much demand for it.
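
For context on why it's messy: during training, batch norm normalizes with per-batch statistics while also maintaining an exponential moving average of them for later inference, roughly like this illustrative sketch (not candle code):

```rust
// Illustrative sketch (not candle code) of the running-stat bookkeeping
// that makes batch norm stateful during training; PyTorch's default
// momentum is 0.1. In practice these are per-channel vectors.
fn update_running_stats(
    running_mean: &mut f64,
    running_var: &mut f64,
    batch_mean: f64,
    batch_var: f64,
    momentum: f64,
) {
    // Exponential moving average over per-batch statistics; at inference
    // time these running values are used instead of the batch statistics.
    *running_mean = (1.0 - momentum) * *running_mean + momentum * batch_mean;
    *running_var = (1.0 - momentum) * *running_var + momentum * batch_var;
}
```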

ViliamVadocz commented 1 year ago

I am working with ResNets for AlphaZero / MuZero.

ViliamVadocz commented 1 year ago

Has there been any progress on this front?

Awpteamoose commented 1 year ago

> Interesting, what models do you actually care about? [...] That said, I'm certainly happy to reconsider if there is much demand for it.

I'm using MobileNetV3, which needs trainable batch norms, as well as other mobile-scale real-time classification convnets.

LaurentMazare commented 1 year ago

Not much progress, I'm afraid. @Awpteamoose, do you have some MobileNetV3 or other model code that you could share? It would be very interesting to point to it as an external resource that uses candle. If I understand correctly, you're training these models? I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.

Awpteamoose commented 1 year ago

I was porting my implementation from dfdx (https://github.com/coreylowman/dfdx/pull/794) and halfway through noticed that batch norms aren't trainable, so I don't really have any code to share.

> I would have assumed that nowadays even mobile-scale vision models have mostly switched to transformers like tinyvit etc.

I'm probably just out of date, as the field moves very fast, but the transformers I have looked at also require an order of magnitude more FLOPS. I'm doing inference on tiny single-core CPUs as part of massively parallelised video analysis, so even real-time is too slow for me.

nkoppel commented 10 months ago

@LaurentMazare This should be closed now that #1504 has been merged.
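
For reference, with #1504 merged a trainable batch norm can be used roughly like this (a minimal sketch assuming current candle_nn names; `forward_t`'s train flag switches between batch statistics and the tracked running statistics):

```rust
use candle_core::{DType, Device, ModuleT, Result, Tensor};
use candle_nn::{batch_norm, BatchNormConfig, VarBuilder, VarMap};

fn main() -> Result<()> {
    let device = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &device);
    let bn = batch_norm(16, BatchNormConfig::default(), vb.pp("bn"))?;
    let xs = Tensor::randn(0f32, 1f32, (8, 16, 32, 32), &device)?;
    // train = true: normalize with the batch statistics and update the
    // running mean/variance tracked by the layer.
    let ys_train = bn.forward_t(&xs, true)?;
    // train = false: normalize with the tracked running statistics.
    let ys_eval = bn.forward_t(&xs, false)?;
    println!("{:?} {:?}", ys_train.dims(), ys_eval.dims());
    Ok(())
}
```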