quic / aimet

AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
https://quic.github.io/aimet-pages/index.html

Model is too big to apply AdaRound #2698

Open · yanz0920 opened 5 months ago

yanz0920 commented 5 months ago

What should I do when the model is too large to apply AdaRound?

For example, when the model has 6B parameters and the dtype is torch.float32, the storage requirements are as follows: model: 24 GB, quantsim_model: 24 GB.
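A quick back-of-the-envelope check of those numbers (plain arithmetic, no AIMET specifics assumed):

```python
# Rough weight-memory estimate for a 6B-parameter torch.float32 model.
num_params = 6e9
bytes_per_param = 4                      # torch.float32 is 4 bytes per value
gb = num_params * bytes_per_param / 1e9  # decimal gigabytes
print(f"~{gb:.0f} GB for the model weights alone")  # ~24 GB
# QuantizationSimModel holds its own copy of the model, so keeping both
# the original and the quantsim model roughly doubles that footprint.
```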

But I hit an OOM error when running AdaRound on an NVIDIA A100, which has 80 GB of CUDA memory...

quic-mangal commented 4 months ago

@quic-hitameht could you help answer this?

quic-hitameht commented 4 months ago

Hi @yanz0920, during AdaRound optimization we try to put all of the cached intermediate activation data for a given layer on the GPU for faster optimization whenever possible. In your case, you can disable this optimization by patching the AdaroundOptimizer.enable_caching_acts_data method, as shown in this unit test:

https://github.com/quic/aimet/blob/develop/TrainingExtensions/torch/test/python/test_adaround_weight.py#L889
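For reference, a minimal sketch of that patch. The patching approach follows the linked test; the import paths, the apply_adaround arguments, and the names model, dummy_input, and data_loader are assumptions based on typical AIMET PyTorch usage and may differ across versions:

```python
from unittest.mock import patch

from aimet_torch.adaround.adaround_optimizer import AdaroundOptimizer
from aimet_torch.adaround.adaround_weight import Adaround, AdaroundParameters

# Patch enable_caching_acts_data to always report False, so AdaRound keeps
# the cached activation data off the GPU (slower optimization, less CUDA memory).
with patch.object(AdaroundOptimizer, 'enable_caching_acts_data', return_value=False):
    params = AdaroundParameters(data_loader=data_loader,  # your calibration loader
                                num_batches=4)
    adarounded_model = Adaround.apply_adaround(
        model,                    # the FP32 model to round
        dummy_input,              # example input used for tracing
        params,
        path='./',
        filename_prefix='adaround',
        default_param_bw=8,
    )
```

Because the patch only forces the method's return value, it works regardless of the method's exact signature in your AIMET version.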

Hope this helps. Please let us know if you have further questions.