Open yanz0920 opened 5 months ago
@quic-hitameht could you help answer this?
Hi @yanz0920, during AdaRound optimization we try to put all of the cached intermediate activation data for a given layer on the GPU, whenever possible, to speed up the optimization. In your case, you could disable this behavior by patching the AdaroundOptimizer.enable_caching_acts_data method, as shown in this unit test.
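A minimal sketch of that patch, assuming aimet_torch 1.x module paths (they may differ between AIMET versions) and your own model, dummy input, and calibration data loader:

```python
from unittest.mock import patch

from aimet_torch.adaround.adaround_weight import Adaround, AdaroundParameters
from aimet_torch.adaround.adaround_optimizer import AdaroundOptimizer

model = ...        # your torch.nn.Module
dummy_input = ...  # a representative input tensor for the model
data_loader = ...  # calibration DataLoader used by AdaRound

params = AdaroundParameters(data_loader=data_loader, num_batches=4)

# Patch enable_caching_acts_data() to report False so the cached intermediate
# activations stay on CPU instead of being moved to the GPU per layer.
with patch.object(AdaroundOptimizer, 'enable_caching_acts_data', return_value=False):
    adarounded_model = Adaround.apply_adaround(
        model, dummy_input, params,
        path='./', filename_prefix='adaround', default_param_bw=8)
```

Keeping the activation cache on CPU trades some optimization speed for a much smaller GPU memory footprint.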
Hope this helps. Please let us know if you have further questions.
What should I do when the model is too large to use AdaRound?
For example, when the model has 6B parameters and the dtype is torch.float32, the storage requirements are roughly:
model: 24 GB
quantsim_model: 24 GB
But I hit an OOM when running AdaRound on an NVIDIA A100, which has 80 GB of CUDA memory.
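For reference, the 24 GB figures above follow from the parameter count times the bytes per float32 element; a quick check of that arithmetic (the 6B count is taken from the question):

```python
# Rough memory footprint of 6B float32 parameters.
num_params = 6e9      # 6B parameters, as stated above
bytes_per_param = 4   # torch.float32 uses 4 bytes per element
total_bytes = num_params * bytes_per_param
print(total_bytes / 1e9, "GB")  # ~24 GB, needed once for the model and once for the quantsim model
```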