SeanNaren / deepspeech.pytorch

Speech Recognition using DeepSpeech2.
MIT License

partition_activations produces no activation memory improvement with zero3 #693

Open andrasiani opened 1 year ago

andrasiani commented 1 year ago

Hi, I am trying to run a GPT-2 model with block size 2048, and I cannot use a batch size larger than 16 because activation memory becomes too large. To reduce activation memory I already use DeepSpeed activation checkpointing on each transformer block plus AMP. I saw there is also an option to partition/shard activations, as advertised by Megatron, but when I enable it I see no effect at all.
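
For reference, a minimal sketch of how I understand partition_activations is typically wired up (config keys and calls follow the DeepSpeed activation checkpointing docs; the model/forward details and `mpu_=None` here are placeholders, not my exact setup):

```python
import deepspeed

# DeepSpeed config (JSON file or dict passed to deepspeed.initialize).
ds_config = {
    "train_micro_batch_size_per_gpu": 16,
    "zero_optimization": {"stage": 3},
    "activation_checkpointing": {
        "partition_activations": True,
        "contiguous_memory_optimization": False,
        "cpu_checkpointing": False,
    },
}

# The checkpointing module also has to be configured explicitly;
# mpu_ is the model-parallel unit (None here as a placeholder).
deepspeed.checkpointing.configure(mpu_=None, partition_activations=True)

def forward_blocks(blocks, hidden_states):
    # Each transformer block is wrapped with DeepSpeed's checkpoint
    # function (rather than torch.utils.checkpoint) so that the
    # activation_checkpointing settings above can take effect.
    for block in blocks:
        hidden_states = deepspeed.checkpointing.checkpoint(block, hidden_states)
    return hidden_states
```

Even with this in place, GPU memory usage is identical with partition_activations set to true or false.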

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.