BlinkDL / RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
Apache License 2.0

v4neo multi gpu training split (not duplicated ram/vram) #157

Closed. ExtReMLapin closed this issue 11 months ago.

ExtReMLapin commented 1 year ago

Hello,

How exactly do you train on multiple GPUs without duplicating RAM/VRAM? I don't just want to train faster; I also want to be able to train bigger models using multiple A40 48 GB GPUs.

Thanks.

BlinkDL commented 11 months ago

use deepspeed stage2 :)
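
For context: DeepSpeed ZeRO stage 2 partitions the optimizer states and gradients across GPUs instead of replicating them on every device, which is exactly what the question asks for (the optimizer states are typically the largest memory consumer when training with Adam). Below is a minimal, self-contained sketch of using ZeRO stage 2 via the DeepSpeed API directly; the toy model, config values, and filename are illustrative, not the RWKV-LM v4neo training code, which instead passes a DeepSpeed stage-2 strategy to its PyTorch Lightning trainer.

```python
# Minimal ZeRO stage 2 sketch (illustrative; not the RWKV-LM script).
import torch
import torch.nn as nn
import deepspeed

# Hypothetical toy model standing in for RWKV.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                # shard optimizer states + gradients across GPUs
        "overlap_comm": True,      # overlap gradient reduction with backward pass
        "contiguous_gradients": True,
    },
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that owns the
# partitioned optimizer/gradient state and the cross-GPU communication.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# One training step: the engine's backward()/step() replace the usual
# loss.backward() / optimizer.step() pair.
x = torch.randn(8, 1024).to(model_engine.device).half()
loss = model_engine(x).float().pow(2).mean()
model_engine.backward(loss)
model_engine.step()
```

Launch it with the DeepSpeed launcher so each GPU gets its own process, e.g. `deepspeed --num_gpus=2 train_sketch.py` (filename hypothetical). Note that stage 2 still replicates the model parameters themselves on every GPU; if the parameters alone don't fit, stage 3 shards those as well, at some communication cost.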