casper-hansen opened 1 week ago
> We can look into this in more detail. Meanwhile, have you tried using mosaicml/composer for training? Are there specific features you are relying on in TorchTitan?
I would really appreciate it if you could look into it! TorchTitan uses `torch.distributed.pipelining`, most of which is only available from 2.5.0 or in nightly builds.

There are many key features, like FSDP2, 4D parallelism, FP8, and `torch.compile`, that make Llama models scale well in pretraining. You also get full control over the training loop, which is desirable if you want to experiment.
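As a quick sanity check that the installed torch build actually ships the pipelining APIs mentioned above, something like this works (an illustrative snippet; the version bound is the one stated in this thread):

```python
# Illustrative check: torch.distributed.pipelining (used by TorchTitan)
# is only importable on torch >= 2.5.0 or a recent nightly.
try:
    from torch.distributed.pipelining import Schedule1F1B, ScheduleGPipe, pipeline  # noqa: F401
except ImportError as err:
    raise RuntimeError(
        'torch.distributed.pipelining requires torch >= 2.5.0 or a nightly build'
    ) from err
```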
🚀 Feature Request
Supporting TP and SP seems quite easy to do with the `replication` parameter:
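For example, something along these lines (a minimal sketch, not tested code; the remote path and the `tp_degree` value are placeholders I chose for illustration):

```python
# Sketch: feed StreamingDataset into a TP/SP setup by replicating samples
# across the ranks of each tensor-parallel group.
from streaming import StreamingDataset

tp_degree = 8  # ranks that should all receive identical samples (placeholder)

dataset = StreamingDataset(
    remote='s3://my-bucket/my-dataset',  # placeholder location
    local='/tmp/my-dataset',
    batch_size=2,
    shuffle=True,
    # Replicate each sample across `tp_degree` consecutive ranks, so every
    # member of a tensor-parallel group sees the same data.
    replication=tp_degree,
)
```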
I have tried various ways to enable PP, without success. I tried factoring `pp` into the equations for `replication` and `num_canonical_nodes`, but I cannot get it to work: every variant produces an unexpectedly high loss.
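Roughly, the kind of arithmetic I attempted looked like this (a sketch; the variable names and the final formula are illustrative, and none of the variants I tried trained correctly):

```python
# Illustrative sketch of one attempt (all names are mine, not library API).
world_size = 64               # total ranks
tp, pp = 8, 2                 # tensor- and pipeline-parallel degrees
dp = world_size // (tp * pp)  # data-parallel degree

# Attempt: also replicate samples across the pp dimension, so every rank
# in a tp x pp block sees the same data.
replication = tp * pp

# num_canonical_nodes then has to be consistent with the number of
# "logical" nodes left after replication; this is the part I could not
# get right, since every variant produced an unexpectedly high loss.
num_canonical_nodes = world_size // replication
```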
Motivation
I want to use the MosaicML streaming library with 4D parallelism. Specifically, I rely on TorchTitan as my training tool and have simply swapped in the streaming library by modifying the StreamingTextDataset implementation from LLM Foundry.
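Concretely, the swap looks roughly like this (a minimal sketch under my own assumptions: the tokenized-sample format, the 'tokens' field name, the dtype, and the remote path are all illustrative, not the actual LLM Foundry code):

```python
# Sketch: a StreamingDataset subclass that yields fixed-length token windows,
# in the spirit of LLM Foundry's StreamingTextDataset.
from typing import Any, Dict

import numpy as np
from streaming import StreamingDataset
from torch.utils.data import DataLoader


class StreamingTextDataset(StreamingDataset):
    """Streams pre-tokenized shards and returns fixed-length token windows."""

    def __init__(self, max_seq_len: int, **kwargs: Any):
        super().__init__(**kwargs)
        self.max_seq_len = max_seq_len

    def __getitem__(self, idx: int) -> np.ndarray:
        sample: Dict[str, Any] = super().__getitem__(idx)
        # Assumes shards were written with a raw-bytes 'tokens' field.
        tokens = np.frombuffer(sample['tokens'], dtype=np.int64).copy()
        return tokens[: self.max_seq_len]


dataset = StreamingTextDataset(
    max_seq_len=4096,
    remote='s3://my-bucket/tokenized',  # placeholder location
    local='/tmp/tokenized',
    batch_size=2,
    shuffle=True,
    replication=8,  # e.g. tp_degree, as in the sketch above
)
loader = DataLoader(dataset, batch_size=2, num_workers=4)
```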