axolotl-ai-cloud / axolotl


[Feature Request] Multi-Node Model Parallel #887

Open · brthor opened 9 months ago

brthor commented 9 months ago

⚠️ Please check that this feature request hasn't been suggested before.

🔖 Feature description

Allow training in model parallel mode when more than one node is involved.

Specifically, allow the model to be split sequentially across GPUs when more than one node is present in the system.

This would allow training large models across multiple nodes in cases where the required VRAM cannot fit on a single machine, whether due to hardware or space constraints.

✔️ Solution

From these PRs: https://github.com/OpenAccess-AI-Collective/axolotl/pull/816 https://github.com/OpenAccess-AI-Collective/axolotl/pull/538

It seems the solution could be as simple as enabling the model parallel state when WORLD_SIZE > 1, gated by a configurable value set either in the config YAML file or passed via the CLI.
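For illustration, a minimal sketch of what such a gate might look like in the model-loading path (the `model_parallel` key and the `load_with_model_parallel` helper are hypothetical, not axolotl's actual config surface or API):

```python
import os

import torch
from transformers import AutoModelForCausalLM


def load_with_model_parallel(cfg: dict):
    """Load the base model, splitting it sequentially over GPUs when requested.

    `model_parallel` is a hypothetical YAML/CLI flag; the real axolotl config
    keys may differ.
    """
    world_size = int(os.environ.get("WORLD_SIZE", "1"))

    if cfg.get("model_parallel") and world_size > 1:
        # Let accelerate place layers sequentially over every visible GPU
        # on this node (device_map cannot currently reach other nodes).
        device_map = "auto"
    else:
        # Default placement: the whole model on this process's local GPU.
        device_map = {"": int(os.environ.get("LOCAL_RANK", "0"))}

    return AutoModelForCausalLM.from_pretrained(
        cfg["base_model"],
        torch_dtype=torch.bfloat16,
        device_map=device_map,
    )
```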

❓ Alternatives

No response

📝 Additional Context

No response

Acknowledgements

creatorrr commented 5 months ago

@winglian any thoughts on this?

brthor commented 4 months ago

I tried to implement this and discovered that some custom work is needed to split the model across nodes and then ferry data back and forth between them. Accelerate supports multi-node training with MPI-like collective operators, but device_map does not support spanning multiple nodes.
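To make the "ferry data back and forth" point concrete, here is a rough sketch of the kind of custom point-to-point communication a cross-node sequential split would need (a naive two-stage pipeline; `run_stage` and the shapes are illustrative assumptions, not anything axolotl or Accelerate provide today):

```python
import torch
import torch.distributed as dist
import torch.nn as nn

# Hypothetical two-node sketch: rank 0 holds the first half of the layers,
# rank 1 holds the second half. One process per node, launched with torchrun,
# with dist.init_process_group("nccl") already called.


def run_stage(stage: nn.Module, micro_batch: torch.Tensor, hidden_size: int):
    """Forward one micro-batch through a naive two-stage pipeline.

    Activations are ferried between nodes with send/recv; this is exactly the
    cross-node placement that a transformers device_map cannot express.
    """
    rank = dist.get_rank()
    device = torch.device("cuda", 0)  # assume one visible GPU per process
    stage.to(device)

    if rank == 0:
        hidden = stage(micro_batch.to(device))
        dist.send(hidden.contiguous(), dst=1)  # ship activations to node 1
        return None

    hidden = torch.empty(micro_batch.shape[0], hidden_size, device=device)
    dist.recv(hidden, src=0)  # receive activations from node 0
    # Loss, backward(), and the reverse gradient hop back to rank 0 would go here.
    return stage(hidden)
```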