huggingface / alignment-handbook

Robust recipes to align language models with human and AI preferences
https://huggingface.co/HuggingFaceH4
Apache License 2.0

Hardware used for reproducing #33

Closed: nathan-az closed this issue 8 months ago

nathan-az commented 8 months ago

Any information on the exact hardware used when training Zephyr 7B Beta? The DeepSpeed config suggests that no CPU offloading was done, so were larger (or more) GPUs used?

It would be great to know the hardware used, both for reproducing the results and for safely expanding to other datasets (i.e. knowing that the current batch-size configuration won't OOM).
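For anyone wanting to check the offloading question themselves, here is a rough sketch of how to inspect the accelerate/DeepSpeed config. The file path follows the handbook's layout and the key names follow accelerate's DeepSpeed config format, but both are assumptions and should be verified against the repo:

```python
# Hypothetical sketch: inspect the handbook's ZeRO-3 accelerate config to see
# whether optimizer/parameter offloading is enabled. Path and key names are
# assumptions based on accelerate's config format, not confirmed by this thread.
import yaml

with open("recipes/accelerate_configs/deepspeed_zero3.yaml") as f:
    cfg = yaml.safe_load(f)

ds = cfg.get("deepspeed_config", {})
print("optimizer offload:", ds.get("offload_optimizer_device", "none"))
print("parameter offload:", ds.get("offload_param_device", "none"))
```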

edbeeching commented 8 months ago

I believe we used 2 nodes with 8x A100 80GB per node, but you should be able to train it on 1 node if you adjust the batch size and gradient accumulation.
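A minimal sketch of the arithmetic behind that adjustment: the effective (global) batch size is the product of per-device batch size, GPU count, and gradient accumulation steps, so halving the GPU count from 16 to 8 can be compensated by doubling accumulation. The per-device value of 8 below is a placeholder, not the recipe's actual setting:

```python
# Effective (global) batch size = per-device batch * num GPUs * grad accumulation.
# The per-device batch of 8 is illustrative, not the handbook's actual value.

def effective_batch_size(per_device: int, num_gpus: int, grad_accum: int) -> int:
    return per_device * num_gpus * grad_accum

# Reference run: 2 nodes x 8 A100 80GB = 16 GPUs, no accumulation.
two_node = effective_batch_size(per_device=8, num_gpus=16, grad_accum=1)

# Single-node run: 8 GPUs, so double grad accumulation to match the global batch.
one_node = effective_batch_size(per_device=8, num_gpus=8, grad_accum=2)

assert two_node == one_node == 128
```

Note that this keeps the optimization trajectory comparable but not identical (e.g. per-device memory pressure and throughput still differ), so some tolerance when reproducing results is expected.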