Open HatedFate opened 4 days ago
Hi @HatedFate I think you do not need to do anything, HuggingFace already did that for you. Your model should already be training on multi-GPU.
So I can simply run it the same way it is done in the Jupyter Notebook, right? Do I have to specify how many GPUs I am using, or will it default to using all the GPUs I allocated to it?
@HatedFate yeah, I think so. It will allocate all available resources by default.
I am still very new to LLMs. I have access to a large number of GPUs, and I would like to train this model across multiple GPUs (though I am not sure whether this is necessary or overkill). Previously, I used DistributedDataParallel for parallelization, but I am not sure how to integrate this into the Trainer.
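For what it's worth, the usual pattern with the HuggingFace Trainer is that you do not wire up DistributedDataParallel yourself: you launch the same training script once per GPU with `torchrun`, and the Trainer reads the environment variables the launcher sets (`RANK`, `WORLD_SIZE`, `LOCAL_RANK`) and wraps the model in DDP for you. A minimal sketch, assuming your notebook code is saved as `train.py` (a placeholder name) and you want 4 GPUs on one node:

```shell
# Single run on one GPU (or DataParallel across visible GPUs):
python train.py

# Distributed run: torchrun spawns 4 processes, one per GPU, and sets
# RANK / WORLD_SIZE / LOCAL_RANK so the Trainer switches to
# DistributedDataParallel automatically -- no code changes needed.
torchrun --nproc_per_node=4 train.py

# To restrict which GPUs are used, set CUDA_VISIBLE_DEVICES before launching:
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 train.py
```

The main practical difference from a plain `python train.py` run is that the per-device batch size in `TrainingArguments` is multiplied by the number of processes, so the effective global batch size grows with the GPU count.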