voidful / TextRL

Implementation of ChatGPT-style RLHF (Reinforcement Learning with Human Feedback) on any generation model in Hugging Face's transformers (bloomz-176B/bloom/gpt/bart/T5/MetaICL)
MIT License

Does the package support automatic multi-gpu? #24

Closed margarita-aicyd closed 1 year ago

margarita-aicyd commented 1 year ago

Hi,

Very excited to find this package useful!

Does it support multi-GPU? I am dealing with datasets whose inputs are much longer than 1024 tokens, and a CUDA error is raised when initializing the environment (env = MyEnv(...)).

Everything works fine if I switch to a smaller input size.

Thank you!

voidful commented 1 year ago

Hi,

I apologize for the inconvenience, but multi-GPU training is not currently supported. However, you can try passing device_map='auto' when loading the model to split it across the available devices. This may help you make use of multiple GPUs.
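To make the suggestion concrete, here is a minimal sketch of loading a generation model sharded across available GPUs with transformers' `device_map="auto"` (which requires the `accelerate` package). The helper names and the `max_memory` values are illustrative assumptions, not part of TextRL's API:

```python
def sharded_load_kwargs(max_memory=None):
    """Keyword arguments for from_pretrained() that shard the model.

    device_map="auto" lets accelerate place layers across GPUs
    (and CPU, if needed). max_memory optionally caps per-device
    usage, e.g. {0: "20GiB", 1: "20GiB"} -- values here are
    illustrative, tune them to your hardware.
    """
    kwargs = {"device_map": "auto"}
    if max_memory is not None:
        kwargs["max_memory"] = max_memory
    return kwargs


def load_policy_model(model_name):
    """Load a tokenizer and a sharded causal-LM (hypothetical helper)."""
    # Imported lazily so sharded_load_kwargs() works without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, **sharded_load_kwargs()
    )
    return tokenizer, model
```

Note that `device_map="auto"` only performs inference-style model parallelism (splitting one model's layers across devices); it is not the same as data-parallel multi-GPU training.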

Please let me know if you have any further questions or need assistance with anything else.

margarita-aicyd commented 1 year ago

Thank you!