microsoft / mttl

Building modular LMs with parameter-efficient fine-tuning.

Inquiry about Multi-GPU Training Support #43

Closed · rongaoli closed this issue 1 year ago

rongaoli commented 1 year ago

Hello,

I noticed the following code snippet in your project, and it caught my attention:

```python
trainer = Trainer(
    enable_checkpointing=not args.finetune_skip_es,
    devices=1,
    ...
    precision=int(args.precision) if args.precision in ["16", "32"] else args.precision,
    callbacks=callbacks,
    accumulate_grad_batches=args.gradient_accumulation_steps,
)
```

Does your project support multi-GPU training? The snippet above hardcodes `devices=1`, so I would like to know whether there are built-in options for running the training process on multiple GPUs.
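For context, here is roughly what I was imagining. This is just a minimal sketch on my side, assuming the standard PyTorch Lightning multi-GPU arguments (`accelerator`, `devices`, `strategy`) rather than anything specific to mttl:

```python
from pytorch_lightning import Trainer

# Sketch: would switching the Trainer to something like this be enough
# to train on multiple GPUs, or does mttl need additional changes?
trainer = Trainer(
    accelerator="gpu",  # run on GPUs
    devices=4,          # e.g. 4 GPUs instead of the hardcoded devices=1
    strategy="ddp",     # Lightning's distributed data parallel strategy
)
```

If mttl's training code is not DDP-safe (for example, if any state is assumed to live on a single device), I would appreciate knowing that as well.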

Thank you for your assistance!

Best regards