bigscience-workshop / xmtf

Crosslingual Generalization through Multitask Finetuning
https://arxiv.org/abs/2211.01786
Apache License 2.0
513 stars 37 forks

Use Petals without sharing GPU #14

Open raihan0824 opened 1 year ago

raihan0824 commented 1 year ago

Is it possible to use Petals for inference/prompt tuning without sharing my GPU?

Muennighoff commented 1 year ago

Not sure about that, maybe one of @borzunov @justheuristic @mryab knows?

borzunov commented 1 year ago

Hi @raihan0824,

Your GPU is not shared when you use a Petals client to run inference or fine-tuning. The GPU is only shared when you run a Petals server.

raihan0824 commented 1 year ago

Yes, but I want to run BLOOM in Petals using only my own GPU, not others'. Is that possible?

mryab commented 1 year ago

Hi, do you mean you want to use Petals with your GPU, but don't want to let others use it? I think you can set up a private swarm using these instructions. If you run into any trouble, the tutorial has a link to the Discord server, where we (and other users of Petals) can help you with technical issues.

Please keep in mind that you'll need around 176 GB of GPU memory just for the 8-bit parameters, though; if you only have a single GPU, your best bet is offloading or joining the public swarm.
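The 176 GB figure follows from simple arithmetic: BLOOM-176B has roughly 176 billion parameters, and 8-bit quantization stores one byte per parameter. A quick back-of-envelope check (the 24 GB per-GPU figure below is an illustrative assumption, not a Petals requirement):

```python
import math

# Back-of-envelope GPU memory estimate for hosting BLOOM-176B in 8-bit.
# Assumes 1 byte per parameter (int8) and ignores activation and
# attention-cache overhead, which add more on top in practice.

def int8_weight_memory_gb(num_params: float) -> float:
    """Memory needed to store int8 weights, in GB (1 GB = 1e9 bytes)."""
    return num_params * 1 / 1e9

total_gb = int8_weight_memory_gb(176e9)
print(f"BLOOM-176B int8 weights: ~{total_gb:.0f} GB")  # ~176 GB

# Number of 24 GB GPUs (RTX 3090/4090 class) needed for the weights alone:
gpus_needed = math.ceil(total_gb / 24)
print(f"24 GB GPUs needed (weights only): {gpus_needed}")  # 8
```

This is why a single consumer GPU cannot host the full 176B model, and why offloading or the public swarm is the fallback.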

raihan0824 commented 1 year ago

Well noted.

Is it possible to do prompt tuning with that private swarm? also what if I want to use the smaller bloom model such as bloomz-7b1-mt?

My goal is to do prompt tuning on bloomz-7b1-mt.

mryab commented 1 year ago

Yes, it is possible: you just need to specify a different set of initial peers in DistributedBloomConfig when creating DistributedBloomForCausalLM from the tutorial. By default, the config (and thus the model) connects to peers from the public swarm; you need to change these to the addresses of your peers in the private swarm.
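A minimal sketch of what that override might look like, assuming the Petals client API from the tutorial; the multiaddress below is a placeholder you would replace with your own server's address, and the `tuning_mode`/`pre_seq_len` arguments follow the prompt-tuning tutorial:

```python
# Sketch: point a Petals client at a private swarm instead of the public one.
# Assumes the petals package; INITIAL_PEERS is a placeholder multiaddress
# for your own Petals server in the private swarm.
from petals import DistributedBloomForCausalLM

INITIAL_PEERS = [
    "/ip4/10.0.0.1/tcp/31337/p2p/QmYourPrivateSwarmPeerId",  # placeholder
]

model = DistributedBloomForCausalLM.from_pretrained(
    "bigscience/bloom-petals",
    initial_peers=INITIAL_PEERS,  # overrides the public-swarm defaults
    tuning_mode="ptune",          # enable prompt tuning, per the tutorial
    pre_seq_len=16,               # number of trainable soft-prompt tokens
)
```

Only the soft prompt is trained client-side; the frozen transformer blocks stay on the swarm's servers.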

However, I'd say that for bloomz-7b1, you might not even need Petals (depends on your GPU setup, obviously). A reasonably new GPU should be able to host the whole model, so you'll be able to run it just with standard Transformers/PEFT. Do you have any specific reasons why you want to use Petals for this task?

raihan0824 commented 1 year ago

The reason I want to use Petals is that it can be used for prompt tuning instead of fine-tuning. I can't find other sources that provide prompt tuning for BLOOM.

mryab commented 1 year ago

Have you checked out https://github.com/huggingface/peft#use-cases? I think PEFT even showcases bigscience/bloomz-7b1, and the model support matrix includes BLOOM for prompt tuning.
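For reference, a prompt-tuning setup with PEFT might look like the sketch below. This assumes the `transformers` and `peft` packages and a GPU with enough memory to host the full bloomz-7b1 model; the virtual-token count is an illustrative choice:

```python
# Sketch: prompt tuning of BLOOM with Hugging Face PEFT (no Petals needed).
# Assumes the transformers and peft packages are installed and the GPU can
# hold the full model; num_virtual_tokens=16 is an illustrative setting.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

model_name = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name)

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,  # causal-LM objective for BLOOM
    num_virtual_tokens=16,         # length of the trainable soft prompt
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only the soft prompt is trainable
```

The wrapped model can then be passed to a standard Transformers `Trainer`, since only the soft-prompt embeddings carry gradients.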

raihan0824 commented 1 year ago

Thank you for the info! Will check it out.

So I want to confirm my initial question: it's possible to use Petals with my own GPU to do inference and prompt tuning on the bigscience/bloomz-7b1 model. Is that correct?

mryab commented 1 year ago

Yes, it is possible, but not necessary: with PEFT, you are likely to get the same result with fewer intermediate steps for setup.

raihan0824 commented 1 year ago

Thank you very much 🙏