alibaba / FederatedScope

An easy-to-use federated learning platform
https://www.federatedscope.io
Apache License 2.0

add prefix tuning, prompt tuning and p-tuning #658

Closed qbc2016 closed 1 year ago

qbc2016 commented 1 year ago

How to use:

- prefix tuning:
  ```yaml
  llm.adapter.args: [ { 'adapter_package': 'peft', 'adapter_method': 'prefix', 'num_virtual_tokens': 20 } ]
  ```
- prompt tuning:
  ```yaml
  llm.adapter.args: [ { 'adapter_package': 'peft', 'adapter_method': 'prompt', 'num_virtual_tokens': 20 } ]
  ```
- p-tuning:
  ```yaml
  llm.adapter.args: [ { 'adapter_package': 'peft', 'adapter_method': 'p-tuning', 'num_virtual_tokens': 20, 'encoder_hidden_size': 128 } ]
  ```

Remark: with tok_len = 512, 400, and 450 for prefix, prompt, and p-tuning respectively, the GPU memory usage is 32419 MiB, 31431 MiB, and 32433 MiB.
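For readers unfamiliar with how these entries map onto peft, here is a minimal sketch of a dispatch helper. It is a hypothetical illustration, not FederatedScope code: the function name `build_peft_config_spec` is invented, and only the peft config class names (`PrefixTuningConfig`, `PromptTuningConfig`, `PromptEncoderConfig`) come from the public peft API. The actual wiring inside FederatedScope may differ.

```python
# Hypothetical helper: translate one llm.adapter.args entry into the
# peft config class name and the keyword arguments it would receive.
# Class names follow the public peft API; the mapping itself is an
# assumption about how the adapter_method strings are interpreted.
PEFT_CONFIG_CLASSES = {
    'prefix': 'PrefixTuningConfig',    # prefix tuning
    'prompt': 'PromptTuningConfig',    # prompt tuning
    'p-tuning': 'PromptEncoderConfig', # p-tuning (MLP prompt encoder)
}


def build_peft_config_spec(args: dict):
    """Return (config class name, kwargs) for a single adapter entry."""
    if args.get('adapter_package') != 'peft':
        raise ValueError('only the peft adapter package is handled here')
    cls_name = PEFT_CONFIG_CLASSES[args['adapter_method']]
    # Everything except the routing keys is forwarded to the config,
    # e.g. num_virtual_tokens or encoder_hidden_size.
    kwargs = {k: v for k, v in args.items()
              if k not in ('adapter_package', 'adapter_method')}
    return cls_name, kwargs
```

For example, the p-tuning entry above would resolve to `PromptEncoderConfig` with `num_virtual_tokens=20` and `encoder_hidden_size=128`.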