rmihaylov / falcontune

Tune any FALCON in 4-bit
Apache License 2.0

8-bit models #33


zepmck commented 1 year ago

Hi all, I am trying to fine-tune falcon-40b-instruct but I am running into the following error:

```
ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode.
In order to use 8-bit models that have been loaded across multiple GPUs the solution is to use Naive Pipeline Parallelism.
Therefore you should not specify that you are under any distributed regime in your accelerate config.
```
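If I read the error correctly, an 8-bit model spread over several GPUs can only be trained with naive pipeline parallelism: a single process, with the layers sharded across devices and no distributed launcher. A minimal sketch of what that would look like, assuming the standard transformers/bitsandbytes 8-bit integration rather than falcontune's own loading path (the model id and kwargs below are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b-instruct"  # assumed HF id for the model being tuned

tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" shards the layers across all visible GPUs, so each
# forward pass flows through them sequentially (naive pipeline parallelism),
# which the error says is the only multi-GPU mode for 8-bit models.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,       # bitsandbytes 8-bit quantization
    device_map="auto",
    trust_remote_code=True,  # Falcon ships custom modeling code
)
```

The corresponding launch would be a plain `python train.py` (one process), with `distributed_type: NO` in the accelerate config, rather than `torchrun` or `accelerate launch` in a distributed mode. Is that the right direction here?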

Any suggestions?

Thanks!