FMInference / FlexLLMGen

Running large language models on a single GPU for throughput-oriented scenarios.
Apache License 2.0

opt-175b model: how to load the model from disk. #73

Open prof-schacht opened 1 year ago

prof-schacht commented 1 year ago

I tried to load the opt-175b model by using the following command:

python3 -m flexgen.flex_opt --model facebook/opt-175b --percent 0 0 100 0 100 0 --offload-dir ./tmp_offline

The issue I have is that after converting the weights to numpy format (using Alpa, as described in the docs), I don't know how to tell the script to load the model from that folder.

I always get the error: OSError: facebook/opt-175b is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'

How do you have to define the command to load the locally stored weights of the opt-175b model?

xangma commented 1 year ago

The numpy weights seem to want to be put in ~/opt_weights/opt-175b-np?

merrymercy commented 1 year ago

Yes. The numpy weights should be put under ~/opt_weights/opt-175b-np/. You can check the downloaded weights of smaller models for the required format.
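Based on the convention above, a small sketch of a pre-flight check before launching FlexGen. The helper name and the default `~/opt_weights` root are assumptions for illustration; only the `<root>/opt-175b-np/` layout comes from the answer above.

```python
import os

def find_np_weights(model_name: str, weight_root: str = "~/opt_weights") -> str:
    # Hypothetical helper: FlexGen expects converted numpy weights under
    # <weight_root>/<model>-np/, e.g. ~/opt_weights/opt-175b-np/.
    short_name = model_name.split("/")[-1]  # "facebook/opt-175b" -> "opt-175b"
    folder = os.path.join(os.path.expanduser(weight_root), short_name + "-np")
    if not os.path.isdir(folder):
        raise FileNotFoundError(
            f"Expected converted numpy weights in {folder}; "
            "convert the weights first, then place them there."
        )
    return folder
```

Running such a check before `python3 -m flexgen.flex_opt ...` makes the "not a local folder" error easier to diagnose, since it distinguishes a missing weight directory from a Hugging Face lookup failure.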