langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

HuggingFacePipeline can't load model from local repository #22528

Open DiaQusNet opened 2 months ago

DiaQusNet commented 2 months ago

Checked other resources

Example Code

from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="my_path/MiniCPM-2B-dpo-bf16",
    task="text-generation",
    pipeline_kwargs=dict(
        max_new_tokens=512,
        do_sample=False,
        repetition_penalty=1.03,
    ),
)

Error Message and Stack Trace (if applicable)


ValueError: Loading my_path/MiniCPM-2B-dpo-bf16 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

Description

I followed the official example at https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/, only changing the path to my local repository, where the model files were downloaded from Hugging Face. When I run the code above, the terminal shows:

The repository for my_path/MiniCPM-2B-dpo-bf16 contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/my_path/MiniCPM-2B-dpo-bf16. You can avoid this prompt in future by passing the argument trust_remote_code=True. Do you wish to run the custom code? [y/N] (Press 'Enter' to confirm or 'Escape' to cancel)

No matter what I choose, the final error tells me to execute the configuration file in that repo, which does not exist in the repository.

System Info

langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-huggingface==0.0.1
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.17
platform: Ubuntu 20.04.1
python==3.9

sangam0406 commented 2 months ago

It seems that HuggingFacePipeline.from_model_id() is raising this error because the model you're trying to load ships custom configuration code that must be executed on your local machine. To rectify it, try a different model, or inspect and execute the configuration file locally. If that doesn't work, try changing the code in this manner:

from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="my_path/MiniCPM-2B-dpo-bf16",
    task="text-generation",
    pipeline_kwargs=dict(
        max_new_tokens=512,
        do_sample=False,
        repetition_penalty=1.03,
    ),
    trust_remote_code=True,
)

DiaQusNet commented 2 months ago

After setting trust_remote_code=True, the same error occurred.
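
For anyone hitting the same wall: one workaround to try (a sketch, not verified against this exact model or these library versions) is to bypass from_model_id entirely. Build the transformers pipeline yourself, where trust_remote_code=True is passed to each from_pretrained call directly, and then hand the finished pipeline to HuggingFacePipeline, which also accepts a pre-built pipeline object:

```python
# Workaround sketch (assumes "my_path/MiniCPM-2B-dpo-bf16" is the user's
# local model directory): load tokenizer and model with transformers so
# trust_remote_code reaches every from_pretrained call, then wrap the
# resulting pipeline in LangChain's HuggingFacePipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface import HuggingFacePipeline

model_path = "my_path/MiniCPM-2B-dpo-bf16"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

llm = HuggingFacePipeline(pipeline=pipe)
```

If this path works while from_model_id fails, that would suggest from_model_id is not forwarding trust_remote_code to all of the underlying from_pretrained calls.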