langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

DOC: <Issue related to /v0.2/docs/integrations/llms/huggingface_inference/> #5680

Closed · rookiewxy closed this issue 2 weeks ago

rookiewxy commented 3 months ago

Checklist

Issue with current documentation:

No response

Idea or request for content:

How do I use a model from Hugging Face? The code example only shows gpt2. I downloaded the model from Hugging Face to my local machine and passed its absolute path, but I get a 404 error. The code is as follows:

import { HuggingFaceInference } from "@langchain/community/llms/hf";

export const generateImage = async (data: string) => {
  const model = new HuggingFaceInference({
    model: "E:\langchain\gpt2",
    apiKey: "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",
  });
  const res = await model.invoke("1 + 1 =");
  console.log({ res });
};

dosubot[bot] commented 3 months ago

Hey there, @rookiewxy! I'm here to help you with any bugs, questions, or contributions you might have. Let's tackle this together!

To fix the 404 error, note how the HuggingFaceInference class resolves the model option: it is a client for the hosted Hugging Face Inference API, so model must be a model ID on the Hugging Face Hub (for example "gpt2"), not a path on your local disk. Here is an example of how to use it:

First, ensure you have the necessary packages installed:

npm install @langchain/community @huggingface/inference@2

Then, you can use the following code to call the hosted model:

import { HuggingFaceInference } from "@langchain/community/llms/hf";

export const generateImage = async (data: string) => {
  const model = new HuggingFaceInference({
    model: "E:\\langchain\\gpt2", // Ensure this path points to your locally downloaded model
    apiKey: "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", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
  });
  const res = await model.invoke("1 + 1 =");
  console.log({ res });
};

Passing a local path such as E:\\langchain\\gpt2 as the model is what caused the 404: the request goes to the hosted Inference API under that name, and no model with that ID exists on the Hugging Face Hub, so the server returns 404. Using a valid Hub model ID avoids the error [1][2][3][4].
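If you do want to run the copy you downloaded to E:\langchain\gpt2, it has to be served over HTTP first (for example with Hugging Face's text-generation-inference server) and the client pointed at that URL. Below is a minimal sketch, assuming the endpointUrl option of HuggingFaceInference and a local server listening on http://localhost:8080; both the option name and the URL are assumptions for illustration, not something confirmed in this thread:

import { HuggingFaceInference } from "@langchain/community/llms/hf";

export const generateLocal = async () => {
  const model = new HuggingFaceInference({
    // With endpointUrl set (assumed option), requests go to your own server
    // instead of the hosted Inference API, so no Hub lookup (and no 404) occurs.
    model: "gpt2",
    endpointUrl: "http://localhost:8080", // assumed local text-generation-inference server
  });
  const res = await model.invoke("1 + 1 =");
  console.log({ res });
};

The key point either way is that HuggingFaceInference never reads model weights from disk; it only speaks HTTP, whether to the hosted API or to an endpoint you run yourself.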

To continue talking to Dosu, mention @dosu.