langchain-ai / langchainjs

πŸ¦œπŸ”— Build context-aware reasoning applications πŸ¦œπŸ”—
https://js.langchain.com/docs/
MIT License

Unable to Use intfloat/multilingual-e5-large with HuggingFace Transformers #3334

Closed ishikawa-atsuki-valuesccg closed 11 months ago

ishikawa-atsuki-valuesccg commented 12 months ago

I'm running into an issue while trying to use the intfloat/multilingual-e5-large model in a local TypeScript project. I am using LangChain.js together with Hugging Face Transformers.js, and I hope you can provide some guidance or a solution.

Environment:

Node.js version: 20.9.0
npm version: 10.1.0
LangChain version: 0.0.192
@xenova/transformers version: 2.8.0

Code: I am using the following code snippet:

import { HuggingFaceTransformersEmbeddings } from 'langchain/embeddings/hf_transformers';

const model = new HuggingFaceTransformersEmbeddings({
    modelName: 'intfloat/multilingual-e5-large',
});

// Embed queries
const res = await model.embedQuery(
    'What would be a good company name for a company that makes colorful socks?'
);
console.log(res);

Error: However, when I run this, I receive the following error:

file:///home/my_user/my_project/node_modules/@xenova/transformers/src/utils/hub.js:240
    throw Error(`${message}: "${remoteURL}".`);
          ^

Error: Could not locate file: "https://huggingface.co/intfloat/multilingual-e5-large/resolve/main/onnx/model_quantized.onnx".
    at handleError (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/utils/hub.js:240:11)
    at getModelFile (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/utils/hub.js:473:24)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async constructSession (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/models.js:119:18)
    at async Promise.all (index 1)
    at async BertModel.from_pretrained (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/models.js:760:20)
    at async AutoModel.from_pretrained (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/models.js:3960:20)
    at async Promise.all (index 1)
    at async loadItems (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/pipelines.js:2368:5)
    at async pipeline (file:///home/atsuki/verfication-langchain/node_modules/@xenova/transformers/src/pipelines.js:2314:19)

Node.js v20.9.0

This seems to be an issue with Transformers.js itself, so it may be more appropriate to reach out to the Hugging Face team. Still, if you have any advice on how to use this model locally, or any other suggestions, I would be very grateful for your guidance.

dosubot[bot] commented 12 months ago

πŸ€–

Hello @ishikawa-atsuki-valuesccg!

I'm Dosu, a bot here to assist you with your issues, answer your questions, and help you contribute to this project. While we wait for a human maintainer to join us, I'll be digging into your issue. Please hang tight while I prepare a detailed response.

dosubot[bot] commented 12 months ago

πŸ€–

Hello @ishikawa-atsuki-valuesccg,

I'm Dosu, a bot here to assist you with your queries, help address your bugs, and guide you on how to become a contributor. While we wait for a human maintainer, I'm here to help you with your issue. I'll do my best to provide valuable and timely assistance.

From the error message you provided, the issue is that the @xenova/transformers package (Transformers.js) cannot find an ONNX weights file at onnx/model_quantized.onnx in the intfloat/multilingual-e5-large repository on the Hugging Face Hub. Transformers.js runs models through ONNX Runtime, so it needs ONNX weights in that layout, and the error indicates that the intfloat repository does not provide the quantized ONNX file that Transformers.js requests by default.

As for LangChain.js, the HuggingFaceTransformersEmbeddings class should work with any model that Transformers.js can load, that is, any Hub repository that ships the expected ONNX files. Since intfloat/multilingual-e5-large does not, loading it by name fails.
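
One thing worth checking first is whether an ONNX-converted copy of the model is already published on the Hub. The Xenova namespace hosts converted versions of many popular models; the id Xenova/multilingual-e5-large is used below as an assumption and should be verified before relying on it. A minimal sketch:

import { HuggingFaceTransformersEmbeddings } from 'langchain/embeddings/hf_transformers';

// Assumption: an ONNX-converted mirror of intfloat/multilingual-e5-large exists
// on the Hub under this id; verify it on huggingface.co before using it.
const model = new HuggingFaceTransformersEmbeddings({
    modelName: 'Xenova/multilingual-e5-large',
});

// E5 models expect a 'query: ' or 'passage: ' prefix on their inputs.
const res = await model.embedQuery(
    'query: What would be a good company name for a company that makes colorful socks?'
);
console.log(res.length); // 1024 dimensions for an e5-large model

If no converted copy is available, the download route below may still help.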

Here's a suggestion: You could try to download the model files manually from the Hugging Face Model Hub and load them from your local file system. Here's how you can modify your code to load the model from a local directory:

const model = new HuggingFaceTransformersEmbeddings({
    modelName: '/path/to/your/local/model/directory',
});

Please replace '/path/to/your/local/model/directory' with the actual path to the directory where you saved the model files.
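
If you take this route, Transformers.js also needs to know where local models live, and the local copy must contain ONNX weights (Transformers.js only runs ONNX, so the PyTorch weights would have to be converted first, for example with the conversion script in the transformers.js repository). A minimal sketch, assuming the converted files sit under ./models/intfloat/multilingual-e5-large:

import { env } from '@xenova/transformers';
import { HuggingFaceTransformersEmbeddings } from 'langchain/embeddings/hf_transformers';

// Assumption: the converted model files (including an onnx/ subfolder with
// model_quantized.onnx) were saved under ./models/intfloat/multilingual-e5-large.
env.localModelPath = './models';   // root directory for locally stored models
env.allowRemoteModels = false;     // fail fast instead of falling back to the Hub

const model = new HuggingFaceTransformersEmbeddings({
    modelName: 'intfloat/multilingual-e5-large',
});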

If this doesn't solve your issue, it might be helpful to reach out to the Hugging Face team or the maintainers of the @xenova/transformers package for further assistance, as they might be able to provide more specific guidance on how to use the intfloat/multilingual-e5-large model with their libraries.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ishikawa-atsuki-valuesccg commented 11 months ago

I think I will use Bedrock.

zdarova commented 10 months ago

I'm using Titan (amazon.titan-embed-text-v1) for embeddings from Bedrock and was wondering whether you ever got intfloat/multilingual-e5-large working, so that I could compare the two. Thanks!
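
For reference, the Titan side of that comparison looks roughly like this in LangChain.js; a minimal sketch, assuming AWS credentials come from the default provider chain and that the import path matches your LangChain.js version (in newer releases BedrockEmbeddings lives in @langchain/community):

import { BedrockEmbeddings } from '@langchain/community/embeddings/bedrock';

// Assumption: AWS credentials and permissions for Bedrock are available
// via the default provider chain (environment variables, shared config, etc.).
const titan = new BedrockEmbeddings({
    region: 'us-east-1',
    model: 'amazon.titan-embed-text-v1',
});

const vec = await titan.embedQuery(
    'What would be a good company name for a company that makes colorful socks?'
);
console.log(vec.length); // 1536 dimensions for amazon.titan-embed-text-v1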

ishikawa-atsuki-valuesccg commented 10 months ago

@zdarova I'm sorry, but in the end I didn't try to use intfloat/multilingual-e5-large from JavaScript, so I never got it working at the time. I'm not sure whether it has become usable since then.

qsdhj commented 6 months ago

I tried this:

# HuggingFaceInstructEmbeddings lives in langchain_community in recent releases;
# older versions import it from langchain.embeddings instead.
from langchain_community.embeddings import HuggingFaceInstructEmbeddings

model_name = "intfloat/multilingual-e5-large-instruct"

model_kwargs = {'device': 'cuda'}
query_instruction = "Given a web search query, retrieve relevant passages that answer the query:"
encode_kwargs = {'normalize_embeddings': True}

hf = HuggingFaceInstructEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    # query_instruction=query_instruction
)

And I get this error:

FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\xxx/.cache\\torch\\sentence_transformers\\intfloat_multilingual-e5-large-instruct\\sentence_xlnet_config.json'
juerware commented 3 months ago


I have the same problem right now.