Open oculushut opened 3 months ago
Hey there, @oculushut! Great to see you diving into the depths of LlamaIndex again. Running into a bit of a snag, are we? Let's see what we can do about that pesky `ReadTimeoutError`.
Based on the information you've provided, it seems like the `ReadTimeoutError` you're encountering is due to a network issue on your local setup. The `HuggingFaceEmbedding` class from the `llama_index.embeddings.huggingface` module, which you're using to load the model, does not directly expose a `timeout` parameter that you can adjust. This class uses the `sentence_transformers` library for embedding operations, which may internally use network operations to download models but does not expose direct control over network parameters like `timeout`.
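One practical workaround sketch: the model download under `HuggingFaceEmbedding` ultimately goes through `huggingface_hub`, and recent releases of that library honor an `HF_HUB_DOWNLOAD_TIMEOUT` environment variable (assumption — verify it exists in your installed version). Setting it before any Hugging Face imports raises the per-request timeout without switching classes:

```python
import os

# Raise the download timeout used by huggingface_hub, which
# sentence_transformers relies on to fetch model files.
# ASSUMPTION: HF_HUB_DOWNLOAD_TIMEOUT is honored by recent huggingface_hub
# releases; it is read at import time, so set it BEFORE importing
# huggingface_hub / llama_index.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"  # seconds; the library default is much lower

# Then load the embedding model as usual (model name below is a placeholder):
# from llama_index.embeddings.huggingface import HuggingFaceEmbedding
# embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```

If the timeout is genuinely caused by a slow connection rather than a blocked one, this alone may be enough to get the download through.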
However, there's a different class in the same module, `HuggingFaceInferenceAPIEmbedding`, which does allow adjusting the `timeout` parameter. This class uses the Hugging Face Inference API for network operations. Here's the relevant part of the code for reference:
```python
timeout: Optional[float] = Field(
    default=None,
    description=(
        "The maximum number of seconds to wait for a response from the server."
        " Loading a new model in Inference API can take up to several minutes."
        " Defaults to None, meaning it will loop until the server is available."
    ),
)
```
This `timeout` parameter can be set when initializing a `HuggingFaceInferenceAPIEmbedding` instance. Note, however, that this is not the class you're currently using (`HuggingFaceEmbedding`), and switching to it might require changes to your code.
If you're only experiencing this issue on your local setup and not on Google Colab, it might be worth investigating if there are any network restrictions or issues that could be causing the timeout. This could include firewall settings, network congestion, or ISP-related issues.
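To separate LlamaIndex problems from plain connectivity problems, a quick TCP reachability check (using only the standard library) can tell you whether your machine can reach Hugging Face at all:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A False here points to a firewall, proxy, or VPN problem on your
# machine rather than anything in LlamaIndex itself:
# print(can_connect("huggingface.co", 443))
```

If this succeeds but downloads still time out, the issue is more likely throughput (slow or throttled connection) than outright blocking.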
If you continue to experience this issue, please provide more details about your local setup (e.g., operating system, network configuration, Python version, etc.) and any error messages or logs you have. This will help us better understand the issue and provide more targeted assistance.
You can find more information about the `HuggingFaceEmbedding` and `HuggingFaceInferenceAPIEmbedding` classes in the LlamaIndex repository.
Seems like an issue with connecting to huggingface from your local computer. Maybe due to a VPN or other things?
Question Validation
Question
Hi there - I'm a bit new to all of this, but I was wondering if there was a simple answer for the error I am seeing.
This code:
Generates this error:
I ran the same code on a Colab instance and it seemed to run fine. I could not see any obvious way to increase the ReadTimeout. Any tips?
Thanks!