Closed zhongwentao closed 5 months ago
🤖
Based on the code you provided and the context from the LlamaIndex repository, it seems like the issue might be related to the proxy settings for the API connection. When you use Postman with the proxy turned on, Postman is configured to route its requests through the proxy, which is not automatically the case for your Python code.
In the provided code snippet, there is no mention or handling of proxy configuration for the `HuggingFaceInferenceAPIEmbedding` class or its parent classes. This means that if your network environment requires a proxy to access external APIs, your code will not be able to connect to the Hugging Face API unless it is explicitly configured to use the proxy.
To resolve this issue, you would need to modify the code so that HTTP requests made by the `HuggingFaceInferenceAPIEmbedding` class (or its underlying HTTP client) are configured to use the necessary proxy. This typically involves setting environment variables (`HTTP_PROXY` and `HTTPS_PROXY`) or configuring the HTTP client library you're using (e.g., `requests`, `httpx`) to use a proxy.
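For instance, a minimal sketch of the environment-variable approach (the proxy address is a placeholder; substitute your own):

```python
import os

# Point both variables at your proxy BEFORE any HTTP client is created.
# The address below is a placeholder -- replace it with your real proxy.
os.environ["HTTP_PROXY"] = "http://10.10.1.10:3128"
os.environ["HTTPS_PROXY"] = "http://10.10.1.10:3128"
```

Libraries such as `requests` and `httpx` honour these variables by default, so setting them before instantiating the embedding class may be enough.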
If you're using the `requests` library, for example, you can configure it to use a proxy by setting the `proxies` parameter in your request:
```python
import requests

# Replace these addresses with your actual proxy endpoints.
proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}

response = requests.get('http://example.com', proxies=proxies)
```
However, since your code is likely using an asynchronous HTTP client (given the asynchronous nature of the embedding methods), you'll need to check the documentation for the specific HTTP client library you're using to see how to configure it to use a proxy.
If the Hugging Face Inference API client is based on `httpx`, configuring a proxy might look like this:
```python
import asyncio

import httpx

# Replace these addresses with your actual proxy endpoints.
proxies = {
    'http://': 'http://10.10.1.10:3128',
    'https://': 'http://10.10.1.10:1080',
}

async def main():
    # `async with` must run inside a coroutine.
    async with httpx.AsyncClient(proxies=proxies) as client:
        response = await client.get('http://example.com')

asyncio.run(main())
```
You would need to integrate proxy configuration into the part of your code that makes the HTTP requests to the Hugging Face API. This might involve modifying the `HuggingFaceInferenceAPI` class or wherever the HTTP client is instantiated and used.
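As a workaround while the embedding class has no proxy option, you could call the Inference API's feature-extraction endpoint directly through a proxied `requests` call. This is only a sketch: the model name and proxy addresses are placeholders, not values from your setup.

```python
import requests

# Placeholder model -- any feature-extraction model on the Hub would work.
API_URL = (
    "https://api-inference.huggingface.co/models/"
    "sentence-transformers/all-MiniLM-L6-v2"
)

def embed(texts, token, proxies=None):
    """Return embeddings for `texts`, optionally routed through `proxies`."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"inputs": texts},
        proxies=proxies,  # e.g. {"http": "http://10.10.1.10:3128", "https": "http://10.10.1.10:1080"}
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```

You could then call `embed(["hello world"], token="hf_...", proxies={...})` with your own token and proxy dict, and feed the resulting vectors to LlamaIndex yourself.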
For more details, you can refer to the source code and this issue in the LlamaIndex repository.
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
Try setting the task type:

```python
HuggingFaceInferenceAPIEmbedding(..., task="feature-extraction")
```
I tried that, but it didn't work. I stepped through the code: when `requests` calls the Hugging Face API endpoint, it picks up the global proxy, but I don't know how to pass my proxy configuration to `HuggingFaceInferenceAPIEmbedding`.
Hmm, I don't think the embedding class supports a proxy right now? 🤔
Question Validation
Question
With my proxy enabled, calling the API endpoint from Postman works, but calling it through `HuggingFaceInferenceAPIEmbedding` fails with a connection error. The code is as follows:
Also, requesting the endpoint directly with `requests` works too. Can anyone take a look and tell me whether I'm using it wrong?