There is a problem when you try to make a POST request with an AsyncInferenceClient object: it doesn't handle the case where you're using a proxy, so the POST request never reaches the given URL. I wanted to add the ability to configure a proxy for this POST request.
Here is a proposal of what I did to add this feature:
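The idea can be sketched as follows: store the proxy at client construction time and merge it into the keyword arguments passed to the underlying HTTP call (aiohttp's session.post accepts a proxy= argument). The class and method names below are stand-ins for illustration, not the actual huggingface_hub code:

```python
class AsyncInferenceClientWithProxy:
    """Illustrative stand-in (not the real AsyncInferenceClient) showing how
    a proxy setting could be stored once and forwarded to every POST request."""

    def __init__(self, model=None, proxy=None):
        self.model = model
        self.proxy = proxy  # e.g. "http://localhost:3128"

    def _request_kwargs(self, **kwargs):
        # Merge the configured proxy into the kwargs that would be handed
        # to aiohttp's session.post(url, proxy=...). An explicitly passed
        # proxy (if any) takes precedence over the stored one.
        if self.proxy is not None:
            kwargs.setdefault("proxy", self.proxy)
        return kwargs


client = AsyncInferenceClientWithProxy(proxy="http://localhost:3128")
print(client._request_kwargs(json={"inputs": "hello"}))
# → {'json': {'inputs': 'hello'}, 'proxy': 'http://localhost:3128'}
```

This keeps the change local: callers that never set a proxy see no difference, since the merge is a no-op when self.proxy is None.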
I did this for both the synchronous and the asynchronous InferenceClient, because of the following code in HuggingFaceEndpoint:
Finally, here is an example of how to build a HuggingFaceEndpoint object with a proxy specified for its AsyncInferenceClient:
HuggingFaceEndpoint(endpoint_url=..., ..., server_kwargs={"proxy": "http://localhost:3128"})