YaxinFAN1 opened this issue 1 year ago
@YaxinFAN1, thank you for your interest in our work!!!
I just modified the openai_wrapper to catch openai.error.APIConnectionError.
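For reference, a minimal sketch of what catching that exception in a retry loop might look like, assuming the pre-1.0 openai package; the function name, retry count, and backoff delay are illustrative assumptions, not Factool's actual wrapper:

import time
import openai

# Hypothetical retry helper; the name, retry count, and backoff are
# assumptions for illustration, not Factool's actual implementation.
def chat_with_retry(messages, model="gpt-3.5-turbo", max_retries=3):
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.APIConnectionError:
            # Connection-level failures (often proxy-related) get a
            # simple linear backoff before the error is re-raised.
            if attempt == max_retries - 1:
                raise
            time.sleep(2 * (attempt + 1))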
If this doesn't work, the issue is likely still related to the proxy, as I mentioned in this issue.
We are working diligently to enable users to use open source models rather than the OpenAI API. Stay tuned!
Let me know if you have any more questions!
Thank you!
Hi, thank you for your reply.
You are right, it is indeed the proxy problem.
I hadn't set the proxies correctly before.
When I set them as follows, it worked.
import openai
from factool import Factool

# Route OpenAI API traffic through a local proxy
proxies = {'http': "http://127.0.0.1:7890",
           'https': "http://127.0.0.1:7890"}
openai.proxy = proxies

# Initialize a Factool instance with the specified keys.
# foundation_model could be either "gpt-3.5-turbo" or "gpt-4"
factool_instance = Factool("gpt-3.5-turbo")

inputs = [
    {
        "prompt": "Introduce Graham Neubig",
        "response": "Graham Neubig is a professor at MIT",
        "category": "kbqa"
    },
]
response_list = factool_instance.run(inputs)
print(response_list)
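As an aside, an alternative that may also work is setting the standard proxy environment variables, since the requests library (which the pre-1.0 openai client uses under the hood) honors them automatically; a rough sketch:

import os

# Alternative proxy setup via environment variables; requests picks
# these up without any openai-specific configuration.
os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"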
Thanks for your awesome contributions!!!
Hey @YaxinFAN1 @EthanC111 - why proxy the OpenAI base?
To support local models like llama2, wouldn't they need to be deployed on GPUs / deployment providers, which have their own client libraries?
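For what it's worth, one common pattern for local models is to serve them behind an OpenAI-compatible endpoint (e.g., vLLM or FastChat) and repoint the client's base URL instead of proxying; a hedged sketch, where the endpoint below is a hypothetical local deployment, not something Factool ships:

import openai

# Point the pre-1.0 openai client at a local OpenAI-compatible server.
# The URL is a hypothetical local endpoint for illustration only.
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "EMPTY"  # local servers typically ignore the key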
Hello, I encountered the following error.
I ran the following code and it worked.
I also tried the following solutions, and they all failed: https://zhuanlan.zhihu.com/p/611080662 and https://blog.csdn.net/weixin_43937790/article/details/131121974
So, how can I solve this problem?