Closed: Extremys closed this 11 months ago
Hey @Extremys, thank you for your kind words, and yes indeed. By self-hosting, do you mean you want an endpoint you can hit for inference? We have a hosted inference endpoint you can use:
import openai

# `messages` follows the OpenAI chat format; this is just a sample prompt.
messages = [{"role": "user", "content": "Copy all text files into a backup directory."}]

response = openai.ChatCompletion.create(
    api_base="http://34.132.127.197:8000/v1",
    model="gorilla-bash-v0",
    temperature=0.1,
    messages=messages,
)
print(response["choices"][0]["message"]["content"])
Hope this helps. Let me know if you have any other questions!
No, he means he wants the model plus its API implementation, so he can host it in his own environment.
Yes, exactly :) with the API service included.
Hello, great CLI app! Would it be possible to release the inference API source code, if not already done, so that the inference API can be self-hosted? Regards.
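For anyone else looking to self-host: one common approach (a sketch, not Gorilla's official setup) is to serve the model weights behind an OpenAI-compatible endpoint using vLLM, so the client snippet above works unchanged after pointing `api_base` at your own machine. The model path below is a placeholder; substitute the actual Gorilla checkpoint you have locally or on the Hugging Face Hub.

```shell
# Install vLLM, then launch its OpenAI-compatible server.
pip install vllm

# --model is a placeholder; point it at your Gorilla checkpoint.
python -m vllm.entrypoints.openai.api_server \
    --model /path/to/gorilla-checkpoint \
    --port 8000
```

Once the server is up, set `api_base="http://localhost:8000/v1"` in the client code and keep the rest the same.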