Closed: PierreMesure closed this issue 2 months ago
Hi @PierreMesure, thank you for creating the issue. Sorry, we don't support external contributions to the docs at this time (hopefully soon).
This is a great example of a Studio template. We feature LitServe-based templates here. If you're interested in creating a template for your code, similar to the LlamaIndex RAG API but with OpenAISpec, it would be a great candidate to be featured in the docs.
Thank you Aniket, that sounds like a good idea! I'm not sure how to create a template, but I guess the first step is to create an account. I'll try in the coming days. 🙂
EDIT: I tried, but I couldn't get through the onboarding process (invalid phone number?). I don't need a Studio for this; I just wanted to add documentation for LitServe, which I'm running on my own machines. So I guess I'll wait until there's another way to contribute.
@PierreMesure Could you please explain how you handle tool calls in the predict function? It seems that predict only returns a text stream.
I haven't tried that. Hope you can publish your code when you've got it working 🙂.
🚀 Feature
A new page of documentation explaining how to expose a LlamaIndex RAG pipeline through an OpenAI-compatible API.
Motivation
It took me a good six hours to combine these two tutorials, LlamaIndex RAG API and OpenAI spec, to expose my LlamaIndex app behind an OpenAI-spec API. Maybe I'm a bit slow, but I think this should be a pretty common use case, so I wanted to write a new page in LitServe's documentation. However, I couldn't find the docs' source code here, so I'm writing an issue instead.
Code
server.py
simple_llm.py
test.py
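The files above are attached to the original issue and not reproduced here. As a rough, stdlib-only sketch of the glue code such a server needs, assuming LitServe's OpenAISpec hands `predict` an OpenAI-style chat message list and expects a chat-completion-shaped reply (the helper names below are my own, not part of LitServe or LlamaIndex):

```python
import json

def messages_to_query(messages):
    """Collapse an OpenAI-style message list into one query string for the
    RAG engine: system messages become context, the last user turn is the
    question. (Hypothetical helper, not a library function.)"""
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    user_turns = [m["content"] for m in messages if m["role"] == "user"]
    question = user_turns[-1] if user_turns else ""
    return f"{system}\n{question}".strip() if system else question

def to_chat_completion(text, model="rag-server"):
    """Wrap a RAG answer in a minimal chat.completion-shaped payload,
    mirroring the OpenAI response schema. (Hypothetical helper.)"""
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }

messages = [
    {"role": "system", "content": "Answer from the indexed docs only."},
    {"role": "user", "content": "What does LitServe's OpenAISpec do?"},
]
print(messages_to_query(messages))
print(json.dumps(to_chat_completion("It serves an OpenAI-compatible API.")))
```

In the actual server.py these two translations would sit inside the `predict` and response-encoding steps of a LitServe API class run with the OpenAI spec; the point of the requested docs page is exactly that neither tutorial shows this wiring end to end.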