Closed ShelbyJenkins closed 4 months ago
FYI, I wrote a backend to do this, and ended up doing enough to make it usable for others.
Just published it here: https://github.com/ShelbyJenkins/llm_client/
IMO, it makes more sense to integrate these features into this library rather than duplicating the code.
Hello, thank you for your appreciation.
The challenge here is maintenance; I'd rather stick with just one provider, which is OpenAI. If multiple providers do converge on the same APIs, then this crate should work with the changes in the open PR https://github.com/64bit/async-openai/pull/125
Happy to have this crate!
I have it working with llama.cpp in server mode documented here: https://github.com/ggerganov/llama.cpp/tree/master/examples/server. Just create the client like:
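The snippet itself appears to have been lost here; a minimal sketch of what the client setup might look like with async-openai's `OpenAIConfig` (the localhost address and dummy API key are assumptions, matching llama.cpp's example server default of port 8080):

```rust
use async_openai::{config::OpenAIConfig, Client};

fn main() {
    // Point the client at a local llama.cpp server instead of api.openai.com.
    // The address below is an assumption; adjust it to wherever the server
    // from llama.cpp's examples/server is listening.
    let config = OpenAIConfig::new()
        .with_api_base("http://localhost:8080/v1")
        // llama.cpp does not validate the key, but a value must be set.
        .with_api_key("sk-no-key-required");
    let _client = Client::with_config(config);
}
```

From there, requests go through the usual `client.chat()` builder, since the server mimics OpenAI's chat completions API.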
However, it only works with llama.cpp's `/v1/chat/completions` endpoint, and that endpoint lacks some features (notably logit bias). The `/completion` endpoint, with all the extra features, does not work.

I don't know if this is a tenable long-term solution, but since the Rust llama.cpp crates haven't been updated for months and the llama.cpp library is moving very quickly, I was reticent to rely on crates that would require overhauls for every upstream change. This approach seemed like it would be stable long term, since the local server's API probably won't change as often.
I think this crate has the potential to be a good base for building projects that rely on multiple APIs as the industry converges on a standard. I'm interested to hear your thoughts, and if it's viable, I'm happy to contribute anything I create.