Closed barakplasma closed 1 year ago
Very cool, this looks quite useful. I can't fully test this (I do have llama2 on one machine but don't have LocalAI), but LGTM :) Thanks for the addition!
By the way, this also works with the litellm OpenAI proxy, and with ollama + mistralai.
Hey @barakplasma, any tweaks required for the litellm proxy besides the `int(time.time)` patch?
@krr

> Hey @barakplasma, any tweaks required for the litellm proxy besides the `int(time.time)` patch?
I don't think any other changes are needed. In my local patch I wrapped all the other occurrences of `time.time` with `int()`, but doing it once at the top of `main.py` should be enough.
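For illustration, a minimal sketch of what such a top-of-`main.py` patch might look like (the actual patch isn't shown in this thread, so the exact shape is an assumption): replace `time.time` with a wrapper that truncates to an integer, since some OpenAI-compatible proxies reject fractional `created` timestamps.

```python
import time

# Hypothetical sketch of the patch discussed above: wrap time.time so every
# later call returns an int instead of a float. Placed at the top of main.py,
# this covers all downstream uses without touching each call site.
_orig_time = time.time
time.time = lambda: int(_orig_time())
```

After this runs, any code that calls `time.time()` receives an integer timestamp.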
By changing the OpenAI base URL, we can use LocalAI to summarize articles instead of ChatGPT. This also lets the user change the model (gpt-4, gpt-3.5-turbo, or openllama).
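A config using these options might look like the sketch below. The key names `openai_base_url` and `openai_model` come from this PR; the URL, port, and `/v1` path are assumptions about a typical local OpenAI-compatible endpoint.

```yaml
# Hypothetical config sketch; only the key names are taken from this PR.
openai_base_url: http://localhost:8080/v1  # assumed LocalAI endpoint
openai_model: gpt-3.5-turbo                # or gpt-4 / openllama
```

Leaving both keys unset should preserve the existing ChatGPT behavior.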
Also bumps the dependencies: github.com/sashabaranov/go-openai to v1.14.2 and github.com/spf13/viper to v1.16.0.
My Go linter incidentally removed a bunch of trailing spaces.
I tested with and without the `openai_base_url` and `openai_model` config values to make sure existing configs keep working.