di-sukharev / opencommit

Generate conventional git commit messages with AI in 1 second 🤯🔫
https://www.npmjs.com/package/opencommit
MIT License

[Feature]: local model support #236

Open danil-iglu opened 1 year ago

danil-iglu commented 1 year ago

Description

In some organizations it is prohibited to send code to third parties.

Suggested Solution

Support for a dockerized Llama 2 model running locally?

Alternatives

No response

Additional Context

No response

malpou commented 1 year ago

Is there a specific (and stable) setup you have in mind for running the model in Docker? Then I'll try to play around with that when time allows and try to get opencommit running against Llama 2.

BR

di-sukharev commented 1 year ago

@malpou I'm constantly thinking about adding local Llama support, this would be just killer.

I imagine, e.g., setting `oco config set OCO_MODEL=llama_2`, and then opencommit switches to local Llama out of the box. If `OCO_MODEL=gpt-4`, then we continue to call the OpenAI API.
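
Something like this (just a sketch; `llama_2` is a hypothetical value here, `gpt-4` is the existing one):

```sh
# Hypothetical: switch opencommit to a local Llama 2 (value not implemented yet)
oco config set OCO_MODEL=llama_2

# Existing behavior: keep calling the OpenAI API
oco config set OCO_MODEL=gpt-4
```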

I suggest taking the smartest and most lightweight model (so the download time isn't more than ~20-30 sec). Since the package is installed and updated globally once every 2-3 months, waiting 30 sec once in a while is OK (imo).

malpou commented 1 year ago

Yes, that's exactly my thought.

Haven't gotten around to playing with Llama 2 yet. Is there a standard way to run it in Docker? As far as I can see, there are just multiple smaller projects. If you can point me in the right direction on what we'd like to use for Llama locally, I can do the rest of the implementation.

di-sukharev commented 1 year ago

I don't know of any setup, need to google it.


malpou commented 1 year ago

@di-sukharev I've found this project that I'll try to get running, and then see how it is to interface with:

https://github.com/go-skynet/LocalAI
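
If it pans out, the setup would look roughly like this (image name, port, and models path taken from the LocalAI README at the time; treat it as a sketch, not a tested recipe):

```sh
# Run LocalAI in Docker; it exposes an OpenAI-compatible API on port 8080
docker run -p 8080:8080 -v $PWD/models:/models \
  quay.io/go-skynet/local-ai:latest --models-path /models

# Sanity check: list the models it can serve
curl http://localhost:8080/v1/models
```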

pypeaday commented 1 year ago

I would love to see local model support for this!

Edit: I've seen Simon Willison play around a ton with local models, and although I don't have anything off the top of my head, I expect he has helpful blog posts that could guide this feature.

Edit 2: Found this in my stars for playing with LLMs locally: https://github.com/nat/openplayground

Breinich commented 11 months ago

Me too!

Recently I came across the Ollama implementation; maybe it would be helpful for you: https://ollama.ai/.
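
Its basic flow is roughly this (commands per the Ollama docs; the exact model tag may differ):

```sh
# Download a model once, then serve a local HTTP API on port 11434
ollama pull llama2
ollama serve

# Generate text against the local model
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Write a conventional commit message"}'
```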

Edit: After checking your PR draft, LocalAI seems more robust, or at least to have a bigger community, so keeping that is a good idea for now. Only if your issue doesn't get fixed would this be a good alternative to try.

github-actions[bot] commented 10 months ago

Stale issue message

di-sukharev commented 7 months ago

We now support Ollama.

gudlyf commented 6 months ago

@di-sukharev I tried with the `AI_PROVIDER` flag and without any OpenAI key set, but the application errors out saying I need to have the key set. If I set the key to something arbitrary (e.g., `sk-blahblahblah...`), it still seems to try to call out to OpenAI. Using v3.0.11.

(Update: I see the issue. The documentation needs to be updated to state that you need to set `OCO_AI_PROVIDER` to `ollama` in the configuration file for it to work, not an `AI_PROVIDER` env var.)
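
For anyone landing here later, the working setup per that update looks like this (config key from the comment above; check the README for your version):

```sh
# Select Ollama instead of OpenAI in opencommit's config
oco config set OCO_AI_PROVIDER=ollama

# Then stage changes and generate a commit message against the local model
git add .
oco
```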