twinnydotdev / twinny

The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
https://twinny.dev
MIT License
2.36k stars 130 forks

Use two Ollama servers, one for chat and one for FIM, to improve twinny performance #163

Closed sthufnagl closed 3 months ago

sthufnagl commented 4 months ago

Hi, this is my first feature request... sorry for the unusual format. Next time I will do better! :-)

Situation: I have two separate Ollama servers that I could use with the great twinny VS Code extension. It would be good for twinny's performance to use both of them.

Request: Would it be possible to use two separate Ollama servers, one for chat and one for FIM? The twinny settings already let us choose two different Ollama models. Why not make the Ollama URL configurable per model as well?

Helping Hand: I have some experience with JS/TS. Where would we need to change the code to use different Ollama URLs? It shouldn't be too complicated to implement.
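To sketch the idea: the extension could resolve a base URL per request type, falling back to a single shared URL when no dedicated one is set. This is a minimal illustration only; the setting names (`apiUrl`, `chatApiUrl`, `fimApiUrl`) are hypothetical, not twinny's actual configuration keys.

```typescript
// Hypothetical sketch: pick a separate Ollama base URL per request type.
// Setting names are illustrative assumptions, not twinny's real keys.

type RequestKind = "chat" | "fim"

interface TwinnySettings {
  apiUrl: string       // shared fallback URL
  chatApiUrl?: string  // optional dedicated chat server
  fimApiUrl?: string   // optional dedicated FIM server
}

function resolveApiUrl(kind: RequestKind, settings: TwinnySettings): string {
  if (kind === "chat" && settings.chatApiUrl) return settings.chatApiUrl
  if (kind === "fim" && settings.fimApiUrl) return settings.fimApiUrl
  return settings.apiUrl
}

// Example: chat goes to a dedicated server, FIM falls back to the shared URL.
const settings: TwinnySettings = {
  apiUrl: "http://localhost:11434",
  chatApiUrl: "http://192.168.1.10:11434"
}
console.log(resolveApiUrl("chat", settings)) // http://192.168.1.10:11434
console.log(resolveApiUrl("fim", settings))  // http://localhost:11434
```

In a real VS Code extension these values would come from `workspace.getConfiguration()`, but the fallback logic would look much the same.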

Thanks in advance,

Steve

rjmacarthy commented 4 months ago

Hey @sthufnagl thanks for the request. I contemplated this before myself, but never got around to making it happen.

Feel free to make a PR for it into the development branch.

The PR should take care of the following:

I think that's it.

Many thanks,

sthufnagl commented 4 months ago

Hi @rjmacarthy, thanks for your encouraging words.

Thx

Steve

rjmacarthy commented 4 months ago

No worries @sthufnagl , let's reopen it anyway because it's a good suggestion and we can implement it in a future release.

AntonKrug commented 4 months ago

I'm wondering if I would have enough knowledge to attempt this one.

rjmacarthy commented 3 months ago

I have now implemented this in 3.10.0, which makes every setting independent and configurable.
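With every setting independent, a user's `settings.json` could point chat and FIM at different Ollama hosts. The keys below are illustrative assumptions about the naming scheme, not the extension's exact setting IDs:

```jsonc
// Hypothetical settings.json fragment: two Ollama servers,
// one for chat and one for FIM. Key names are assumed, not verified.
{
  "twinny.chatApiHostname": "192.168.1.10",
  "twinny.chatApiPort": 11434,
  "twinny.fimApiHostname": "localhost",
  "twinny.fimApiPort": 11434
}
```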

rjmacarthy commented 3 months ago

This is now ready.