TabbyML / tabby

Self-hosted AI coding assistant
https://tabby.tabbyml.com/

Tabby VSCode Extension: Autostart Tabby Server #624

Open matthiasgeihs opened 10 months ago

matthiasgeihs commented 10 months ago

Context: I use the Tabby VSCode extension with a local Tabby server. Currently, when I start VSCode and the Tabby server is not running, the extension reminds me of that via a yellow indicator on its status bar icon. In that case I open a terminal, start the Tabby server manually, and the extension is happy and works as expected. When I close VSCode and no longer need the server, I go to the terminal window and shut the Tabby server down manually.

What I would like: It would be nice if the VSCode extension started the Tabby server automatically when it detects that the server is not running. Additionally, when I close VSCode, it would shut the server down, provided no other applications that rely on it are still running.


Please reply with a 👍 if you want this feature.

matthiasgeihs commented 10 months ago

Implementation idea: The functionality described above could be realized as follows.

Create a background service that runs at all times with minimal resource requirements and receives all Tabby requests. If no Tabby inference server is running, the service boots one up. The service periodically checks whether any inference requests arrived within a specified time interval; if there was no activity, it shuts the inference server down and releases its resources.

wsxiaoys commented 10 months ago

I believe this problem is specific to Tabby's local deployment scenario. For remote deployment, it is automatically managed by cloud vendors through auto-scaling.

A potential solution to this issue could involve a straightforward Python script utilizing asgi_proxy. The script would create a new Tabby process whenever a request is made. After a certain period of inactivity, such as half an hour without any requests, the script would terminate the process.

This script could be deployed as a system daemon or in a similar manner.
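A minimal sketch of that idea, assuming asgi-proxy-lib's `asgi_proxy(backend_url)` app factory and a `tabby` binary on PATH; the model name, port, and 30-minute idle timeout are illustrative, not Tabby defaults:

```python
# Sketch only - not the actual experimental/supervisor implementation.
# Assumes asgi-proxy-lib exposes `asgi_proxy(backend_url)` and that the
# `tabby` binary is on PATH; model, port and timeout are illustrative.
import socket
import subprocess
import threading
import time

from asgi_proxy import asgi_proxy

TABBY_CMD = ["tabby", "serve", "--model", "StarCoder-1B"]  # example invocation
HOST, PORT = "127.0.0.1", 8080                             # where `tabby serve` listens
IDLE_TIMEOUT = 30 * 60                                     # stop after 30 min without requests

proxy = asgi_proxy(f"http://{HOST}:{PORT}")
process = None
last_request = time.time()
lock = threading.Lock()


def wait_for_port(timeout=180):
    """Block until the Tabby server accepts TCP connections (or give up)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((HOST, PORT), timeout=1):
                return
        except OSError:
            time.sleep(1)


def ensure_server():
    """Boot `tabby serve` lazily, on the first request that finds it down."""
    global process
    with lock:
        if process is None or process.poll() is not None:
            process = subprocess.Popen(TABBY_CMD)
            wait_for_port()  # blocking here is acceptable for a sketch


def reaper():
    """Shut the server down once no request has arrived for IDLE_TIMEOUT."""
    global process
    while True:
        time.sleep(60)
        with lock:
            if process is not None and time.time() - last_request > IDLE_TIMEOUT:
                process.terminate()
                process.wait()
                process = None


threading.Thread(target=reaper, daemon=True).start()


async def app(scope, receive, send):
    """ASGI entry point: record activity, make sure Tabby is up, then proxy."""
    global last_request
    if scope["type"] == "http":
        last_request = time.time()
        ensure_server()
    await proxy(scope, receive, send)
```

Run it with something like `uvicorn supervisor:app --port 9090` (assuming the file is saved as supervisor.py) and point clients at port 9090 instead of the Tabby port.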

wsxiaoys commented 10 months ago

added a quick and basic implementation at https://github.com/TabbyML/tabby/pull/630/files.

matthiasgeihs commented 10 months ago

> added a quick and basic implementation at https://github.com/TabbyML/tabby/pull/630/files.

Cool, works for me. I added a few more options and changed the default startup behavior (the server is not started until an incoming inference request arrives): https://github.com/TabbyML/tabby/compare/add-tabby-supervisor...matthiasgeihs:tabby:add-tabby-supervisor

itlackey commented 10 months ago

This is a cool idea! I use specific switches to launch the Tabby container. It would be great if I could specify the startup command, or maybe a path to a docker compose file, in the Tabby configuration file.

wsxiaoys commented 10 months ago

> This is a cool idea! I use specific switches to launch the Tabby container. It would be great if I could specify the startup command, or maybe a path to a docker compose file, in the Tabby configuration file.

Should be something easy to hack on top of https://github.com/TabbyML/tabby/blob/main/experimental/supervisor/app.py - replace the startup/stop commands with `docker-compose up` and `docker-compose down`; a sketch of those two hooks follows.
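```python
# Hypothetical start/stop hooks for a docker-compose-managed Tabby, intended
# to replace the `tabby serve` subprocess handling in the supervisor script.
# The compose file path is illustrative; adjust it to your deployment.
import subprocess

COMPOSE_FILE = "/opt/tabby/docker-compose.yml"


def start_tabby():
    # Bring the Tabby container(s) up in the background.
    subprocess.run(["docker-compose", "-f", COMPOSE_FILE, "up", "-d"], check=True)


def stop_tabby():
    # Stop and remove the container(s) once the supervisor decides they are idle.
    subprocess.run(["docker-compose", "-f", COMPOSE_FILE, "down"], check=True)
```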

limdingwen commented 7 months ago

Is it possible or desirable to bundle the Tabby server into the VSCode extension for simple local usage?

wsxiaoys commented 7 months ago

> Is it possible or desirable to bundle the Tabby server into the VSCode extension for simple local usage?

The only platform where bundling makes sense is probably the Apple M-series. However, given how easily one can install Tabby with homebrew, I feel it doesn't add value to bundle it.

bubundas17 commented 1 month ago

Cancelled my Copilot subscription and am using Tabby full time. This would be a really useful feature.
Ollama does this very well by default. I don't code the whole day; when I'm playing games I have to stop the Tabby Docker container manually. It would be very helpful if Tabby automatically offloaded models when not in use, like Ollama does.

Or, if possible, can we offload model inference to the Ollama API?

wsxiaoys commented 1 month ago

> Or, if possible, can we offload model inference to the Ollama API?

Yes - this has been supported since 0.12; see https://tabby.tabbyml.com/docs/administration/model/#ollama for a configuration example.

bubundas17 commented 1 month ago

> Or, if possible, can we offload model inference to the Ollama API?
>
> Yes - this has been supported since 0.12; see https://tabby.tabbyml.com/docs/administration/model/#ollama for a configuration example.

Chat still doesn't support the Ollama HTTP API.

My config:

```toml
[model.completion.http]
kind = "ollama/completion"
model_name = "codestral:22b-v0.1-q6_K"
api_endpoint = "http://10.66.66.3:11434"
prompt_template = "[SUFFIX]{suffix}[PREFIX]{prefix}"  # Example prompt template for CodeLlama model series.

[model.chat.http]
kind = "ollama/completion"
model_name = "codestral:22b-v0.1-q6_K"
api_endpoint = "http://10.66.66.3:11434"
#api_key = "secret-api-key"
```

(screenshot attached)

wsxiaoys commented 1 month ago

Have you tried https://tabby.tabbyml.com/docs/administration/model/#openaichat ?
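For reference, the chat section would then look something like the snippet below. The `/v1` suffix assumes Ollama's OpenAI-compatible endpoint lives under that path - that part is an assumption here rather than something confirmed in this thread, so double-check it against the linked docs.

```toml
# Hedged example: pointing openai/chat at Ollama's OpenAI-compatible API.
# The /v1 path and the model name are assumptions; adjust to your deployment.
[model.chat.http]
kind = "openai/chat"
model_name = "codestral:22b-v0.1-q6_K"
api_endpoint = "http://10.66.66.3:11434/v1"
```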

bubundas17 commented 1 month ago

> Have you tried https://tabby.tabbyml.com/docs/administration/model/#openaichat ?

Is the API the same for OpenAI and local Ollama? Let me check real quick.

bubundas17 commented 1 month ago

(screenshot attached)

Tabby started, but chat is not working with Ollama. My current config is:

```toml
[model.completion.http]
kind = "ollama/completion"
model_name = "codestral:22b-v0.1-q6_K"
api_endpoint = "http://10.66.66.3:11434"
prompt_template = "[SUFFIX]{suffix}[PREFIX]{prefix}"  # Example prompt template for CodeLlama model series.

[model.chat.http]
kind = "openai/chat"
model_name = "codestral:22b-v0.1-q6_K"
api_endpoint = "http://10.66.66.3:11434"
```