Closed: anitsch-scs closed this issue 3 months ago.
For the Docker deployment, we have set the environment variable `TABBY_ROOT=/data`. To ensure that your `config.toml` takes effect, you need to copy it to `/data`. You can use the following Docker Compose configuration:
```yaml
services:
  tabby:
    container_name: tabby
    image: tabbyml/tabby:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
    command: serve
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
```
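In other words, with this compose file the host directory `./data` is what shows up as `/data` inside the container. A minimal sketch of placing the file and recreating the service, assuming the layout implied by the compose file above:

```sh
# Assumed host layout: docker-compose.yml next to a ./data directory.
mkdir -p ./data
cp config.toml ./data/config.toml      # visible as /data/config.toml in the container
docker compose up -d --force-recreate tabby
```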
Thank you for the quick answer. I agree that updating the documentation for HTTP-based endpoints would be nice. The problem is not solved for me, though.
I would expect the server to log errors or warnings to console if it can't reach or authenticate with the Mistral endpoint. Are there other places that I can check the logs?
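For reference, the server's console output can be inspected through Docker itself (container name taken from the compose file above):

```sh
docker logs -f tabby        # equivalently: docker compose logs -f tabby
```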
Logging won't be of much help in your case, because you're not mounting `config.toml` to the correct location; therefore, Tabby won't be able to connect to Codestral at all.
The documentation label pertains to updating the Docker configuration's `TABBY_ROOT` setup and is not directly related to the HTTP endpoint.
I have adjusted the compose file as instructed. There's no indication in the `docker logs` output of whether that changed anything.
Assuming that it did, there seems to be some other problem that I'm unable to debug without logs.
Or should loading a `config.toml` print something to the console, so the absence of output tells me it didn't work?
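One way to rule out the mount itself, assuming the compose file above, is to check that the file is actually visible inside the running container:

```sh
docker exec tabby ls -l /data/config.toml
docker exec tabby cat /data/config.toml
```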
The System tab should display the model as `Remote`, in contrast to models started locally:

Local models: (screenshot)
Remote models: (screenshot)
That seems to have worked then, thanks!
Is there any way to check whether the Tabby server is able to connect to the API with the URL/token that I provided?
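One way to verify the URL/token independently of Tabby is to call the Mistral API directly. The paths and request fields below are based on Mistral's public API docs, so treat this as a sketch and adjust the model name if needed:

```sh
# List the models the key has access to (a valid key should return HTTP 200 with a JSON list).
curl -s https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer $MISTRAL_API_KEY"

# Minimal fill-in-the-middle request against Codestral.
curl -s https://api.mistral.ai/v1/fim/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "codestral-2405", "prompt": "def fib(n):", "suffix": "", "max_tokens": 16}'
```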
There may be an error in this documentation: https://tabby.tabbyml.com/docs/administration/model/#mistral--codestral
According to La Plateforme, the endpoints for Codestral are:
The documentation on their end seems to be outdated as well: https://docs.mistral.ai/capabilities/code_generation/#integration-with-tabby
This config seems to be working:
```toml
[model.completion.http]
kind = "mistral/completion"
api_endpoint = "https://api.mistral.ai"
api_key = "<general api key, not codestral api key>"
model_id = "codestral-2405"
```
It was mostly a confusion caused by the (optional) separate endpoint for Codestral, then. I found the differentiation between the two endpoints here: https://docs.mistral.ai/capabilities/code_generation/#codestral
@anitsch-scs thanks for your feedback. I also got it running with this config. Did you also get the chat working with Mistral Codestral?
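For the chat side, a sketch of what the extra section might look like, mirroring the completion block above; the `mistral/chat` kind and the field names are assumptions based on the Tabby docs page linked earlier, so double-check them against your Tabby version:

```sh
# Hypothetical: append an HTTP chat model to the mounted config, then recreate the container.
cat >> ./data/config.toml <<'EOF'

[model.chat.http]
kind = "mistral/chat"
api_endpoint = "https://api.mistral.ai"
api_key = "<general api key, not codestral api key>"
model_id = "codestral-2405"
EOF
docker compose up -d --force-recreate tabby
```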
Hi, I'm using Docker without a GPU, and I set `config.toml` correctly; the System tab looks just like you showed here, but I still don't receive anything when I try to chat with the model using the web app.
Describe the bug
Completion with Codestral via HTTP API does not work. Neither of the `config.toml` setups I tried worked.
My docker-compose.yml:
I can log in to the web UI and connect the Tabby plugin in IntelliJ with Tabby.
Symptoms:
Information about your version
0.14.0