TabbyML / tabby

Self-hosted AI coding assistant
https://tabbyml.com

Codestral API Integration not working #2788

Closed: anitsch-scs closed this issue 3 months ago

anitsch-scs commented 3 months ago

Describe the bug

Completion with Codestral via the HTTP API does not work.

Neither with this config.toml:

[model.completion.http]
kind = "mistral/completion"
api_endpoint = "https://api.mistral.ai"
api_key = "<my_key>"

nor with this config.toml:

[model.completion.http]
kind = "mistral/completion"
api_endpoint = "https://codestral.mistral.ai/v1/fim/completions"
api_key = "<my_key>"

My docker-compose.yml:

services:
  tabby:
    container_name: tabby
    image: tabbyml/tabby:latest
    ports:
      - 8080:8080
    volumes:
      - ./data:/data
      - ./config.toml:/root/.tabby/config.toml
    command: serve
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]

I can log in to the web UI and connect the Tabby plugin in IntelliJ to Tabby.

Symptoms: (screenshot)

Information about your version: 0.14.0

wsxiaoys commented 3 months ago

For the Docker deployment, we have set the environment variable TABBY_ROOT=/data.

To ensure that your config.toml takes effect, you need to copy config.toml to /data. You can use the following Docker Compose configuration:

services:
  tabby:
    container_name: tabby
    image: tabbyml/tabby:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
    command: serve
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
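
Since TABBY_ROOT=/data and ./data is bind-mounted at /data, placing config.toml inside ./data makes it visible at the path Tabby actually reads. A minimal sketch, assuming the compose file and config.toml live in the current directory:

# Put config.toml where TABBY_ROOT=/data will find it inside the container
cp ./config.toml ./data/config.toml

# Restart so Tabby re-reads the config at startup
docker compose restart tabby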
anitsch-scs commented 3 months ago

Thank you for the quick answer. I agree that updating the documentation for HTTP-based endpoints would be nice. The problem is still not solved for me, though.

I would expect the server to log errors or warnings to the console if it can't reach or authenticate with the Mistral endpoint. Are there other places where I can check logs?
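
For reference, the container's stdout (where Tabby writes its logs) can be tailed with the standard Docker commands, assuming the container name from the compose file above:

docker logs -f tabby
# or, when using Docker Compose:
docker compose logs -f tabby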

wsxiaoys commented 3 months ago

Logging won't be of much help in your case because you're not mounting config.toml to the correct location; therefore, Tabby won't be able to connect to Codestral at all.

The documentation label pertains to updating the Docker configuration's TABBY_ROOT setup and is not directly related to the HTTP endpoint.

anitsch-scs commented 3 months ago

I have adjusted the compose file as instructed. There's no indication in the docker logs output of whether that changed anything. Assuming that it did, there seems to be some other problem that I'm unable to debug without logs. Or should loading a config.toml print something to the console, so that the absence of output tells me it didn't work?

wsxiaoys commented 3 months ago

The System tab should display the model as Remote, compared to models started locally:

Local models: (screenshot)

Remote models: (screenshot)
anitsch-scs commented 3 months ago

That seems to have worked then, thanks! (screenshot)

Is there any way to check whether the Tabby server is able to connect to the API with the URL/token that I provided?
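
One way to validate the URL and token independently of Tabby is to call Mistral's FIM endpoint directly with curl; a minimal sketch, assuming a general La Plateforme key exported as MISTRAL_API_KEY:

curl https://api.mistral.ai/v1/fim/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "codestral-2405", "prompt": "def add(", "suffix": "", "max_tokens": 16}'

A JSON completion in the response confirms the endpoint and key are good; a 401 points at the key.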

anitsch-scs commented 3 months ago

There may be an error in this documentation: https://tabby.tabbyml.com/docs/administration/model/#mistral--codestral

According to La Plateforme, the endpoints for Codestral are: (screenshot)

The documentation on their end seems to be outdated as well: https://docs.mistral.ai/capabilities/code_generation/#integration-with-tabby

anitsch-scs commented 3 months ago

This config seems to be working:

[model.completion.http]
kind = "mistral/completion"
api_endpoint = "https://api.mistral.ai"
api_key = "<general api key, not codestral api key>"
model_id = "codestral-2405"

It was mostly confusion caused by the (optional) separate endpoint for Codestral, then. I found the differentiation between the two endpoints here: https://docs.mistral.ai/capabilities/code_generation/#codestral
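
For completeness: per the Mistral docs linked above, the dedicated codestral.mistral.ai endpoint expects a Codestral-specific key rather than the general La Plateforme key. A hedged sketch of testing that endpoint directly, assuming such a key exported as CODESTRAL_API_KEY:

curl https://codestral.mistral.ai/v1/fim/completions \
  -H "Authorization: Bearer $CODESTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "codestral-latest", "prompt": "def add(", "suffix": "", "max_tokens": 16}'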

itstueben commented 2 months ago

@anitsch-scs Thanks for your feedback. I also got it running with this config. Did you also get chat working with Mistral Codestral?

LeoXu1996 commented 1 month ago

> The System tab should display the model as Remote, compared to models started locally. Local models: (screenshot) Remote models: (screenshot)

Hi, I'm using Docker without a GPU, and I set config.toml correctly; the System tab looks just like you showed here. But I still receive nothing when I try to chat with the model using the web app. (screenshots)