rming / dify-openai-apis

OpenAI-compatible APIs for Dify platform services.

OpenAI compatible servers / APIs | Continue.dev #3

Asterovim opened this issue 6 months ago

Asterovim commented 6 months ago

Hello. Thank you for your work.

I have tried to use the extension: https://docs.continue.dev/reference/Model%20Providers/openai

Normally it supports OpenAI-compatible servers / APIs.

I have tried everything to connect it to this extension, but I only get "connection refused" or "404 not found" errors.

Do you know why? Can you help me, please?

When I start "dify-openai-apis" I don't get any messages or logs in my terminal.

rming commented 5 months ago
Here is an example configuration for the "continue" plugin in VS Code:

{
  "models": [
    {
      "title": "OpenAI-compatible server / API",
      "provider": "openai",
      "model": "dify",
      "apiKey": "app-ZCUqNz0r37q3OfouQjDY****",
      "apiBase": "http://127.0.0.1:3000/v1",
      "useLegacyCompletionsEndpoint": false
    }
  ],
  "tabAutocompleteModel": {
    "title": "OpenAI-compatible server / API",
    "provider": "openai",
    "model": "dify",
    "apiKey": "app-ZCUqNz0r37q3OfouQjDY****",
    "apiBase": "http://127.0.0.1:3000/v1",
    "useLegacyCompletionsEndpoint": false
  },
  "allowAnonymousTelemetry": true,
  "embeddingsProvider": {
    "provider": "free-trial"
  },
  "reranker": {
    "name": "free-trial"
  }
}

The useLegacyCompletionsEndpoint field must be set to false because we did not implement the legacy /v1/completions API.

You can configure an apiKey for each model individually. The apiKey here refers to the API Key from the Dify platform.
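If Continue still reports "connection refused" or "404 not found", it can help to call the apiBase directly, outside of the plugin. Below is a minimal sketch (not part of this repo) assuming the reqwest, serde_json, and tokio crates, using the same port and Dify app key as in the config above; a 200 response means the server and key are fine, while connection refused or 404 points at the host, port, or /v1 path in apiBase.

use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();

    // Same endpoint Continue uses when useLegacyCompletionsEndpoint is false.
    let resp = client
        .post("http://127.0.0.1:3000/v1/chat/completions")
        .bearer_auth("app-ZCUqNz0r37q3OfouQjDY****") // Dify app API key
        .json(&json!({
            "model": "dify",
            "messages": [{ "role": "user", "content": "ping" }]
        }))
        .send()
        .await?;

    println!("status: {}", resp.status());
    println!("body: {}", resp.text().await?);
    Ok(())
}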

rming commented 5 months ago

When I start "dify-openai-apis" I don't get any messages or logs in my terminal.

I will include startup prompts and program status in future updates.

Asterovim commented 5 months ago

Thank you so much.

I can talk to my LLM in the sidebar ;)

Just one last problem, with tab autocomplete:

HTTP 422 Unprocessable Entity from http://127.0.0.1:7000/v1/chat/completions

Failed to deserialize the JSON body into the target type: stop: invalid type: sequence, expected a string at line 1 column 1368

Code: undefined
Error number: undefined
Syscall: undefined
Type: undefined

Error: HTTP 422 Unprocessable Entity from http://127.0.0.1:7000/v1/chat/completions
Failed to deserialize the JSON body into the target type: stop: invalid type: sequence, expected a string at line 1 column 1368
    at customFetch (c:\Users\Asterovim\.vscode\extensions\continue.continue-0.9.149-win32-x64\out\extension.js:37208:21)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async withExponentialBackoff (c:\Users\Asterovim\.vscode\extensions\continue.continue-0.9.149-win32-x64\out\extension.js:36978:26)
    at async OpenAI._streamChat (c:\Users\Asterovim\.vscode\extensions\continue.continue-0.9.149-win32-x64\out\extension.js:37739:26)
    at async OpenAI._streamComplete (c:\Users\Asterovim\.vscode\extensions\continue.continue-0.9.149-win32-x64\out\extension.js:37693:26)
    at async OpenAI.streamComplete (c:\Users\Asterovim\.vscode\extensions\continue.continue-0.9.149-win32-x64\out\extension.js:37327:26)
    at async ListenableGenerator._start (c:\Users\Asterovim\.vscode\extensions\continue.continue-0.9.149-win32-x64\out\extension.js:41182:28)

Do you have an idea? I have tried with a chatbot (basic) and a text generator, and I get the same problem :(

rming commented 5 months ago

You can try the artifact from this link: GitHub Action Run. I suspect it's an issue with JSON deserialization.
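For context, the 422 comes from the stop field: Continue's tab autocomplete sends stop as an array of strings, while the server apparently expects a single string ("invalid type: sequence, expected a string"). Below is a minimal sketch of a request type that accepts both shapes, assuming the server deserializes with serde (the error message looks like a serde rejection); the type and field names are illustrative, not the repo's actual code.

use serde::Deserialize;

// OpenAI's `stop` parameter may be either a single string or an array of
// strings, so the request type has to accept both shapes.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum StopSequences {
    One(String),
    Many(Vec<String>),
}

#[derive(Debug, Deserialize)]
struct ChatCompletionRequest {
    model: String,
    // ... other OpenAI chat-completion fields elided ...
    #[serde(default)]
    stop: Option<StopSequences>,
}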

Asterovim commented 5 months ago


Thank you, I will try it next Friday.