continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains
https://docs.continue.dev/
Apache License 2.0

'inferenceConfig.stopSequences' failed to satisfy constraint: Member must have length less than or equal to 4 #2538

Open hongbo-miao opened 4 days ago

hongbo-miao commented 4 days ago


Relevant environment info

{
  "models": [
    {
      "title": "claude-3-5-sonnet",
      "provider": "openai",
      "apiBase": "https://litellm.example.com",
      "apiKey": "xxx",
      "model": "claude-3-5-sonnet"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "openai",
    "apiBase": "https://litellm.example.com",
    "apiKey": "xxx",
    "model": "claude-3-5-sonnet",
    "maxStopWords": 4
  },
  "tabAutocompleteOptions": {
    "debounceDelay": 500,
    "maxPromptTokens": 1500
  },
  "slashCommands": [
    {
      "name": "edit",
      "description": "Edit highlighted code"
    },
    {
      "name": "comment",
      "description": "Write comments for the highlighted code"
    },
    {
      "name": "share",
      "description": "Export the current chat session to markdown"
    },
    {
      "name": "cmd",
      "description": "Generate a shell command"
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "contextProviders": [
    {
      "name": "diff",
      "params": {}
    },
    {
      "name": "open",
      "params": {}
    },
    {
      "name": "terminal",
      "params": {}
    }
  ],
  "embeddingsProvider": {
    "provider": "free-trial"
  },
  "reranker": {
    "name": "free-trial"
  }
}

Description

My Continue "chat" and "edit" functions work well. Thanks! ☺️

Now I am trying to set up autocomplete. Here is my setup:

AWS Bedrock -> LiteLLM (proxy & gateway) -> Continue

Based on the error I got:

inferenceConfig.stopSequences' failed to satisfy constraint: Member must have length less than or equal to 4

I tried to add "maxStopWords": 4 based on https://github.com/continuedev/continue/blob/4124c214eb885842d6b5d975dfc00650a518bd7f/extensions/vscode/config_schema.json#L267-L271

which looks like this:

  ...
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "openai",
    "apiBase": "https://litellm.example.com",
    "apiKey": "xxx",
    "model": "claude-3-5-sonnet",
    "maxStopWords": 4
  },
  "tabAutocompleteOptions": {
    "debounceDelay": 500,
    "maxPromptTokens": 1500
  },
  ...

But I still got the same error, so this "maxStopWords": 4 does not seem to be working.

To reproduce

  1. Open any code file, such as a Python file
  2. Type some code and wait for Continue to autocomplete

Log output

Continue log (JetBrains: ~/.continue/logs/core.log):

[2024-10-15T21:10:31] Error generating autocompletion:  Error: Malformed JSON sent from server: {"error": {"message": "litellm.BadRequestError: BedrockException - {\"message\":\"1 validation error detected: Value '[</COMPLETION>, \\n\\n, \\r\\n\\r\\n, /src/, #- coding: utf-8, , \\ndef, \\nclass, \\n\\\"\\\"\\\"#]' at 'inferenceConfig.stopSequences' failed to satisfy constraint: Member must have length less than or equal to 4\"}", "type": null, "param": null, "code": "400"}}

LiteLLM log: https://gist.github.com/hongbo-miao/b10b9785997e6078b9290cb30af5ccf2

Potential Related Issues

malaki12003 commented 3 days ago

@hongbo-miao, it appears that the issue is related to stopSequences. According to the AWS documentation, the number of stopSequences should be between 0 and 4. The values for stopSequences are derived from stop. ~~I believe if you add completionOptions as shown below, the issue might be resolved.~~

  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "openai",
    "apiBase": "https://litellm.example.com",
    "apiKey": "xxx",
    "model": "claude-3-5-sonnet",
    "maxStopWords": 4,
    "completionOptions": {
      "stop": []
    }
  }

malaki12003 commented 3 days ago

@sestinj @Patrick-Erichsen, I believe the issue lies in the following lines: https://github.com/continuedev/continue/blob/4124c214eb885842d6b5d975dfc00650a518bd7f/core/autocomplete/completionProvider.ts#L637-L646

We likely need to limit the number of items in the stop array by introducing something like maxStopWords here: https://github.com/continuedev/continue/blob/4124c214eb885842d6b5d975dfc00650a518bd7f/core/llm/llms/Bedrock.ts#L114 If you'd like, I'm happy to take care of the fix. :)
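A minimal sketch of the proposed fix, assuming the stop sequences can simply be truncated to the model's maxStopWords limit before the request is built (the function name and shape here are illustrative, not the actual Continue source in Bedrock.ts):

```typescript
// Hypothetical helper: cap the number of stop sequences sent to the
// provider. Bedrock rejects requests whose inferenceConfig.stopSequences
// has more than 4 members, so any extra entries are dropped.
function limitStopSequences(
  stop: string[] | undefined,
  maxStopWords: number,
): string[] {
  return (stop ?? []).slice(0, maxStopWords);
}

// The failing autocomplete request carried 6+ stop sequences; after
// truncation only the first 4 remain.
const limited = limitStopSequences(
  ["</COMPLETION>", "\n\n", "\r\n\r\n", "/src/", "\ndef", "\nclass"],
  4,
);
// limited.length === 4
```

Truncating (rather than erroring) keeps autocomplete functional for providers with strict limits, at the cost of silently ignoring the lower-priority stop sequences.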

hongbo-miao commented 3 days ago

Thanks @malaki12003. I can confirm that adding the extra completionOptions still does not work, so some source code updates may be needed.

  "tabAutocompleteModel": {
    ...
    "maxStopWords": 4,
    "completionOptions": {
      "stop": []
    }
  }

Patrick-Erichsen commented 2 days ago

Hi @malaki12003 , that would be awesome! 😁 I think your proposed solution makes sense.

malaki12003 commented 2 days ago

@Patrick-Erichsen I’ve added comments in the code to highlight areas for potential future improvements. I believe incorporating TODOs in the code will be helpful for tracking enhancements and future refactoring efforts.