continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains
https://docs.continue.dev/
Apache License 2.0

Custom LLM Provider works only on first message and hangs on subsequent messages #1080

Open foongzy opened 6 months ago

foongzy commented 6 months ago


Relevant environment info

- OS: Windows 11
- Continue: v0.0.39, v0.0.41
- IDE: PyCharm Community Edition 2023.3.3, IntelliJ IDEA Community 2023.3.4 (Windows)
- Model: Custom model

Description

Main Issue

When defining a custom LLM provider according to the docs (https://continue.dev/docs/model-setup/configuration#defining-a-custom-llm-provider) and saving the config.ts file in PyCharm or IntelliJ, the Continue extension only works for the first message. After that first message, an SQLITE_CONSTRAINT: NOT NULL error appears:

(screenshot: SQLITE_CONSTRAINT: NOT NULL error)

After which, subsequent messages just hang:

(screenshot: chat hanging on the second message)

I tried both v0.0.39 and v0.0.41 of the Continue extension and hit the same issue. The issue is not present in VS Code, however, and the same config.ts file works fine there.

Side issue:

I believe the documentation here (https://continue.dev/docs/model-setup/configuration#defining-a-custom-llm-provider) is outdated as well. This code from the docs throws an error (it names the generator streamComplete instead of streamCompletion, and it never returns config):

export function modifyConfig(config: Config): Config {
  config.models.push({
    options: {
      title: "My Custom LLM",
      model: "mistral-7b",
    },
    streamComplete: async function* (prompt, options) {
      // Make the API call here

      // Then yield each part of the completion as it is streamed
      // This is a toy example that will count to 10
      for (let i = 0; i < 10; i++) {
        yield `- ${i}\n`;
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    },
  });
}

This is the working one that I used to replicate the above errors in the IDE:

export function modifyConfig(config: Config): Config {
  config.models.push({
    options: {
      title: "MyModel",
      model: "customModel",
    },
    streamCompletion: async function* (prompt, options) {
      // Make the API call here

      // Then yield each part of the completion as it is streamed
      // This is a toy example that will count from 0 to 4
      for (let i = 0; i < 5; i++) {
        yield `- ${i}\n`;
        await new Promise((resolve) => setTimeout(resolve, 100));
      }
    },
  });
  return config
}
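
For the "Make the API call here" placeholder, a hedged sketch of what real streaming logic might look like (this is not Continue's code): many OpenAI-compatible endpoints stream server-sent-event lines of the form `data: {...}`, and the generator below turns such chunks into completion tokens. The `{"token": "..."}` payload shape is an assumption for illustration.

```typescript
// Hypothetical sketch: turn SSE-style chunks ("data: {...}" lines) into
// completion tokens. The {"token": "..."} payload shape is assumed.
async function* sseTokens(chunks: Iterable<string>): AsyncGenerator<string> {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any incomplete trailing line
    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const payload = line.slice("data: ".length).trim();
      if (payload === "[DONE]") return; // end-of-stream sentinel
      yield JSON.parse(payload).token as string;
    }
  }
}
```

Inside streamCompletion you would feed sseTokens with chunks read from the HTTP response body and yield each token as it arrives.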

To reproduce

  1. Change contents of config.ts file to:

    export function modifyConfig(config: Config): Config {
      config.models.push({
        options: {
          title: "MyModel",
          model: "customModel",
        },
        streamCompletion: async function* (prompt, options) {
          // Make the API call here

          // Then yield each part of the completion as it is streamed
          // This is a toy example that will count from 0 to 4
          for (let i = 0; i < 5; i++) {
            yield `- ${i}\n`;
            await new Promise((resolve) => setTimeout(resolve, 100));
          }
        },
      });
      return config;
    }
  2. Launch IDE (either PyCharm or IntelliJ)
  3. Select MyModel at the bottom of the Continue extension
  4. Send a message to MyModel. It will count from 0 to 4, and the SQLITE_CONSTRAINT error should appear.
  5. Send a second message to MyModel. It will hang.

Log output

[info] Starting Continue core...
[info] Starting Continue core... 
[info] Exiting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core... 
[info] Exiting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core... 
[info] Exiting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core... 
[info] Exiting Continue core... 
[info] Starting Continue core...
[info] Starting Continue core...
ghost commented 5 months ago

+1

sestinj commented 5 months ago

@foongzy @timdahlmanns thanks for sharing all of the details, that made the fix quite straightforward! It looks like we just weren't setting providerName in the CustomLLM class: https://github.com/continuedev/continue/commit/c3d3980e97af39ef75a8112963a18644ec807a69

I'm finishing up JetBrains 0.0.46 but the store may not accept it until Monday
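
To make the failure mode concrete, here is an illustrative sketch (not Continue's actual code) of why a never-set providerName would surface as a NOT NULL constraint failure: if the extension persists chat records to SQLite with a schema along the lines of `provider TEXT NOT NULL`, a CustomLLM with an undefined providerName produces a row that the database rejects. The row shape and function below are assumptions for illustration only.

```typescript
// Illustrative only: a dev-data row with a NOT NULL provider column.
interface DevDataRow {
  provider: string; // assumed NOT NULL column in the SQLite schema
  model: string;
}

// Building a row for a CustomLLM that never set providerName fails, which
// mirrors "SQLITE_CONSTRAINT: NOT NULL constraint failed" at the app level.
function toDevDataRow(providerName: string | undefined, model: string): DevDataRow {
  if (providerName === undefined) {
    throw new Error("NOT NULL constraint failed: provider");
  }
  return { provider: providerName, model };
}
```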

sestinj commented 5 months ago

On the mention of the docs fix: are you seeing red underlines in config.ts, or is it also failing to load entirely before you remove those type annotations? It looks more like this may be a mistake in how we're shipping the types file than an error in the documentation.

foongzy commented 5 months ago

@sestinj thanks for looking into it. For the docs fix, I think it's just a typo: streamComplete vs streamCompletion. The file fails to load when using streamComplete, and the error hints to use streamCompletion instead.

timdah commented 5 months ago

> @foongzy @timdahlmanns thanks for sharing all of the details, that made the fix quite straightforward! It looks like we just weren't setting providerName in the CustomLLM class: c3d3980
>
> I'm finishing up JetBrains 0.0.46 but the store may not accept it until Monday

Thanks for the quick fix, but I still get the SQLITE_CONSTRAINT error in the 0.0.46 version.

foongzy commented 5 months ago

> @foongzy @timdahlmanns thanks for sharing all of the details, that made the fix quite straightforward! It looks like we just weren't setting providerName in the CustomLLM class: c3d3980 I'm finishing up JetBrains 0.0.46 but the store may not accept it until Monday

> Thanks for the quick fix, but I still get the SQLITE_CONSTRAINT error in the 0.0.46 version.

@sestinj the SQLITE_CONSTRAINT error still exists for me too

foongzy commented 3 months ago

Hi @sestinj, any updates regarding this bug?

timdah commented 3 months ago

The error is fixed for me in the latest version.

foongzy commented 3 months ago

> The error is fixed for me in the latest version.

ahhh yes, thanks @timdah!