twinnydotdev / twinny

The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
https://twinny.dev
MIT License
2.3k stars · 126 forks

Incomplete Code Autocompletion and Non-Responsive Chat UI in Twinny Extension #231

Closed: YXTR closed this issue 2 months ago

YXTR commented 2 months ago

Describe the bug

The code autocompletion feature does not work and no responses are received in chat; the UI continuously shows a loading spinner.

To Reproduce

  1. Install the extension (v3.11.31) in VS Code (v1.88.1).
  2. Launch Ollama (Windows version).
  3. Successfully execute the following test via the command line: curl http://localhost:11434/api/generate -d '{ "model": "llama3", "prompt": "Why is the sky blue?" }' (see screenshot below; a scripted equivalent is sketched after this list).
  4. Configure the provider (see details below).
  5. Attempt code autocompletion and chat interaction.
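
For reference, here is a minimal Python equivalent of the curl check in step 3. It is only a sketch: it assumes Ollama is listening on the default localhost:11434 and that the requests package is installed.

```python
# Same connectivity check as the curl command in step 3, as a small script.
# Assumes Ollama is running on the default port and `requests` is installed.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's answer text
```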

Expected behavior

The system should complete code snippets and reply to dialogues.

Screenshots

image

API Provider:

image

Successfully executed the following test via the command line:

image

Logging

2024-04-29 21:37:05.947 [error] [rjmacarthy.twinny] provider FAILED
2024-04-29 21:37:05.947 [error] Error: Parsing failed
    at Parser.parse (c:\Users\yxtr\.vscode\extensions\rjmacarthy.twinny-3.11.31\out\index.js:2:219870)
    at t.CompletionProvider.provideInlineCompletionItems (c:\Users\yxtr\.vscode\extensions\rjmacarthy.twinny-3.11.31\out\index.js:2:124150)
    at async U.provideInlineCompletions (c:\Users\yxtr\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\api\node\extensionHostProcess.js:152:108044)

API Provider

FIM:
Label: LLama 3 Code
Provider: ollama
Type: fim
Fim Template: automatic
Hostname: localhost
Path: /api/generate
Protocol: http
Port: 11434

Chat:
Label: LLama 3 Chat
Provider: ollama
Type: chat
Hostname: localhost
Path: /api/chat
Protocol: http
Port: 11434

Chat or Auto Complete?

Both Auto Complete and Chat.

Model Name

llama3:latest


Additional context

This issue occurs on the Windows version of Ollama.

VSCode Version:

Version: 1.88.1 (user setup)
Commit: e170252f762678dec6ca2cc69aba1570769a5d39
Date: 2024-04-10T17:41:02.734Z
Electron: 28.2.8
ElectronBuildId: 27744544
Chromium: 120.0.6099.291
Node.js: 18.18.2
V8: 12.0.267.19-electron.0
OS: Windows_NT x64 10.0.22631
rjmacarthy commented 2 months ago

Hello, you have some issues with your configurations.

FIM: llama3:latest does not support FIM completions; you need to use a different model or customise the FIM template (results will be poor with llama3). For FIM, codellama:7b-code or a deepseek base model is recommended. The rest of the FIM configuration in your screenshot is correct.
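
For illustration, here is a rough sketch of what a fill-in-the-middle request looks like with codellama's <PRE>/<SUF>/<MID> infill tokens (this is not twinny's own prompt template, just the general shape). llama3 was never trained on these tokens, which is why it cannot produce usable FIM output. The example assumes codellama:7b-code has already been pulled.

```python
# Illustrative FIM request against Ollama using codellama's infill tokens.
# This is a sketch only, not twinny's actual prompt template.
import requests

prefix = "import pandas as pd\n\ndf = "
suffix = "\nprint(df.head())\n"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama:7b-code",
        "prompt": f"<PRE> {prefix} <SUF>{suffix} <MID>",
        "stream": False,
        "options": {"temperature": 0.2, "stop": ["<EOT>"]},
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # text to insert between prefix and suffix
```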

CHAT: You are using the wrong endpoint; you should be using /v1/chat/completions
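
As a quick sanity check outside the extension, the corrected endpoint can be exercised directly (a sketch assuming Ollama's OpenAI-compatible API on the default port):

```python
# Verify Ollama's OpenAI-compatible chat endpoint directly.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```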

Please check https://github.com/rjmacarthy/twinny/blob/main/docs/providers.md for more information and recommended configurations.

Regards,

YXTR commented 2 months ago

Thank you for the quick response! I'll try out your suggestions right away.

Best regards,

YXTR commented 2 months ago

> CHAT: You are using the wrong endpoint; you should be using /v1/chat/completions

Thank you for your help with the chat endpoint; it's working great now!

> FIM: llama3:latest does not support FIM completions; you need to use a different model or customise the FIM template (results will be poor with llama3). For FIM, codellama:7b-code or a deepseek base model is recommended. The rest of the FIM configuration in your screenshot is correct.

Regarding FIM, I've switched to using codellama:7b-code with the codellama template as you suggested. The extension provided code suggestions correctly the first time, but then the same issue occurred again. Could you please advise on what might be causing this and how to resolve it?

In the screenshot below, you can see that the model generates pd.read_csv(...) correctly, but then it fails to proceed further.

image
rjmacarthy commented 2 months ago

The configuration looks correct. Please try restarting VS Code.

Edit: I just noticed that there is an issue with completions when no tree-sitter parser is available; please try the latest version and let me know how you get on.
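
To illustrate the failure mode (purely a sketch, not twinny's actual code): if no tree-sitter parser is available for the current language, parsing the completion throws, which matches the "Parsing failed" error in the log above; the robust behaviour is to fall back to the raw completion.

```python
# Illustrative guard only, not twinny's implementation: skip syntax-aware
# post-processing when no tree-sitter parser exists for the language,
# instead of failing the whole completion with "Parsing failed".
def postprocess_completion(completion: str, parser=None) -> str:
    if parser is None:
        # No grammar available for this language; return the raw completion.
        return completion
    tree = parser.parse(completion.encode("utf-8"))  # py-tree-sitter style API
    # ...syntax-aware trimming using `tree` would go here...
    return completion
```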

Many thanks,

YXTR commented 2 months ago

The latest version works great!