xNul / code-llama-for-vscode

Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.

Continue can't recognize the content of JSON file #13

Closed: dragonslayer18 closed this 2 months ago

dragonslayer18 commented 6 months ago

I downloaded codellama-7B and configured Continue's `config.json` like this: `{"title": "LocalServer", "provider": "openai", "model": "codellama-7b-Instruct", "apiBase": "http://localhost:8000/v1/"}`. Then I run `llamacpp_mock_api.py`. Code Llama runs fine on my machine: it receives the POST JSON from Continue and generates the LLM content correctly. But when I return the JSON, Continue can't recognize the format and shows nothing. How did you figure out the JSON format Continue expects? I see the code adds "onesix" to the front of the JSON, and I can't find a definition of the format in Continue's docs. Is it possible the Continue plugin updated the format? The current JSON-generating code is: `"onesix" + jsonify({"choices": [{"delta": {"role": "assistant", "content": response}}]}).get_data(as_text=True)`. How can I generate JSON that Continue will display?
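For context, with `"provider": "openai"` Continue consumes the standard OpenAI chat-completions streaming format: Server-Sent Events in which each event is a `data: ` line carrying a `chat.completion.chunk` JSON object, and the stream is terminated by `data: [DONE]`. Below is a minimal Flask sketch of such an endpoint; `generate_tokens` is a hypothetical stand-in for the real model call, and the exact fields Continue requires may vary by version:

```python
import json
import time

from flask import Flask, Response, request

app = Flask(__name__)


def generate_tokens(prompt):
    # Hypothetical stub: in the real server this would stream tokens
    # from the Code Llama model instead of canned text.
    for token in ["Hello", ",", " world", "!"]:
        yield token


@app.route("/v1/chat/completions", methods=["POST"])
def chat_completions():
    body = request.get_json()
    prompt = body["messages"][-1]["content"]

    def stream():
        for token in generate_tokens(prompt):
            chunk = {
                "object": "chat.completion.chunk",
                "created": int(time.time()),
                "model": body.get("model", "codellama-7b-Instruct"),
                "choices": [
                    {"index": 0, "delta": {"content": token}, "finish_reason": None}
                ],
            }
            # One Server-Sent Event per chunk: "data: ", the JSON payload,
            # then a blank line. OpenAI-compatible clients parse this framing.
            yield "data: " + json.dumps(chunk) + "\n\n"
        # A literal [DONE] sentinel ends the stream.
        yield "data: [DONE]\n\n"

    return Response(stream(), mimetype="text/event-stream")


if __name__ == "__main__":
    app.run(port=8000)
```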

xNul commented 2 months ago

I've updated Code Llama for VSCode to support the latest versions of Continue and the llama.cpp server, and I've updated the instructions, so this should be resolved now :)
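For anyone landing here later, here's a sketch of the kind of `config.json` entry the updated setup calls for, with Continue talking to the llama.cpp server directly. The field names follow Continue's `models` schema, but the title and the port (llama.cpp server's default of 8080) are assumptions, so adjust them to your setup:

```json
{
  "models": [
    {
      "title": "Code Llama (llama.cpp)",
      "provider": "llama.cpp",
      "model": "codellama-7b-instruct",
      "apiBase": "http://localhost:8080"
    }
  ]
}
```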

The source code for the Continue VSCode extension can be found on your hard drive, because it's just JavaScript that gets downloaded locally and run in VSCode. I don't remember exactly where, but it can probably be found with a Google search. This is how I was able to reverse engineer how Continue communicates with llama.cpp.
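If you want to dig through those sources yourself: VSCode installs extensions per user under `~/.vscode/extensions`, one `<publisher>.<name>-<version>` folder per extension. A small sketch to locate the Continue bundle (the `continue.continue-*` glob assumes Continue's publisher ID and may differ):

```python
from pathlib import Path

# VSCode keeps installed extensions in the user profile, one
# "<publisher>.<name>-<version>" directory per extension.
extensions_dir = Path.home() / ".vscode" / "extensions"

# Continue's bundled JavaScript should live under a directory
# matching this glob (publisher ID assumed to be "continue").
for path in sorted(extensions_dir.glob("continue.continue-*")):
    print(path)
```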

Also, I'll close this issue since it has likely been solved. Let me know if you'd like to reopen it.