Closed · eladamittai closed this issue 5 months ago
@eladamittai I notice that there is a colon rather than a quote at the end of your model name (`"model": "deepseek-33b-instruct:,`). Was this just a typo when copying over to GitHub issues?
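If that stray colon is really in the config, the JSON would be malformed and the entry would fail to parse. For comparison, a minimal well-formed model entry might look like the sketch below (the `title` and `provider` values are illustrative assumptions, not taken from the original report):

```json
{
  "models": [
    {
      "title": "DeepSeek 33B Instruct",
      "provider": "ollama",
      "model": "deepseek-33b-instruct"
    }
  ]
}
```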
Assuming that's not preventing the config from loading, the first thing I would check is the prompt logs: https://docs.continue.dev/troubleshooting#llm-prompt-logs. This will tell us whether the instructions are being left out entirely, or whether the problem is something else.
I would also recommend removing your system prompt. This is a detail particular to deepseek models: they automatically come with a system prompt very similar to yours, and overriding it with a different one might be causing the model to act strangely.
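As a sketch of that suggestion, assuming the `config.json` schema where a per-model `systemMessage` field overrides the system prompt, removing it means the model entry would carry no `systemMessage` at all and the model's built-in default is used instead (field names other than `model` are illustrative):

```json
{
  "models": [
    {
      "title": "DeepSeek 33B Instruct",
      "provider": "ollama",
      "model": "deepseek-33b-instruct"
    }
  ]
}
```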
@sestinj hey, thanks for the quick reply! 🙏 I removed the system prompt and checked the logs, and the entire command is being left out of the context. Instead of the command, it just puts /log before the context. I also tried again with just writing the command in the chat instead of using the slash command, and it worked fine. Same with the /edit command. So it's only for the custom commands.
Got it! I think this is related to a fix I made just yesterday. I believe it should be available in the latest pre-release, 0.9.130
@sestinj great! I'll check it out. Thanks!
It works!
Before submitting your bug report
Relevant environment info
Description
I've written a few custom slash commands, like logs and ut, and none of them work: the model just describes the code instead. In the same chat, the first time I use the command I receive a code explanation, and only when I send it again does it work. The same prompt and model work in JetBrains.
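Since the actual config did not survive the copy into this report, here is a purely illustrative sketch of how custom slash commands like these are typically declared in the `customCommands` section of a Continue `config.json` (the prompt and description strings are assumptions, not the reporter's originals):

```json
{
  "customCommands": [
    {
      "name": "logs",
      "prompt": "Add log statements to the selected code where they would help with debugging.",
      "description": "Add logging to the selection"
    },
    {
      "name": "ut",
      "prompt": "Write unit tests for the selected code.",
      "description": "Generate unit tests"
    }
  ]
}
```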
To reproduce
Here is the config:
Log output
No response