Closed — hamidzr closed this issue 9 months ago
Hi @hamidzr
Thank you for your interest and effort. I agree that this issue should be fixed with proper roles. Since we can't adapt ShellGPT prompts to every LLM, I think we will change the default model of ShellGPT to one of the GPT-4 models in coming updates.
I've just stumbled upon the same issue with Mistral 7B Instruct (which is provided as an example in the Ollama guide), and it's unfortunately much more stubborn about adding Markdown code blocks. I see no harm in having this feature, since it's extremely unlikely for an actual shell command to start with `` ```bash ``. (Although one could argue that a model unable to refrain from using Markdown is unlikely to be useful...)
Same here. I use GPT-4 Free with the ChatGPT authenticated API through 'HAR' files, so it's like being forwarded directly to free ChatGPT. It also cannot strip Markdown from replies. :) Regarding your note in parentheses, this model is definitely useful; it's GPT-4. I would expect @hamidzr's version to be merged, as I think that would resolve the issue.
With some more recent models, whenever I use the shell output format the response includes something like this, and we take those out here. We might be able to achieve this with better role descriptions (aka prompts), but I couldn't get that to reliably work, and this is easier to reason about. Tested on the following model
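The fence-stripping being discussed could be sketched roughly like this — note that `strip_md_fences` is a hypothetical helper for illustration, not ShellGPT's actual code, and the exact regex a patch would use may differ:

```python
import re

def strip_md_fences(response: str) -> str:
    """Remove a wrapping Markdown code fence (e.g. ```bash ... ```)
    from a model response, returning the bare shell command.

    Hypothetical sketch; the real implementation may handle more cases.
    """
    text = response.strip()
    # Match an opening fence with an optional language tag, the command
    # body, and a closing fence. DOTALL lets the body span multiple lines.
    match = re.match(r"^```[a-zA-Z0-9_+-]*\n(.*?)\n?```$", text, re.DOTALL)
    if match:
        return match.group(1).strip()
    # No fence found: return the response unchanged.
    return text

print(strip_md_fences("```bash\nls -la\n```"))  # -> ls -la
print(strip_md_fences("ls -la"))                # unchanged
```

Since real shell commands almost never begin with a literal `` ``` ``, stripping only a full fence wrapper like this is safe: unfenced responses pass through untouched.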