Open wojtekcz opened 11 months ago
Llama 2, nice. Will take a look. Correctly formatting the markdown has been bug-prone, will try to figure out what's going wrong here.
The markdown export should have the raw output from the model (other than the `**[ChatGPT]**` parts), but there's still a chance non-raw output is getting in there.
Thinking of adding a way to see the raw output in the chat UI, e.g. via a toggle, since users sometimes want the actual Markdown source, and it can be tricky to get it without the plugin converting it to HTML.
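One way such a toggle could avoid fighting the renderer is to wrap the raw response in a fenced block so the webview shows the literal Markdown instead of rendering it. A minimal sketch, assuming nothing about the extension's actual code (the function name and fence-sizing approach are illustrative):

```typescript
// Hypothetical helper: wrap a raw model response so a Markdown view
// displays it verbatim rather than rendering it to HTML.
function showRawMarkdown(raw: string): string {
  // Use a fence longer than any backtick run inside the response,
  // so embedded ``` blocks cannot terminate the wrapper early.
  const runs = raw.match(/`+/g) ?? [];
  const longest = Math.max(0, ...runs.map((r) => r.length));
  const fence = "`".repeat(Math.max(4, longest + 1));
  return `${fence}markdown\n${raw}\n${fence}`;
}
```

Sizing the fence to exceed the longest backtick run in the response is what keeps responses that themselves contain fenced code blocks intact.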
Describe the Bug
At the end of rendered model responses, sometimes one or more `>` characters are being added. I don't know how to determine whether it is bad model output or a problem with rendering. Is there a way to examine the raw, streamed model responses? There are other rendering and parsing problems with code sections, which can be seen in the attached file.
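To narrow down whether the stray `>` comes from the model or from the renderer, one could inspect the streamed text before it reaches the Markdown-to-HTML step. A hedged sketch (the helper name is hypothetical, not part of the extension):

```typescript
// Count stray '>' characters at the very end of a raw model response.
// If this is non-zero on the raw stream, the model emitted them;
// if it is zero but '>' still appears in the chat UI, the renderer
// (or blockquote parsing) is adding them.
function trailingAngleBrackets(raw: string): number {
  const match = raw.trimEnd().match(/>+$/);
  return match ? match[0].length : 0;
}
```

Logging this per streamed response would separate a model-output bug from a rendering bug without reading every transcript by hand.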
chat_6_transcript.md
Where are you running VSCode? (Optional)
macOS
Which OpenAI model are you using? (Optional)
Llama 2-based model, running on a Mac, served with LM Studio
Additional context (Optional)
extension v3.19.1