karthink / gptel

A simple LLM client for Emacs

ollama generated code wrapped in markdown #334

GitHubGeek opened this issue 4 days ago

GitHubGeek commented 4 days ago

Sorry if this is already documented or discussed. I'm using ollama to generate TypeScript code. It'd be great if the generated code weren't wrapped in markdown code blocks:

[Screenshot: generated TypeScript output wrapped in a markdown code block]

How can I configure gptel or ollama to output plain text, or better, output the code as plain text with the description as code comments?

karthink commented 4 days ago

You need to specify the kind of output you want in the system message. Something like:

You are a careful programmer. Provide code and only code as output without any additional text, prompt or note. Do NOT use markdown code blocks (```) to format the code.

System messages can be quite involved these days (> 300 words). You can give specific instructions, and lots of them, explaining exactly what you want.
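If you end up reusing a message like that, one way to keep it handy is to register it as a named directive. A minimal sketch, assuming gptel's gptel-directives alist of named system messages (the code-only name is just for illustration):

;; Register a code-only directive in gptel's alist of system messages.
;; The name `code-only' is arbitrary; select it from gptel's transient
;; menu before sending a request.
(add-to-list
 'gptel-directives
 '(code-only . "You are a careful programmer. Provide code and only code as output without any additional text, prompt or note. Do NOT use markdown code blocks to format the code."))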

If that still fails, you can see this item on the wiki. But it's better to get the model to understand the format you want the response in.

karthink commented 4 days ago

(Please let us know after you find a solution.)

GitHubGeek commented 4 days ago

Some models seem to insist on markdown code blocks, even with a very specific directive. Thanks for the wiki link, I'll try it out, cheers!

karthink commented 4 days ago

Some models seem to insist on markdown code blocks, even with a very specific directive.

Hopefully you got them to stop adding explanations and fluff around the code at least (like "Here's your TypeScript function and a Jest test suite").

karthink commented 4 days ago

You can also try some of the coder models (there are several, including Llama3-based ones); those might be better at not generating prose around the code. A sketch of pointing gptel at one is shown below.
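For example, something along these lines registers an ollama backend serving a coder model, using the gptel-make-ollama constructor from the README. The host and the codellama:latest model name are placeholders for whatever you run locally, and depending on your gptel version the model may need to be a string rather than a symbol:

;; Register an Ollama backend that serves a coder model and make it the
;; default.  Adjust :host and the model name to match your ollama setup.
(setq gptel-model 'codellama:latest
      gptel-backend (gptel-make-ollama "Ollama-code"
                      :host "localhost:11434"
                      :stream t
                      :models '(codellama:latest)))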

GitHubGeek commented 4 days ago

Indeed, llama3 is better at outputting code with no fluff. Others, such as deepseek, insist on markdown.

karthink commented 4 days ago

Okay. There are no other controls available from gptel to improve this situation. You can modify the gptel-post-response-functions hook-based solution from the wiki to heuristically remove unwanted text.
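Roughly something like this, as a sketch (the function name my/gptel-strip-code-fences is made up; gptel-post-response-functions passes each function the start and end positions of the inserted response):

;; Strip markdown code fence lines from the freshly inserted response.
;; BEG and END are the bounds gptel passes to the hook.
(defun my/gptel-strip-code-fences (beg end)
  "Remove markdown code fence lines between BEG and END."
  (save-excursion
    (goto-char beg)
    (let ((bound (copy-marker end)))
      (while (re-search-forward "^```[^`\n]*$" bound t)
        (delete-region (line-beginning-position)
                       (min (1+ (line-end-position)) (point-max))))
      (set-marker bound nil))))

(add-hook 'gptel-post-response-functions #'my/gptel-strip-code-fences)

You could extend the regexp or add further passes to remove other unwanted text heuristically.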

karthink commented 3 days ago

Can I close this issue now?

daedsidog commented 3 days ago

Indeed, llama3 is better at outputting code with no fluff. Others, such as deepseek, insist on markdown.

LLMs really love markdown, and will often insist on it despite your protests. I found out the hard way that it's not really worth fighting them over it. The same goes for indentation or column width when using inline code: the models often just don't know.

Using a hook to clean the result is the best solution IMO.