Open Piggeldi2013 opened 4 days ago
@Piggeldi2013 have you tried increasing INFERENCE_CONTEXT_LENGTH
a bit? The default is pretty low. Can you give it a try and let me know if it leads to better results without adding those instructions at the end?
Describe the feature you'd like
The original prompts for generating the tags seem to work fine for GPTx models, but with a local LLM the JSON may not be generated in the form hoarder expects, and tagging fails.
Because the bookmark content is placed into the prompt after the user prompt additions, large input texts may be truncated, and tag generation then also does not work reliably.
I'd therefore suggest adding the following to the bottom of the tag-generation prompt:

```
- Example JSON as follows: {
  "tags": [
    "tag1",
    "tag2"
  ]
}
```
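To illustrate why the example helps, here is a minimal Python sketch of the idea: append an explicit JSON example to the prompt and parse the model's reply defensively. The function and variable names are hypothetical, not Hoarder's actual implementation.

```python
import json

# Hypothetical example JSON appended to the prompt (mirrors the suggestion above).
EXAMPLE_JSON = '{"tags": ["tag1", "tag2"]}'

def build_prompt(base_prompt: str) -> str:
    # Smaller local models follow the format more reliably when they
    # see the exact JSON shape expected.
    return f"{base_prompt}\n- Example JSON as follows: {EXAMPLE_JSON}"

def parse_tags(reply: str) -> list[str]:
    # Local models sometimes wrap the JSON in extra prose; extract the
    # first {...} span before parsing, and fall back to an empty list.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        return []
    try:
        data = json.loads(reply[start:end + 1])
    except json.JSONDecodeError:
        return []
    return [t for t in data.get("tags", []) if isinstance(t, str)]

print(parse_tags('Sure! Here is the JSON: {"tags": ["ai", "llm"]}'))
```

This is only a sketch of the parsing side; the feature request itself is just the prompt addition shown above.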
Describe the benefits this would bring to existing Hoarder users
This addition to the prompt, tested on my local instance, led to faster responses and to valid JSON being returned by the LLM.
Can the goal of this request already be achieved via other means?
Only in part, as described in the feature description.
Have you searched for an existing open/closed issue?
Additional context
No response