Closed: lightningRalf closed this issue 9 months ago
I do not understand how this should work. Do you have an example of a prompt that would do the labelling reliably with a supplied context as the source? Also, would this mean we have to supply the whole document to the LLM as context?
Example prompt: 'Please create keywords for the provided context in the following way: "#ai/{keyword}". List these under the frontmatter property tags as an indented bullet-point list. If there are already keywords, check whether the newly created #{keyword} already exists; if not, add it with the syntax "#ai/{keyword}". If you find multiple keywords, add them accordingly.'
You should probably give an example of the YAML frontmatter before and after the execution. And once it works, we could try to figure out how to shorten the prompt.
It could do the whole document in one go, or chunk it accordingly, depending on the size of the document. But everybody does atomic notes these days anyway?! ;)
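As a rough before/after sketch of the frontmatter (the keyword values here are invented for illustration, and note that tags inside the YAML frontmatter are usually written without the leading #, since # starts a comment in YAML):

Before:

```yaml
---
tags:
  - project
---
```

After:

```yaml
---
tags:
  - project
  - ai/knowledge-management
  - ai/local-llm
---
```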
Thank you for providing an example. This looks like a cool thing to have, but it is ultimately out of the scope of this plugin. This plugin is specifically built for chat functionality, and indexing is its main part. The proposed feature does not need indexing at all, so you do not need a 3rd-party proxy (currently the Python program) sitting between you and your LLM. A project that would be better suited for this work (and my inspiration as well) is https://github.com/hinterdupfinger/obsidian-ollama. Or we could just make another plugin, since it is easy to install and write plugins for Obsidian.
Perfect use case for this: a local LLM for tagging. In my opinion, one could argue that a local LLM could run overnight through your markdown notes and tag them with keywords, and when you come back in the morning you could analyse the new tags created by the AI to find new connections you didn't know existed.
For example: use #ai/ as the prefix for the keywords being created. A rough sketch of such an overnight tagger is below.
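A minimal sketch of such an overnight tagger, written as a standalone script rather than as part of this plugin. It assumes a local Ollama server at http://localhost:11434; the vault path, model name, and prompt wording are placeholders, and the frontmatter handling is deliberately naive:

```python
import json
import re
from pathlib import Path
from urllib.request import Request, urlopen

VAULT = Path("~/vault").expanduser()  # assumption: path to your Obsidian vault
MODEL = "llama3"                       # assumption: any model you have pulled locally

PROMPT = (
    "List 3-5 keywords for the following note, one per line, "
    "lowercase, no punctuation:\n\n{text}"
)

def keywords(text: str) -> list[str]:
    """Ask the local LLM for keywords and turn them into ai/ tags."""
    body = json.dumps(
        {"model": MODEL, "prompt": PROMPT.format(text=text), "stream": False}
    ).encode()
    req = Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        answer = json.loads(resp.read())["response"]
    return [
        "ai/" + line.strip().lstrip("#").replace(" ", "-")
        for line in answer.splitlines()
        if line.strip()
    ]

def tag_note(path: Path) -> None:
    """Append new ai/ tags to the note's frontmatter (very naive YAML handling)."""
    text = path.read_text(encoding="utf-8")
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return  # this sketch skips notes without frontmatter
    front = match.group(1)
    new_lines = ["  - " + tag for tag in keywords(text) if tag not in front]
    if not new_lines:
        return
    if "tags:" in front:
        # insert the new tags right after the existing tags: key
        front = front.replace("tags:", "tags:\n" + "\n".join(new_lines), 1)
    else:
        front += "\ntags:\n" + "\n".join(new_lines)
    path.write_text("---\n" + front + "\n---\n" + text[match.end():], encoding="utf-8")

if __name__ == "__main__":
    for note in VAULT.rglob("*.md"):
        tag_note(note)
```

Running something like this via cron or a scheduled task before going to bed would give the "review your new #ai/ tags in the morning" workflow described above.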