@macin The best way to build this is with the workflows feature: https://caretplugin.ai/workflows
I mocked up one and the results seem good. Screenshot attached (although hard to read), and I've also attached the markdown file of the workflow I used. You should be able to add that to your vault and use it. Let me know if that doesn't work though, I haven't experimented much with making these shareable like this.
Hi @jcollingj This is sort of OK, but I was thinking about something more generic. See, the above requires you to know upfront all the content you want to ask about. What I was looking for was a way to fully automate the drill-down process.
I've recorded manual steps I follow now to achieve what I need: https://www.youtube.com/watch?v=O-ADstLPEsw
As you can see there is a lot of repetitive work there, i.e.:
- split the generated text into separate cards
- add a user node with the prompt (always the same)
- generate the content
- split the generated content into separate cards
- and so on...
What would be extremely cool is the ability to design a workflow where I not only define the generic prompts, but the output is automatically split into separate cards and the prompt is then applied to each of those resulting cards, and the process repeats, maybe with a different prompt per level.
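To make it concrete, here's a rough TypeScript sketch of the loop I have in mind. It's only an illustration of the idea, not Caret's API: `callLlm`, `splitIntoItems` and `Card` are made-up names, and the LLM call is stubbed out.

```ts
// Sketch of the requested drill-down automation (hypothetical names, not Caret's API).

interface Card {
  prompt: string;
  response: string;
  children: Card[];
}

// Stub: wire this up to whatever LLM endpoint you actually use.
async function callLlm(prompt: string): Promise<string> {
  return "- Option A\n- Option B\n- Option C"; // placeholder response
}

// Split one generated response into separate "cards", one per list item.
function splitIntoItems(response: string): string[] {
  return response
    .split("\n")
    .map((line) => line.replace(/^[-*]\s*/, "").trim())
    .filter((line) => line.length > 0);
}

// Recursively drill down: apply the current level's prompt to the topic,
// split the result into items, then repeat on each item with the next
// level's prompt.
async function drillDown(topic: string, promptsPerLevel: string[], level = 0): Promise<Card> {
  const prompt = `${promptsPerLevel[level]}\n\nTopic: ${topic}`;
  const response = await callLlm(prompt);
  const card: Card = { prompt, response, children: [] };

  if (level + 1 < promptsPerLevel.length) {
    for (const item of splitIntoItems(response)) {
      card.children.push(await drillDown(item, promptsPerLevel, level + 1));
    }
  }
  return card;
}

// Example usage with one generic prompt per level (weekend-trip style exploration).
drillDown("weekend trip", [
  "Where could I go? List the options.",
  "What are the main things to see there? List them.",
  "Describe this in detail.",
]).then((tree) => console.log(JSON.stringify(tree, null, 2)));
```

The only thing a workflow definition would need to capture is that list of per-level prompts; the splitting and recursion are the same every time.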
Here is the resulting partial map of the concept: https://youtu.be/1_mnhLBNNPQ Of course, in an ideal world I would have all the concepts extracted and explained, but for the purpose of this video I have focused only on a few randomly selected paths. Mind, I was using the https://github.com/rpggio/obsidian-chat-stream plugin as I noticed it kept the context of the path better than your plugin. Not sure why that is.
I also used the https://github.com/cycsd/obsidian-card-note plugin for quickly extracting the content of a generated card into multiple child cards.
Hope the above opens up some ideas on how powerful this could become.
@macin I really appreciate all the thoughts and ideas here!
Candidly I don't think I'll build this functionality into this plugin within the near future. BUT I've actually started work on a web app version of Caret and that's where I'll be building more advanced functionality like this. The Obsidian Plugin will continue being supported and stay free and local-first!
I'm happy to continue this discussion as if it were a feature suggestion for the web app version :)
Re: "I noticed it kept the context of the path better than your plugin. Not sure why that is.". If you could provide a side by side of where Caret fell short that would be great! Definitely possible there is a bug here. For any small amount of context it should work reasonably well
@jcollingj I can share my thoughts on this feature, even if it is outside of Obsidian. Although I'd probably stick with the privacy-first approach of Obsidian.
As for the context... I will test it out in detail. Just to confirm, when you construct the context for the LLM request, are you taking the full path from the current leaf to the top, or are you taking all the branches? I hope it's the former, but it wouldn't hurt to confirm.
Ya all good if you prefer the Obsidian version! The Caret Obsidian Plugin will definitely be getting some more love soon, hopefully after I've gotten the Caret web app off the ground. Goal is to have the web app fund development for both.
For context - The context is the full path from the current leaf to the top. If there are multiple branches it will pull context from just the longest branch.
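In rough pseudocode the selection looks something like this (a simplified illustration of the behaviour described above, not the actual plugin code; the node shape and helper names are made up):

```ts
// Illustration only: assemble context by walking from the current leaf to the top.

interface CanvasNode {
  id: string;
  text: string;
  parents: CanvasNode[]; // nodes with edges pointing into this one
}

// Longest ancestor chain above a node (1 for a root node).
function depth(node: CanvasNode): number {
  return node.parents.length === 0 ? 1 : 1 + Math.max(...node.parents.map(depth));
}

// Walk upward from the leaf; when several branches merge, keep only the
// longest one. Returns the card texts ordered top -> leaf.
function contextPath(leaf: CanvasNode): string[] {
  const path: string[] = [];
  let node: CanvasNode | undefined = leaf;
  while (node) {
    path.push(node.text);
    node = node.parents.slice().sort((a, b) => depth(b) - depth(a))[0];
  }
  return path.reverse();
}
```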
I'm going to close this issue. For brainstorming / swapping ideas on that feature please come share them in the discord! https://discord.gg/8FyGfcH24N
@jcollingj if you had it implemented in the app first, I could use the app to do my workflows... But there would need to be two conditions
Thanks for the input @macin. We'll see how it shakes out!
Hi, just discovered Caret and it's looking great :) Good piece of work!
I was wondering if you already have an idea of how the following workflow could be solved with Caret... The content might differ, but the steps are always very similar, hence the idea of a predefined drill-down workflow:
Imagine you are planning a trip for a weekend. The first thing is to ask what the options are, then explore each option in more and more detail in a drill-down fashion, more like a top-to-bottom exploration.
E.g.:
User: Where to go for a holiday trip?
ChatGPT: You could go to London, Paris or Barcelona
In simple terms, this technique is drilling down in the same fashion for each option returned by the LLM. Is it already achievable with Caret? Obviously, the above is the end result of a conversation. I'm wondering how to speed up getting from the point of asking the question to the point where each option is thoroughly drilled down and explained in detail. Not to mention that each question/response should be a dedicated canvas card.
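To sketch the "dedicated canvas card per question/response" part: the drill-down tree could be written out as one text node per Q/A pair, with edges between levels. This loosely follows the JSON Canvas format that Obsidian's .canvas files use and builds on the hypothetical `Card` tree from the sketch earlier in the thread; the field values here are illustrative only.

```ts
// Turn a (hypothetical) Card tree into a .canvas-style object:
// one text node per question/response pair, edges from parent to child.

interface CanvasFile {
  nodes: { id: string; type: "text"; text: string; x: number; y: number; width: number; height: number }[];
  edges: { id: string; fromNode: string; toNode: string }[];
}

function toCanvas(
  card: Card, // Card type from the drill-down sketch above
  canvas: CanvasFile = { nodes: [], edges: [] },
  level = 0,
  parentId?: string
): CanvasFile {
  const id = `node-${canvas.nodes.length}`;
  canvas.nodes.push({
    id,
    type: "text",
    text: `**${card.prompt}**\n\n${card.response}`,
    x: level * 500,               // one column per drill-down level
    y: canvas.nodes.length * 250, // simple vertical stacking
    width: 400,
    height: 200,
  });
  if (parentId) {
    canvas.edges.push({ id: `edge-${canvas.edges.length}`, fromNode: parentId, toNode: id });
  }
  for (const child of card.children) {
    toCanvas(child, canvas, level + 1, id);
  }
  return canvas;
}

// JSON.stringify(toCanvas(tree)) could then be saved as a .canvas file in the vault.
```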