Closed · wwjCMP closed 2 weeks ago
I believe that project is just applying CoT at prompting time? If so, then yes, for sure: you can already do it using custom prompts. Perhaps it could make an interesting case study as a YouTube video.
But sure, Copilot can integrate CoT prompting. One side effect is the extra token cost.
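To make the "custom prompts" route concrete, here is a minimal sketch of what a CoT-wrapping prompt could look like. The template text and the `wrap_with_cot` name are illustrative assumptions, not anything from obsidian-copilot's codebase:

```python
# Hypothetical sketch only: obsidian-copilot's actual custom-prompt
# mechanism may differ. This just shows the CoT-at-prompting-time idea.

COT_TEMPLATE = (
    "Think through the problem step by step before answering.\n"
    "First list your reasoning steps, then give a final answer.\n\n"
    "Question: {question}"
)

def wrap_with_cot(question: str) -> str:
    """Wrap a user question in a chain-of-thought instruction."""
    return COT_TEMPLATE.format(question=question)

print(wrap_with_cot("Why does the sky look blue?"))
```

Note that both the longer instruction and the longer step-by-step response count toward the extra token cost mentioned above.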
How can I use it in QA mode?
Just a thought:
I believe OpenAI o1 is agentic behavior. I started believing that after I saw OpenAI say that in the reasoning process they use an unaligned model (uncensored? can't know for sure), and that after the reasoning is complete the actual aligned model reads the reasoning process and creates a censored summary (we can see that in every chat with o1). After that, the model gives its answer to the user.
User asks o1 a question -> question is passed to a (different, unaligned?) model -> the (unaligned?) model begins reasoning, consuming time and compute -> reasoning completes and is passed back to o1 -> o1 summarizes the reasoning in a censored, user-safe form -> user sees both the answer and the safe reasoning summary.
It would be impossible if o1 handled it all itself, since an aligned model cannot process a request in an unaligned way.
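The hypothesized flow above can be sketched as a two-stage pipeline. To be clear, this is pure speculation matching the description in this comment: OpenAI has not published o1's internals, and both "models" here are stand-in stub functions:

```python
# Speculative sketch of the two-model flow hypothesized above.
# Both functions are stubs; they only mirror the proposed pipeline shape.

def unaligned_reasoner(question: str) -> str:
    # Stand-in for the hidden reasoning model: produces a raw trace.
    return f"Step 1: interpret '{question}'. Step 2: derive an answer."

def aligned_summarizer(raw_reasoning: str) -> str:
    # Stand-in for the user-facing model: reads the raw trace and
    # emits a condensed, safe summary.
    return "Summary: " + raw_reasoning.split(". ")[0] + "."

def answer(question: str) -> dict:
    raw = unaligned_reasoner(question)      # hidden reasoning phase
    safe_summary = aligned_summarizer(raw)  # censored summary shown to user
    return {
        "reasoning_summary": safe_summary,
        "answer": f"Final answer to: {question}",
    }

result = answer("What is 2 + 2?")
print(result["reasoning_summary"])
print(result["answer"])
```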
Is it possible to implement o1-like reasoning chains in obsidian-copilot?
https://github.com/bklieger-groq/g1
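As I understand the linked repo, g1 gets o1-like behavior purely through prompting: the model is asked to emit one reasoning step at a time as JSON with a `next_action` field, and the client loops until the model signals a final answer. A rough sketch of that loop, with the actual LLM call replaced by a mock (the JSON field names follow g1's README; everything else is illustrative):

```python
# Sketch of a g1-style reasoning loop. `mock_model` stands in for a
# real chat-completion call; in g1 this would hit the Groq API.
import json

def mock_model(history):
    # Pretend the model finishes after three reasoning steps.
    step_no = sum(1 for m in history if m["role"] == "assistant") + 1
    return json.dumps({
        "title": f"Step {step_no}",
        "content": "...reasoning for this step...",
        "next_action": "final_answer" if step_no >= 3 else "continue",
    })

def reasoning_chain(question, max_steps=10):
    history = [{"role": "user", "content": question}]
    steps = []
    for _ in range(max_steps):
        reply = json.loads(mock_model(history))
        history.append({"role": "assistant", "content": reply["content"]})
        steps.append(reply["title"])
        if reply["next_action"] == "final_answer":
            break
    return steps

print(reasoning_chain("Is it possible?"))
```

Since this is all prompt-and-loop logic with no special model access, the same pattern would in principle be portable to obsidian-copilot, at the token cost noted above.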