logancyang / obsidian-copilot

THE Copilot in Obsidian
https://www.obsidiancopilot.com/
GNU Affero General Public License v3.0
2.98k stars 207 forks

o1-like reasoning chains #647

Closed wwjCMP closed 2 weeks ago

wwjCMP commented 1 month ago

Is it possible to implement o1-like reasoning chains in obsidian-copilot?

https://github.com/bklieger-groq/g1

logancyang commented 1 month ago

I believe that project is just applying CoT at prompting time? If so, yes for sure. You can already do it using custom prompts. Perhaps it could make an interesting case study as a YouTube video.

But sure, Copilot can integrate CoT prompting. One side effect is extra token cost.
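
To make the "custom prompts" point concrete, here is a minimal sketch of CoT at prompting time: wrap the user's question in an instruction that asks the model to reason step by step before answering. The template wording and the function names (`build_cot_prompt`, `answer_with_cot`, `call_llm`) are hypothetical, not part of Copilot's actual API.

```python
# Minimal sketch of chain-of-thought (CoT) prompting at prompt time.
# `call_llm` is a hypothetical stand-in for whatever chat-completion
# call the plugin makes; only the prompt wrapping is shown here.

COT_TEMPLATE = (
    "Think through the problem step by step. Number each reasoning "
    "step, then give the final answer on a line starting with "
    "'Answer:'.\n\n"
    "Question: {question}"
)

def build_cot_prompt(question: str) -> str:
    """Return the question wrapped in a chain-of-thought instruction."""
    return COT_TEMPLATE.format(question=question)

def answer_with_cot(question: str, call_llm) -> str:
    """Send a CoT-wrapped prompt to the model.

    Note the side effect mentioned above: both the longer prompt and
    the step-by-step output add to token cost.
    """
    return call_llm(build_cot_prompt(question))
```

In Copilot this would correspond to saving the template text as a custom prompt and applying it to a selection or question.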

wwjCMP commented 1 month ago

> I believe that project is just applying CoT at prompting time? If so, yes for sure. You can already do it using custom prompts. Perhaps it could make an interesting case study as a YouTube video.
>
> But sure, Copilot can integrate CoT prompting. One side effect is extra token cost.

How can I use it in QA mode?

RickySupriyadi commented 1 month ago

Just a thought:

I believe OpenAI o1 is agentic behavior. I started believing this after I saw OpenAI say that the reasoning process uses an unaligned model (uncensored? can't know for sure), and that after the reasoning completes, the actual aligned model reads the reasoning trace and produces a censored summary (which we can see in every chat with o1); after that, the model gives its answer to the user.

user question to o1 -> question passed to a (different, unaligned?) model -> the unaligned model begins reasoning (time and compute are spent) -> reasoning completed, passed back to o1 -> o1 summarizes the reasoning in a censored, user-safe form -> user sees both the answer and the safe reasoning summary.
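
The hypothesized flow above could be sketched as follows. To be clear, this is speculation about o1's internals, not a documented pipeline; every model function here is a stand-in passed by the caller.

```python
def o1_pipeline(question, unaligned_reason, aligned_summarize, aligned_answer):
    """Sketch of the speculated two-model flow (hypothetical, not OpenAI's API).

    - unaligned_reason:   hypothetical unaligned model producing raw reasoning
    - aligned_summarize:  aligned model rewriting reasoning into a safe summary
    - aligned_answer:     aligned model answering, conditioned on the reasoning
    """
    # Step 1: the (hypothetical) unaligned model produces the raw reasoning.
    raw_reasoning = unaligned_reason(question)
    # Step 2: the aligned model rewrites it into the safe summary shown to the user.
    safe_summary = aligned_summarize(raw_reasoning)
    # Step 3: the aligned model answers, conditioned on the raw reasoning.
    answer = aligned_answer(question, raw_reasoning)
    # The user sees only the answer and the sanitized summary,
    # never the raw reasoning itself.
    return answer, safe_summary
```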

It would be impossible for o1 to handle all of this by itself, since an aligned model cannot process requests in an unaligned way.