arminta7 opened this issue 1 year ago
Could this be helpful? https://openai.com/blog/new-and-improved-embedding-model/
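For reference, here's a rough sketch of how that embedding model could rank graph blocks by semantic similarity to a question. This assumes the current `openai` npm package (v4-style API) and the `text-embedding-ada-002` model from that post; the block texts would come from wherever the plugin collects them, so this is only an illustration, not how the plugin works today:

```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Embed the question plus candidate block texts in one call,
// then return the blocks ranked by similarity to the question.
async function rankBlocks(question: string, blocks: string[]): Promise<string[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: [question, ...blocks],
  });
  const [q, ...vecs] = res.data.map((d) => d.embedding);
  return blocks
    .map((text, i) => ({ text, score: cosine(q, vecs[i]) }))
    .sort((a, b) => b.score - a.score)
    .map((b) => b.text);
}
```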
This is a really good idea. I've been thinking about something similar, where you could provide a Logseq query or search whose result pages/blocks get injected into the prompt: https://github.com/briansunter/logseq-plugin-gpt3-openai/issues/39
I also had some ideas for pulling in web search results, e.g. from Wikipedia: https://github.com/briansunter/logseq-plugin-gpt3-openai/issues/36
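For the Wikipedia case, one simple option is the public REST summary endpoint. A minimal sketch (the article title and error handling are simplified, and this isn't tied to the plugin's actual code):

```ts
// Fetch a short plain-text summary for a Wikipedia article title.
async function wikipediaSummary(title: string): Promise<string> {
  const url = `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(title)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Wikipedia request failed: ${res.status}`);
  const data = await res.json();
  return data.extract ?? "";
}
```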
Would it be possible to do a mem-style feature by first searching within the graph and then sending the relevant results as part of the prompt to GPT-3?
This might even be more useful or effective than fine-tuning, since fine-tuning doesn't teach the model new facts well.
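If something like that gets built, the flow would roughly be: search the graph (via a Logseq query, full-text search, or the embedding ranking sketched above), then inject the top results into the prompt as context. A minimal sketch, assuming the OpenAI chat completions API; the model name and function signature are just illustrative, and the `blocks` argument stands in for whatever the graph search returns:

```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// `blocks` is the text of the most relevant blocks found by searching the
// graph (e.g. the ranked output of the embedding sketch above).
async function askWithGraphContext(question: string, blocks: string[]): Promise<string> {
  // Inject the top few results into the prompt as context.
  const context = blocks.slice(0, 5).join("\n---\n");
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content: "Answer using only the provided notes. If the notes don't contain the answer, say so.",
      },
      { role: "user", content: `Notes:\n${context}\n\nQuestion: ${question}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```

Grounding the answer in retrieved notes like this also tends to reduce made-up answers compared to relying on fine-tuned weights alone.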