Closed shawPLUSroot closed 1 year ago
user: "today is a sunny day"
bot: "ok, today is sunny."
...some conversations later...
user: "whats today's weather?"
program: gets the embedding for the user's input, finds that this input is very similar to "today is a sunny day", and includes the original exchange in the initial prompt for reference.
bot: "today is sunny."
So if I'm understanding both your code (only took a brief look) and your example correctly:
- The program gets the embedding for your current input. (It caches/saves this as chat history.)
- The program finds past inputs and responses that are similar to your current input by comparing the embeddings.
- The program sends your current input + the past strings with similar embeddings as the prompt to ChatGPT, thus faking memory.
Am I correct? Thanks.
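A minimal sketch of those three steps, assuming `embed()` stands in for a real embedding API (e.g. an OpenAI text-embedding endpoint); here it is a toy bag-of-words stub over a tiny fixed vocabulary, so only the retrieval logic is illustrative:

```python
import math
import re

def embed(text):
    # Toy stand-in for a real embedding model: bag-of-words counts
    # over a tiny fixed vocabulary (assumption for this sketch only).
    vocab = ["today", "sunny", "weather", "day", "is"]
    words = re.findall(r"[a-z]+", text.lower())
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is zero.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

history = []  # list of (user_text, bot_text, embedding) tuples

def remember(user_text, bot_text):
    # Step 1: embed the input and cache it alongside the exchange.
    history.append((user_text, bot_text, embed(user_text)))

def build_prompt(current_input, top_k=1):
    # Step 2: rank past exchanges by similarity to the current input.
    q = embed(current_input)
    ranked = sorted(history, key=lambda h: cosine(q, h[2]), reverse=True)
    # Step 3: prepend the most similar exchange(s) to the prompt.
    context = "\n".join(f"user: {u}\nbot: {b}" for u, b, _ in ranked[:top_k])
    return f"{context}\nuser: {current_input}\nbot:"

remember("today is a sunny day", "ok, today is sunny.")
prompt = build_prompt("whats today's weather?")
```

In a real implementation the model only ever sees the assembled `prompt` string; the "memory" is just which past exchanges the similarity ranking chose to paste in.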
yes
I'm really curious about this, since as far as I know embeddings can only be used to calculate the similarity between texts. Since the whole conversation is sent as the input query to the gpt-3.5-turbo model, I really can't find any connection to your instruction, even after checking the printed logs.