Closed Torhamilton closed 1 year ago
Setting vector queries as the default has the adverse effect of reducing context memory, because it can fetch out-of-context data. I've set it up so that when you call a query command, the fetched data is used only in the next answer and then disappears from context memory.
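A minimal sketch of that one-turn behavior (the class and method names here are hypothetical illustrations, not the app's actual API):

```python
# Sketch: vector-query results live for exactly one answer, then are dropped.
# EphemeralContext, add_query_results, build_prompt are assumed names.

class EphemeralContext:
    def __init__(self):
        self.persistent = []   # normal chat history, kept across turns
        self.one_shot = []     # vector-query results, used once

    def add_query_results(self, chunks):
        self.one_shot = list(chunks)

    def build_prompt(self, user_message):
        parts = self.persistent + self.one_shot + [user_message]
        self.one_shot = []  # consumed: gone after this answer
        return "\n".join(parts)

ctx = EphemeralContext()
ctx.persistent.append("system: be helpful")
ctx.add_query_results(["doc: vector hit A"])
first = ctx.build_prompt("user: question 1")   # includes the vector hit
second = ctx.build_prompt("user: question 2")  # vector hit is gone
```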
Yet another reason for a robust vector DB. I will test out the new update and see how it fares. Maybe this should be an admin-level decision on how the system behaves? We should implement a switch.
Added an admin value to the status column of the MySQL users table. It is now possible to identify who is an admin in the chat, and this can be changed in the admin panel. What's next?
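A rough sketch of what that admin check could look like. This uses in-memory SQLite as a stand-in for MySQL, and the exact schema (column names, status values) is assumed, not taken from the repo:

```python
# Sketch of an admin check against a users table with a `status` column.
# In-memory SQLite stands in for MySQL; schema details are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
db.execute("INSERT INTO users (name, status) VALUES ('alice', 'admin'), ('bob', 'user')")

def is_admin(conn, user_name):
    row = conn.execute(
        "SELECT status FROM users WHERE name = ?", (user_name,)
    ).fetchone()
    return row is not None and row[0] == "admin"

print(is_admin(db, "alice"))  # True
print(is_admin(db, "bob"))    # False
```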
Implement /bypass to query the LLM directly without hitting the vector store. Otherwise, chats must check the vector store for embeddings before interacting with the LLM. The purpose of this app, I believe, is to grant longer context memory; being forced to add /query in front of every chat is tiresome. Embeddings can prompt the LLM how to behave on chat initiation.
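The routing that /bypass implies could be sketched like this (vector_search and llm_answer are hypothetical placeholders, not functions from the project):

```python
# Sketch: "/bypass" sends the message straight to the LLM; any other
# message is first enriched with vector-store results.
# vector_search and llm_answer are placeholder stand-ins.

def vector_search(text):
    return [f"embedding-hit for: {text}"]  # placeholder retrieval

def llm_answer(prompt):
    return f"LLM({prompt})"                # placeholder model call

def handle_message(text):
    if text.startswith("/bypass "):
        return llm_answer(text[len("/bypass "):])
    context = vector_search(text)
    return llm_answer("\n".join(context + [text]))

handle_message("/bypass hello")   # no vector lookup
handle_message("hello")           # vector context prepended
```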