-Bumped langchain, tiktoken, and pytorch to their latest available versions and updated the code using these libraries accordingly.
-Changed the embedding model to TaylorAI/bge-micro-v2, which has a 512-token context window (the old model had 256) and is also faster.
-Changed the semantic search display in the Dashboard to a method that shows the raw text.
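
The doubled context window mainly affects how documents are chunked before embedding. As a rough sketch (not the project's actual code), using whitespace words as a stand-in for tokens — a real pipeline would count tokens with the model's own tokenizer:

```python
# Hypothetical sketch of chunking for the embedding step.
# bge-micro-v2 accepts up to 512 tokens, double the previous
# model's 256, so each document needs half as many chunks.
# Whitespace words approximate tokens here for illustration only.

def chunk_text(text: str, max_tokens: int = 512) -> list[str]:
    words = text.split()
    return [
        " ".join(words[i:i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

doc = ("word " * 1000).strip()
chunks = chunk_text(doc, max_tokens=512)
# A 1000-word document fits in 2 chunks at a 512-token window,
# versus 4 chunks at the old 256-token window.
assert len(chunks) == 2
assert len(chunk_text(doc, max_tokens=256)) == 4
```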