Open Aisuko opened 2 weeks ago
As we discussed, let's see how we can use this feature to solve this issue.
The Milvus DB cannot directly search strings with a complex algorithm; all it can do is use LIKE, as other ordinary relational databases do. However, the Milvus library ships embedding functions that generate embedding vectors. I think that would help, since we would no longer need an extra model or library for tokenizing.
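To make the LIKE-vs-vector-search distinction concrete, here is a minimal, self-contained sketch. The `toy_embed` function is a hypothetical stand-in for the embedding functions bundled with the Milvus client library (it just hashes character trigrams, while real embeddings capture semantics), and the in-memory `index` list stands in for a vector collection:

```python
import hashlib
import math

def toy_embed(text: str, dim: int = 64) -> list[float]:
    # Hypothetical stand-in for a library-provided embedding function:
    # hash character trigrams into a fixed-size, L2-normalised vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalised, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# In-memory stand-in for a vector-db collection: (text, vector) pairs.
docs = [
    "machine learning courses",
    "civil engineering degree",
    "data science masters",
]
index = [(d, toy_embed(d)) for d in docs]

# A query that LIKE would miss (no exact substring match) but vector
# similarity handles, because the texts share most of their trigrams.
query = toy_embed("machine learning course")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])
```

The search side of the real feature would work the same way, only with the collection and similarity computation living inside Milvus rather than in Python.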
As we discussed, we can use the embedding API of the inference engine. @Micost
Note: We only consider the dataset for the RAG feature, and we only use the dataset https://www.kaggle.com/datasets/aisuko/rmit-2024-postgraduate-study-areas. There is no need to support other formats (more rows and columns) at the current stage. @Micost @jinronga
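Since only that one CSV needs to be supported, the loading step can stay simple: read the rows with the standard library and flatten each one into a single text chunk for embedding. A sketch, with a hypothetical two-row sample standing in for the Kaggle file (the real column names may differ):

```python
import csv
import io

# Hypothetical sample standing in for the Kaggle CSV
# (rmit-2024-postgraduate-study-areas); column names are assumptions.
sample = io.StringIO(
    "study_area,description\n"
    "Data Science,Covers machine learning and statistics\n"
    "Engineering,Covers structures and systems\n"
)

rows = list(csv.DictReader(sample))

# Flatten each row into one "column: value" chunk. Because we target
# exactly one dataset, no generic schema handling is needed yet.
chunks = [", ".join(f"{k}: {v}" for k, v in row.items()) for row in rows]
for c in chunks:
    print(c)
```

Each chunk would then be passed to the embedding step and inserted into the collection alongside its original text.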
Contact Details (optional)
No response
What feature are you requesting?
Currently, we add some data after we create the vector DB's collection. Here, we do not need to do that: we can load the data from the frontend. By default, we don't have to provide any special context, only the chat history.
We also don't need to convert the input to vectors with our vector DB.
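The resulting division of work can be sketched as follows. This is only an illustration of the flow, not the real implementation: `embed_via_inference_engine` is a stub for the inference engine's embedding API mentioned above (the real call would be an HTTP request whose endpoint and payload shape are not specified here), and the backend, not the vector DB, is responsible for turning frontend text into vectors:

```python
from typing import List

def embed_via_inference_engine(texts: List[str]) -> List[List[float]]:
    # Stub for the inference engine's embedding API; the real endpoint
    # and response format are assumptions, not the actual interface.
    return [[float(len(t)), float(sum(map(ord, t)) % 97)] for t in texts]

def handle_frontend_upload(rows: List[str]) -> List[dict]:
    # The frontend sends raw text rows; the backend embeds them via the
    # inference engine (not the vector DB) and returns records that are
    # ready to be inserted into the collection.
    vectors = embed_via_inference_engine(rows)
    return [{"text": t, "vector": v} for t, v in zip(rows, vectors)]

records = handle_frontend_upload(["hello", "world"])
print(len(records))
```

With this split, the vector DB only stores and searches vectors; all embedding happens through the inference engine.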