Apologies, this is more of a question. If you define the embedding model
EMBEDDING_MODEL="openai"
But then use different models for different documents
LLM_MODELS="gpt-3.5, gpt-4"
DocA uses gpt-3.5
DocB uses gpt-4
Is this generally acceptable when querying the GRAG (graph RAG), or is it best to stick to one model across the graph?
I suspect the embeddings generated by different models, like GPT-3.5 and GPT-4, might not be directly comparable or interoperable, due to differences in model architecture, training data, and the resulting vector spaces.
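To illustrate what I mean, here's a minimal sketch (the dimensions are hypothetical, just stand-ins for two different embedding models): even computing a similarity score breaks down when the models produce vectors of different sizes, and when the sizes happen to match, the vectors still come from unrelated coordinate systems, so a similarity score between them carries no meaning.

```python
import numpy as np

# Hypothetical embedding sizes for two different models
# (actual dimensions depend on which embedding model is used).
emb_doc_a = np.random.rand(1536)  # DocA, embedded by model A
emb_doc_b = np.random.rand(3072)  # DocB, embedded by model B

# Cosine similarity is only defined for vectors of equal dimension:
try:
    score = emb_doc_a @ emb_doc_b
except ValueError as err:
    print("Cannot compare across models:", err)

# Even with matching dimensions, each model's vector space has its own
# geometry, so cross-model similarity scores are not semantically valid.
```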
Thanks in advance