I am using the TheBloke/Vicuna-13B-v1.3-German-GPTQ model as `load_full_model` in LocalGPT.
When I ask a query, `res` prints the source documents, but the `result` key comes back empty, i.e. the model is not able to generate an answer from the context.
I printed the prompt template: it takes three parameters, `history`, `context`, and `question`. Whenever the prompt is passed to the text-generation pipeline, `context` is empty,
as can be seen in the highlighted text.
Because of this the model returns no answer. I cannot find the bug. Can you help me?
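To narrow this down, here is a minimal sketch of what I believe is happening. The template text and function names below are hypothetical, not LocalGPT's exact code; the point is that when the retriever returns no documents, `context` arrives empty and the model has nothing to ground an answer on:

```python
# Hypothetical three-parameter prompt, mirroring the history/context/question
# shape seen when printing LocalGPT's prompt template.
PROMPT_TEMPLATE = (
    "{history}\n"
    "Use the following context to answer the question.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(history: str, context: str, question: str) -> str:
    # Guard: if the retriever returned nothing, context is empty and the
    # text-generation pipeline will produce an empty/ungrounded result.
    if not context.strip():
        raise ValueError("context is empty: retriever returned no documents")
    return PROMPT_TEMPLATE.format(
        history=history, context=context, question=question
    )
```

Adding a check like this before the pipeline call would at least confirm whether the empty `result` is caused by an empty `context` rather than by the model itself.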
@PromtEngineer