I want to test the model's few-shot in-context learning performance, but I've run into an issue. I prepend instruction/response few-shot examples to the question before calling llm.generate, yet the generated result stays the same. No matter how many examples I add, the inference output is identical to the zero-shot result. Could you help me figure out what is going wrong?
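For context, here is a minimal sketch of what I am doing, assuming a vLLM-style LLM / SamplingParams setup (the model name, few-shot examples, and question below are placeholders, not my actual data):

```python
from vllm import LLM, SamplingParams

# Placeholder instruction/response few-shot examples
few_shot_examples = [
    ("Translate to French: Hello", "Bonjour"),
    ("Translate to French: Thank you", "Merci"),
]
question = "Translate to French: Good night"

# Build the few-shot prompt by prepending the examples to the question
prompt_parts = []
for instruction, response in few_shot_examples:
    prompt_parts.append(f"Instruction: {instruction}\nResponse: {response}")
prompt_parts.append(f"Instruction: {question}\nResponse:")
few_shot_prompt = "\n\n".join(prompt_parts)

# Zero-shot prompt for comparison
zero_shot_prompt = f"Instruction: {question}\nResponse:"

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # placeholder model name
sampling_params = SamplingParams(temperature=0.0, max_tokens=64)

# Generate for both prompts and compare the outputs
outputs = llm.generate([zero_shot_prompt, few_shot_prompt], sampling_params)
for out in outputs:
    print(repr(out.prompt))
    print(out.outputs[0].text)
```

With this kind of setup, the text generated for the few-shot prompt comes out identical to the zero-shot one, which is the behavior I am asking about.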