Kirushikesh opened 8 months ago
Further, I raised a PR to address the issue: #657
+1, getting the same error when trying out Google Gemini models through langchain-google-genai.
@Kirushikesh but removing the temperature arg impacts OpenAI behavior, right?
@joy13975, when initialising the OpenAI LLM we already provide the temperature there, e.g. `llm = ChatOpenAI(temperature=0)`, and the temperature in `.generate_prompt()` is an optional parameter anyway.
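A minimal sketch of that point, assuming langchain_openai's ChatOpenAI and an OPENAI_API_KEY in the environment:

```python
from langchain_core.prompt_values import StringPromptValue
from langchain_openai import ChatOpenAI

# Temperature is fixed when the model is constructed; no per-call
# temperature is required afterwards.
llm = ChatOpenAI(temperature=0)

# .generate_prompt() takes a list of PromptValue objects and works fine
# without a temperature kwarg.
result = llm.generate_prompt([StringPromptValue(text="Say hi.")])
print(result.generations[0][0].text)
```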
Does anyone have an update on this bug?
Hey @RazHadas, there are two PRs raised for this issue. You can check them out or wait until we merge them.
This issue should probably not be closed without merging the fixes. I am facing the same issue using langchain-google-genai.
Thanks for bringing it to our attention @LostInCode404, reopening this.
Describe the bug
LangchainLLMWrapper has a .generate_text() function, which in turn calls .generate_prompt() on the underlying LLM. LangchainLLMWrapper passes a 'temperature' parameter to .generate_prompt(), which causes the following issues:
Since the temperature can already be passed as a parameter when initialising a LangChain LLM, it does not need to be supplied again by LangchainLLMWrapper.
For example, in HuggingFacePipeline you can specify the temperature at initialization using:

```python
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, temperature=1)
```
Or, when using an IBM LLM, you can specify the temperature like this:
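A sketch of what that might look like, assuming the langchain-ibm WatsonxLLM integration (the model_id and project_id are placeholders, and credentials are read from the WATSONX_APIKEY environment variable):

```python
from langchain_ibm import WatsonxLLM

# Temperature is supplied through the generation params at initialisation,
# not per call.
watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # placeholder model
    url="https://us-south.ml.cloud.ibm.com",
    project_id="<your-project-id>",          # placeholder project
    params={"temperature": 0.7},
)
```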
Ragas version: 0.1.1
Python version: 3.10.6
Code to Reproduce
The following code shows why the 'temperature' parameter does not affect the response of a HuggingFace LLM:
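A minimal sketch of the reproduction, assuming ragas 0.1.1, langchain_community's HuggingFacePipeline, and a StringPromptValue prompt:

```python
from langchain_community.llms import HuggingFacePipeline
from langchain_core.prompt_values import StringPromptValue
from ragas.llms import LangchainLLMWrapper
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

ragas_llm = LangchainLLMWrapper(HuggingFacePipeline(pipeline=pipe))

# temperature=0 is invalid for HuggingFace sampling, yet no error is
# raised: the value never reaches the pipeline.
result = ragas_llm.generate_text(StringPromptValue(text="Hello"), temperature=0)
print(result.generations[0][0].text)
```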
In the above code I initialised HuggingFacePipeline with the gpt-2 model, wrapped it in ragas' LangchainLLMWrapper, and passed 'temperature=0' when calling .generate_text(). Ideally this should raise an error, because a temperature of 0 is not accepted by HuggingFace.
You can also check by passing temperature=99 to .generate_text(); no exception is raised for this absurdly high value either. Thus it is evident that the temperature passed to .generate_text() does not affect the HuggingFace LLM. Moreover, the user can set the temperature in the pipeline() call, so there is no need for an additional temperature in .generate_text().
The following code shows why passing 'temperature' raises an error with an IBM LLM:
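Again a sketch, assuming the langchain-ibm WatsonxLLM integration with the same placeholders as above:

```python
from langchain_core.prompt_values import StringPromptValue
from langchain_ibm import WatsonxLLM
from ragas.llms import LangchainLLMWrapper

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # placeholder model
    url="https://us-south.ml.cloud.ibm.com",
    project_id="<your-project-id>",          # placeholder project
    params={"temperature": 0.7},
)

ragas_llm = LangchainLLMWrapper(watsonx_llm)

# Raises, because the wrapper forwards temperature=... to
# .generate_prompt(), which this integration does not accept.
ragas_llm.generate_text(StringPromptValue(text="Hello"))
```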
As the error trace shows, the LangChain-wrapped IBM LLM does not support 'temperature' as an additional parameter to .generate_prompt(). The error goes away when the temperature parameter is not passed. The same error occurs when calling ragas' evaluate() function with the same IBM LLM.
Expected behavior
A clean solution to this problem would be to remove the temperature parameter from the .generate_prompt() call in LangchainLLMWrapper.
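For illustration only, a hedged sketch of what such a change could look like inside the wrapper's generate_text (this is not the actual diff from the raised PRs):

```python
# Set the temperature on the underlying LLM when it exposes that attribute,
# instead of forwarding it to .generate_prompt(), which not every LangChain
# integration accepts as a keyword argument.
if temperature is not None and hasattr(self.langchain_llm, "temperature"):
    self.langchain_llm.temperature = temperature
result = self.langchain_llm.generate_prompt(
    prompts=[prompt] * n, stop=stop, callbacks=callbacks
)
```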