langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com

. #10955

Closed Tejaswgupta closed 1 year ago

Tejaswgupta commented 1 year ago

.

dosubot[bot] commented 1 year ago

🤖

Hello,

The error you're encountering, OutputParserException, is raised when the output from your custom LLM doesn't match the format the output parser expects. The parse_result method on BaseLLMOutputParser receives a list of Generation objects and parses the text of the first Generation in the list. LangChain builds that list for you: the string returned by your LLM's _call method is wrapped into a Generation before any parser sees it.
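As a minimal sketch of that hand-off (the Generation import is real; the string is just an illustrative stand-in for your model's output):

from langchain.schema import Generation

# LangChain wraps the raw string returned by _call into Generation objects:
generations = [Generation(text="The answer is 42.")]

# BaseOutputParser.parse_result then effectively does:
#     return self.parse(result[0].text)
text = generations[0].text
print(text)  # -> The answer is 42.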

In your CustomLLM class, the _call method returns the result of client.predict() directly, which might not be a string. Note the method's -> str annotation: _call is expected to return a plain string, and LangChain wraps that string into Generation objects itself, so returning anything else is what trips up the parser. Here's an example of how you can modify your _call method:

from typing import Any, List, Optional

def _call(
    self,
    prompt: str,
    stop: Optional[List[str]] = None,
    **kwargs: Any,
) -> str:
    """Call the Together endpoint via the Gradio client."""
    result = client.predict(
        prompt,  # str in 'Question' Textbox component
        0.95,    # int | float (numeric value between 0.05 and 1.0)
        0.5,     # int | float (numeric value between 0.1 and 1.0)
        50,      # int | float (numeric value between 1 and 50)
        300,
        fn_index=0,
    )
    # _call must return a plain string; LangChain wraps it into a
    # Generation object before any output parser sees it.
    if isinstance(result, str):
        return result
    raise ValueError(
        "Expected client.predict() to return a string. "
        f"Received {type(result).__name__}: {result!r}"
    )
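For this snippet to run, client must already be a connected Gradio client. A minimal setup sketch, with a placeholder space URL you would replace with your actual endpoint:

from gradio_client import Client

# Placeholder URL; point this at your actual Together/Gradio space.
client = Client("https://your-space.hf.space/")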

This will ensure that your custom LLM returns a plain string, which LangChain can wrap into Generation objects for the parser, and should prevent the OutputParserException from being raised.
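As a quick sanity check before wiring it into a chain, you can call the LLM directly and confirm you get a string back (CustomLLM refers to your class from the issue):

llm = CustomLLM()

# Calling the LLM routes through _call, so this exercises the fix:
text = llm("What is the capital of France?")
assert isinstance(text, str), f"unexpected type: {type(text)}"
print(text)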

I hope this helps! Let me know if you have any other questions.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.