ju-bezdek / langchain-decorators

syntactic sugar 🍭 for langchain
MIT License

llm_prompt returns boolean instead of pydantic model #15

Open mbalty opened 6 months ago

mbalty commented 6 months ago

Hello,

Thank you for building this!! I am having an issue where, on many occasions, even though my function is annotated to return a pydantic model, the llm_prompt-decorated function returns a boolean (true/false). When I look at the logs, the proper JSON object is printed at the end, yet the function still returns true, so I am assuming it's a bug. I solved it with a retry for now. Any ideas why this would happen?

Mihai

ju-bezdek commented 6 months ago

So partly, yes... I've noticed that if the JSON could not be parsed, it returns False. I believe this was intended in the early days to allow you to test:

if not llm_output:
    # do something

I've run into this on several occasions and didn't like it, because the resulting error is often something misleading like "None has no key"...

But since the other option is to raise an exception anyway, I didn't look into it yet.
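
In the meantime, a caller can guard against that False sentinel with an explicit type check. A minimal sketch, using a hypothetical my_prompt function and MyModel class (the decorator argument mirrors the one used later in this thread):

from langchain_decorators import llm_prompt
from pydantic import BaseModel

class MyModel(BaseModel):
    answer: str

@llm_prompt(output_parser="pydantic")
def my_prompt(text: str) -> MyModel:
    """Answer the question: {text}

    {FORMAT_INSTRUCTIONS}
    """
    return

result = my_prompt(text="What is syntactic sugar?")
if not isinstance(result, MyModel):
    # on a parsing failure, `result` is False rather than a MyModel instance
    raise ValueError("LLM output could not be parsed into MyModel")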

Can you compare the original JSON and the retried one? I assume there will be a difference, which would suggest your prompt could be improved.

If you can share the prompt template, the outputs, and the LLM you're using, I might be able to give you a hint.

mbalty commented 6 months ago

The thing is that it isn't consistent. That is, it may work if I run the same prompt twice. This is the output type:

# imports added for completeness; DocType and ALL_DOCTYPES come from my own module
from typing import List

from pydantic import BaseModel, Field

class DocSegmentReturn(BaseModel):
    type: DocType = Field(description=f"The type of document. Possible values are: {ALL_DOCTYPES}")
    first_name: str = Field(description="The first name of the person the document refers to")
    last_name: str = Field(description="The last name of the person the document refers to")
    languages: List[str] = Field(description="The languages the document is written in")
    has_english_translation: bool = Field(description="Whether the document has an English translation or not")

DocType is a string enum.
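
For reference, a string enum of that shape would look roughly like this (the member values below are placeholders; the real DocType values aren't shown here):

from enum import Enum

class DocType(str, Enum):
    # placeholder members; the actual document types are whatever ALL_DOCTYPES lists
    PASSPORT = "passport"
    CONTRACT = "contract"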

And this is the function:

@llm_prompt(
    llm=llm_model_gpt4,
    verbose=VERBOSE_LLM,
    retry_on_output_parsing_error=True,
    output_parser="pydantic",
)
def document_segmentation(document: str, all_doctypes=dm.ALL_DOCTYPES) -> dm.DocSegmentReturn:
    """You are a document parsing system. You need to extract the following attributes from the document:
    1. The "type" of document. Possible values are: {all_doctypes}
    2. The "first_name" of the person the document refers to.
    3. The "last_name" of the person the document refers to.
    4. The "languages" the document is written in.
    5. Whether or not it "has_english_translation".
    {document}

    {FORMAT_INSTRUCTIONS}
    """
    return

ju-bezdek commented 6 months ago

Please, can you provide an example input/call?

Also, can you please provide the raw LLM output for both successful and unsuccessful calls? If you can't share the data, please let me know whether they are the same.

Two things come to my mind...

First, you should probably add a line telling the model that the {FORMAT_INSTRUCTIONS} block is an instruction to follow. For example:

Always reply in this format:
{FORMAT_INSTRUCTIONS}
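
Applied to the document_segmentation prompt above, the end of the docstring would then read roughly:

    {document}

    Always reply in this format:
    {FORMAT_INSTRUCTIONS}
    """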

The other thing could be the LLM temperature... Seeing that you have defined your own LLM, are you sure you have set temperature = 0?
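
For example, if llm_model_gpt4 were a langchain ChatOpenAI instance (just an assumption, the thread doesn't say which class you're using), a deterministic setup would be:

from langchain.chat_models import ChatOpenAI

# temperature=0 makes sampling as deterministic as the API allows,
# which helps the model stick to the requested JSON format
llm_model_gpt4 = ChatOpenAI(model_name="gpt-4", temperature=0)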

mbalty commented 6 months ago

Thank you @ju-bezdek! Your suggestions helped; the rate at which I need to retry has dropped drastically.