langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

with_structured_output raise NotImplementedError() Version: 0.1.14 #20102

Open xitex opened 3 months ago

xitex commented 3 months ago


Example Code

    model = ChatGroq(model_name="mixtral-8x7b-32768")

    class Joke(BaseModel):
        setup: str = Field(description="The setup of the joke")
        punchline: str = Field(description="The punchline to the joke")

    model_with_structure = model.with_structured_output(Joke, method="json_mode")
    f = model_with_structure.invoke(
        "Tell me a joke about cats, respond in JSON with setup and punchline keys"
    )

Error Message and Stack Trace (if applicable)

    python/3_11/venv/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
      warn_beta(
    Traceback (most recent call last):
      File "/python/3_11/wp_app/src/aibro_langchain.py", line 25
        model_with_structure = model.with_structured_output(Joke, method="json_mode")
      File "/python/3_11/venv/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py", line 110, in warning_emitting_wrapper
        return wrapped(*args, **kwargs)
      File "/python/3_11/venv/lib/python3.11/site-packages/langchain_core/language_models/base.py", line 204, in with_structured_output
        raise NotImplementedError()
    NotImplementedError
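
The traceback bottoms out in `langchain_core/language_models/base.py`, where the base class raises on purpose: `with_structured_output` is declared on the shared base class and each provider package is expected to override it, so an older (or absent) provider package falls through to the base. A simplified sketch of that pattern (class names below are illustrative, not the real LangChain hierarchy):

```python
class BaseChatModel:
    def with_structured_output(self, schema, **kwargs):
        # Base interface: providers that support structured output override this.
        raise NotImplementedError()

class ProviderChatModel(BaseChatModel):
    def with_structured_output(self, schema, **kwargs):
        # A real override would bind a JSON-mode or tool-calling runnable here;
        # this stub just records what it was asked to do.
        return lambda prompt: {"schema": schema.__name__, "prompt": prompt}

# Calling the base implementation reproduces the error in this issue:
try:
    BaseChatModel().with_structured_output(dict)
except NotImplementedError:
    print("NotImplementedError: provider package too old or missing the override")
```

Upgrading the provider package (here, `langchain-groq`) swaps in a subclass that actually overrides the method.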

Description

Try to get structured output from groq
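
Until the provider implements `with_structured_output`, one stopgap is to ask for JSON in the prompt (as in the example code above) and parse/validate the reply yourself. A standard-library-only sketch, where `parse_joke` and the canned `reply` are hypothetical stand-ins for a real model call:

```python
import json

def parse_joke(raw: str) -> dict:
    """Parse a model reply that should be a JSON object with
    'setup' and 'punchline' keys; raise if either is missing."""
    data = json.loads(raw)
    missing = {"setup", "punchline"} - data.keys()
    if missing:
        raise ValueError(f"model reply is missing keys: {sorted(missing)}")
    return data

# Canned reply standing in for model.invoke(...).content:
reply = '{"setup": "Why did the cat sit on the laptop?", "punchline": "To keep an eye on the mouse."}'
joke = parse_joke(reply)
print(joke["punchline"])
```

This loses the automatic schema binding that `with_structured_output` provides, but it works against any chat model that can be prompted to emit JSON.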

System Info

    langchain==0.1.14
    langchain-community==0.0.31
    langchain-core==0.1.40
    langchain-groq==0.0.1
    langchain-openai==0.0.5

macOS, Python 3.11

xitex commented 3 months ago

I'm trying this with Groq, not OpenAI (the tag here may be wrong). Thanks

jovi-s commented 3 months ago

I am getting the same error. It seems the method is just not implemented, but it is covered in the docs:

  1. https://python.langchain.com/docs/use_cases/query_analysis/techniques/routing/#routing-to-multiple-indexes
  2. https://python.langchain.com/docs/use_cases/query_analysis/how_to/multiple_retrievers/#query-analysis

P.S. I am using the OpenAI implementation from langchain_openai

juancalvof commented 3 months ago

I have the same error with version 0.1.15.

Thanks! :)

juancalvof commented 3 months ago

Exploring https://github.com/langchain-ai/langchain-google/blob/main/libs/vertexai/langchain_google_vertexai/chat_models.py#L733 I have found that when I pass this model:

    ## Data model
    class code(BaseModel):
        """Code output"""
        prefix: str = Field(description="Description of the problem and approach")
        imports: str = Field(description="Code block import statements")
        code: str = Field(description="Code block not including import statements")

This way ...

    model = ChatVertexAI(model_name="gemini-1.5-pro-preview-0409", convert_system_message_to_human=True)
    llm_with_tool = model.bind(functions=[code])
    parser = PydanticOutputFunctionsParser(pydantic_schema=code)   

the underlying request sent to Vertex AI is equivalent to:

#!/bin/bash

PROJECT_ID="GCP-PROJECT"
MODEL_ID="gemini-pro"

# Execute the curl command and capture the response
response=$(curl -s \
  -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/publishers/google/models/${MODEL_ID}:generateContent" -d \
$'{
  "contents": {
    "role": "user",
    "parts": {
      "text": "I am passing text key 'foo' to my prompt and want to process it with a function, process_text(...). prior to the prompt. how can I do this using LCEL?"
    }
  },
"tools": {
  "function_declarations": {
    "name": "code",
    "description": "Code output",
    "parameters": {
      "type": "OBJECT",
      "description": "Code output",
      "properties": {
        "prefix": {
          "type": "STRING",
          "description": "Description of the problem and approach"
        }
      },
      "properties": {
        "imports": {
          "type": "STRING",
          "description": "Code block import statements"
        }
      },
      "properties": {
        "code": {
          "type": "STRING",
          "description": "Code block not including import statements"
        },
      },
      "required": "prefix",
      "required": "imports",
      "required": "code"
    },
  }
},
"generation_config": {}
}')

# Print the response
echo "$response"

Gemini returns a 500 error for this request. If I change the type from OBJECT to ARRAY, it works. Not sure if this is related to how this line works: https://github.com/langchain-ai/langchain-google/blob/main/libs/vertexai/langchain_google_vertexai/functions_utils.py
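
For reference, the payload above repeats the `properties` and `required` keys, which most JSON parsers silently collapse to the last occurrence. A valid function declaration uses a single `properties` map and a single `required` array; a sketch of the presumably intended shape (field names taken from the `code` model above, with the caveat that the exact wire format is defined by the Vertex AI API):

```python
import json

# One properties map and one required array, instead of repeated keys:
parameters = {
    "type": "OBJECT",
    "description": "Code output",
    "properties": {
        "prefix": {"type": "STRING", "description": "Description of the problem and approach"},
        "imports": {"type": "STRING", "description": "Code block import statements"},
        "code": {"type": "STRING", "description": "Code block not including import statements"},
    },
    "required": ["prefix", "imports", "code"],
}
print(json.dumps(parameters, indent=2))
```

If the request body really contained the duplicated keys shown above, only the last `properties` entry (`code`) would survive parsing, which by itself could explain a server-side failure.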

william-ingold commented 3 months ago

@xitex

I try this with groq not openai (may be here is wrong tag). Thanks

As @jovi-s said, it isn't implemented for Groq yet (see here), though it is available for Mistral (here). So if you don't need Groq specifically and Mistral works for you (as in your OP), that is an option.

juancalvof commented 2 months ago

Hey! I got the error with OpenAI. The call ends up in the base.py fallback (screenshot omitted).

sepiatone commented 2 months ago

@xitex @jovi-s @juancalvof @william-ingold Please upgrade to the latest langchain-groq. You should no longer get the NotImplementedError.

xitex commented 2 months ago

@sepiatone awesome thanks!!!

sepiatone commented 2 months ago

@jovi-s Could you be running an older version of the langchain-openai package?

jovi-s commented 2 months ago

@sepiatone I got it working! Thank you so much for your help :)

igalma commented 2 months ago

Any plans implementing it for langchain-aws? It's already implemented for langchain-anthropic but can't be used when calling anthropic models through bedrock

sandeeshc commented 1 month ago

@sepiatone How did you get this working? Even though I'm using the latest versions of langchain-core and langchain-openai, I'm still getting this error.

juancalvof commented 1 month ago

Make sure to update your packages. Check the constraints in your .toml file, because you may be running a poetry update while a constraint is limiting the upgrade. To check for this, run poetry show --outdated. The yellow entries are the ones not being updated because of a constraint (yours or one from another package).
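
Alongside `poetry show --outdated`, you can confirm which versions are actually importable from the active environment. A small helper using only the standard library (the package names passed in below are just the ones discussed in this thread):

```python
from importlib import metadata

def installed_versions(packages):
    """Return a mapping of distribution name -> installed version (None if absent)."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(installed_versions(["langchain-core", "langchain-openai", "langchain-groq"]))
```

If the reported version differs from what `poetry show` claims, you are likely running in a different virtualenv than the one Poetry manages.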

sandeeshc commented 1 month ago

@juancalvof I updated all the packages; these are my LangChain package versions:

    langchain==0.1.20
    langchain-core==0.1.52
    langchain-openai==0.1.7

Still the issue was there

sepiatone commented 1 month ago

@sepiatone How did you get this working? Even though I'm using the latest versions of langchain-core and langchain-openai, I'm still getting this error.

Please post an MRE (minimal reproducible example) and somebody can help.

xajik commented 1 month ago

Same issues with:

codekiln commented 1 month ago

Any plans implementing it for langchain-aws? It's already implemented for langchain-anthropic but can't be used when calling anthropic models through bedrock

I've requested support for this in Implement with_structured_output for ChatBedrock from langchain-aws · langchain-ai/langchain · Discussion #22701. Also, I've opened a stack overflow post Is there a compatibility table between Bedrock Claude and Anthropic Claude APIs - Stack Overflow with the goal of discovering what langchain-specific workarounds are possible.

dhatraknilam commented 2 weeks ago

Facing the same error with langchain 0.2.5 and langchain-core 0.2.9. I tried upgrading langchain to the latest version. I am using ChatOpenAI from langchain_openai.