langchain-ai / langchain-google

MIT License

ChatGoogleGenerativeAI works fine with PydanticOutputParser in local development, but the same environment replicated through Docker fails when deployed to EC2 #460

Open ccir41 opened 3 months ago

ccir41 commented 3 months ago

I'm getting this error on the EC2 deployment:

Screenshot from 2024-08-21 20-38-43

File "/home/ubuntu/venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 85, in exec_func_with_error_handling
    result = func()
             ^^^^^^
File "/home/ubuntu/venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 576, in code_to_exec
    exec(code, module.__dict__)
File "/home/ubuntu/app.py", line 41, in <module>
    st.markdown(llm_chain.invoke({'user_message': 'Tell me a 2 jokes about cats'}))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2878, in invoke
    input = context.run(step.invoke, input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 276, in invoke
    self.generate_prompt(
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 776, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
    raise e
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 623, in generate
    self._generate_with_cache(
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 845, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_google_genai/chat_models.py", line 950, in _generate
    return _response_to_result(response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/venv/lib/python3.12/site-packages/langchain_google_genai/chat_models.py", line 530, in _response_to_result
    llm_output = {"prompt_feedback": proto.Message.to_dict(response.prompt_feedback)}
                                                           ^^^^^^^^^^^^^^^^^^^^^^^^
import streamlit as st
from typing import List, Optional

from langchain_core.pydantic_v1 import BaseModel, Field

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate

llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")

class Joke(BaseModel):
    '''Joke to tell user.'''
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

class JokeList(BaseModel):
    jokes: List[Joke] = Field(description="List of jokes")

parser = PydanticOutputParser(pydantic_object=JokeList)

prompt_template = """\
User message: {user_message}

{format_instructions}
"""

prompt = PromptTemplate(
    template=prompt_template,
    input_variables=["user_message"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

llm_chain = prompt | llm | parser

st.markdown(llm_chain.invoke({'user_message': 'Tell me a 2 jokes about cats'}))
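For context on where the chain fails, the error above is raised inside the model call itself (`_response_to_result`), before `PydanticOutputParser` ever runs. The parser's own job is simpler: strip an optional markdown fence from the model's reply, parse the JSON, and validate it against the Pydantic model. A minimal stdlib-only sketch of that first step (this is an illustration, not the library's actual implementation):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Strip an optional ```json ... ``` fence and parse the payload.

    Simplified sketch of the pre-validation step a JSON/Pydantic
    output parser performs on raw LLM text.
    """
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

raw = '```json\n{"jokes": [{"setup": "Why?", "punchline": "Because.", "rating": 7}]}\n```'
data = extract_json(raw)
print(data["jokes"][0]["punchline"])  # → Because.
```

Since the traceback dies in `langchain_google_genai/chat_models.py` rather than in parsing, the parser configuration above is unlikely to be the culprit.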
langchain==0.2.14
langchain-community==0.2.12
langchain-google-genai==1.0.8
streamlit==1.37.1
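Since the failing line calls `proto.Message.to_dict(...)`, one plausible explanation (an assumption, not confirmed) is that the `protobuf` / `proto-plus` versions inside the Docker image differ from the local venv, even though the pins above match. A quick way to compare the two environments is to print the resolved versions of the relevant distributions in each:

```python
from importlib import metadata

def report_versions(packages):
    """Return a mapping of distribution name -> installed version,
    or None if the package is not installed."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# Transitive dependencies whose versions commonly drift between a
# local venv and a Docker image; proto.Message.to_dict lives in proto-plus.
print(report_versions([
    "protobuf",
    "proto-plus",
    "google-generativeai",
    "langchain-google-genai",
]))
```

Running this both locally and inside the container (or on EC2) and diffing the output would confirm or rule out a dependency mismatch.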

However, when I run the same code on my local machine, it works.

Screenshot from 2024-08-21 18-21-59

lkuligin commented 3 months ago

Could you share the full stack trace, please?

ccir41 commented 3 months ago

@lkuligin I have updated my comment above.