langchain-ai / langchain-google

MIT License
74 stars 78 forks

ChatVertexAI - no response in v1.0.4 #297

Open ventz opened 2 weeks ago

ventz commented 2 weeks ago

@gmogr @lkuligin Opening a new ticket for that follow-up issue.

Example: after running it 6 times, I got an output (see > RESPONSE: below) only twice; the rest come back with no output:

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x10a9bde10>, temperature=0.2, max_output_tokens=8192)] last=StrOutputParser()
> RESPONSE: 

vs

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x12b123790>, temperature=0.2, max_output_tokens=8192)] last=StrOutputParser()
> RESPONSE: I am a large language model, trained by Google.

Here is the test code:

# test.py
import dotenv, os, json

from google.oauth2 import service_account

from langchain_google_vertexai import ChatVertexAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser

dotenv.load_dotenv()
GCP_PROJECT_ID = os.environ.get("GCP_PROJECT_ID")
GCP_REGION = os.environ.get("GCP_REGION")
GCP_CREDENTIALS_JSON = os.environ.get("GCP_CREDENTIALS_JSON")

credentials = service_account.Credentials.from_service_account_info(json.loads(GCP_CREDENTIALS_JSON))
scoped_creds = credentials.with_scopes(["https://www.googleapis.com/auth/cloud-platform"])

llm = ChatVertexAI(
        model_name="gemini-pro",
        convert_system_message_to_human=False,
        project=GCP_PROJECT_ID,
        location=GCP_REGION,
        credentials=scoped_creds,
        max_output_tokens=8192,
        temperature=0.2,
)

output_parser = StrOutputParser()

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
    MessagesPlaceholder(variable_name="messages"),
])

chain = prompt_template | llm | output_parser
print(chain)

response = chain.invoke({
    "messages": [
        HumanMessage(content="What llm are you"),
    ],
})
print(f"> RESPONSE: {response}")
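To narrow down whether the empty string comes from the model or from StrOutputParser, it may help to invoke the model directly and inspect the raw AIMessage instead of the parsed string. The sketch below shows only the metadata check itself, against a hypothetical response_metadata dict; the field names (is_blocked, safety_ratings, finish_reason) are what langchain-google-vertexai typically populates on Gemini responses, but treat the exact keys as an assumption and adjust for your installed version:

```python
# Diagnostic sketch: classify an empty Gemini response from its metadata.
# The dict shape mirrors what ChatVertexAI is expected to put in
# AIMessage.response_metadata; the exact keys are an assumption and may
# differ between langchain-google-vertexai versions.

def explain_empty_response(content: str, response_metadata: dict) -> str:
    """Return a human-readable reason when the model returns no text."""
    if content:
        return "ok: model returned text"
    if response_metadata.get("is_blocked"):
        return "blocked: safety filters suppressed the candidate"
    finish = response_metadata.get("finish_reason")
    if finish and str(finish).upper() != "STOP":
        # e.g. SAFETY, RECITATION, MAX_TOKENS
        return f"empty: finish_reason={finish}"
    return "empty: no text and no obvious cause in metadata"

# Example with made-up metadata resembling a safety-filtered run:
meta = {"is_blocked": True, "safety_ratings": [], "finish_reason": "SAFETY"}
print(explain_empty_response("", meta))
```

In the failing runs above, printing `llm.invoke(...).response_metadata` directly (before the output parser) and feeding it through a check like this would show whether the empty responses carry a non-STOP finish reason or a safety block, or whether the candidate text is genuinely missing.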

And the requirements.txt:

python-dotenv
google-cloud-aiplatform
langchain
langchain_community
langchain_google_vertexai

If it's helpful, this is authenticated with a .env file:

GCP_PROJECT_ID="<redacted>"
GCP_REGION="us-central1"
GCP_CREDENTIALS_JSON='{
 "type": "service_account",
  "project_id": "<redacted>",
  "private_key_id":  ...
  ...
}'

Here are 6 new test runs with a lowered max tokens (max_output_tokens=8000) -- you can see that only 2 produced output:

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x105026dd0>, temperature=0.2, max_output_tokens=8000)] last=StrOutputParser()
> RESPONSE: I am a large language model, trained by Google.

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x10512fb90>, temperature=0.2, max_output_tokens=8000)] last=StrOutputParser()
> RESPONSE: I am a large language model, trained by Google.

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x1041dae90>, temperature=0.2, max_output_tokens=8000)] last=StrOutputParser()
> RESPONSE: 

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x1064ab0d0>, temperature=0.2, max_output_tokens=8000)] last=StrOutputParser()
> RESPONSE: 

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x100a43b90>, temperature=0.2, max_output_tokens=8000)] last=StrOutputParser()
> RESPONSE: 

% python test.py
first=ChatPromptTemplate(input_variables=['messages'], input_types={'messages': typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant. Answer all questions to the best of your ability.')), MessagesPlaceholder(variable_name='messages')]) middle=[ChatVertexAI(project='<redacted>', model_name='gemini-pro', model_family=<GoogleModelFamily.GEMINI: '1'>, full_model_name='projects/<redacted>/locations/us-central1/publishers/google/models/gemini-pro', client_options=ClientOptions: {'api_endpoint': 'us-central1-aiplatform.googleapis.com', 'client_cert_source': None, 'client_encrypted_cert_source': None, 'quota_project_id': None, 'credentials_file': None, 'scopes': None, 'api_key': None, 'api_audience': None, 'universe_domain': None}, default_metadata=(), credentials=<google.oauth2.service_account.Credentials object at 0x1042f6e10>, temperature=0.2, max_output_tokens=8000)] last=StrOutputParser()
> RESPONSE: 

ventz commented 5 days ago

@gmogr @lkuligin Just wanted to ping as a reminder about this.

Upgraded to v1.0.6 -- seeing consistent "no output"

google-api-core               2.19.1
google-auth                   2.30.0
google-cloud-aiplatform       1.56.0
google-cloud-bigquery         3.25.0
google-cloud-core             2.4.1
google-cloud-resource-manager 1.12.3
google-cloud-storage          2.17.0
google-crc32c                 1.5.0
google-resumable-media        2.7.1
googleapis-common-protos      1.63.2
grpc-google-iam-v1            0.13.1
langchain-google-vertexai     1.0.6

% python test.py
> RESPONSE: 

% python test.py
> RESPONSE: 

% python test.py
> RESPONSE: 

% python test.py
> RESPONSE: 

% python test.py
> RESPONSE: 

% python test.py
> RESPONSE: 

% python test.py
> RESPONSE: