langchain-ai / langchain-google


KeyError when specifying custom `api_endpoint` in VertexAI #312

Closed dtc-NaokiSomeya closed 1 week ago

dtc-NaokiSomeya commented 1 week ago

Environment:

Python 3.11 langchain 0.2.4 langchain-core 0.2.6 langchain-google-vertexai 1.0.5 google-cloud-aiplatform 1.54.1

Description:

I encountered an issue when trying to initialize the VertexAI class with a custom API endpoint. Users are expected to be able to pass an api_endpoint argument to customize the API endpoint. However, when passing this argument as shown in the code snippet below, a KeyError is raised.

llm = VertexAI(
    model_name="gemini-1.5-pro",
    temperature=0,
    api_transport="rest",
    api_endpoint="https://<some custom api endpoint>",
)

Error:

File ~/work/dxg/aiplatform_gcp/langchain-google/libs/vertexai/langchain_google_vertexai/llms.py:121, in VertexAI.__init__(self, model_name, **kwargs)
    119 if model_name:
    120     kwargs["model_name"] = model_name
--> 121 super().__init__(**kwargs)
...
--> 105     api_endpoint = values["api-endpoint"]
    106 else:
    107     location = values.get("location", cls.__fields__["location"].default)

KeyError: 'api-endpoint'
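For illustration only, here is a minimal sketch (not the library's actual code) of the mismatch that produces this KeyError: the constructor kwarg is spelled api_endpoint with an underscore, but the validator in the traceback looks up the hyphenated key "api-endpoint". The function and dict below are hypothetical stand-ins for the validator logic.

```python
def resolve_endpoint(values: dict):
    """Hypothetical stand-in for the validator at llms.py:105."""
    if values.get("api_transport") == "rest":
        # Bug pattern: the kwarg arrives as "api_endpoint" (underscore),
        # but the lookup uses "api-endpoint" (hyphen) -> KeyError.
        return values["api-endpoint"]
    return None

# Simulate the kwargs actually passed to VertexAI(...)
values = {"api_transport": "rest", "api_endpoint": "https://example.com"}
try:
    resolve_endpoint(values)
except KeyError as e:
    print(f"KeyError: {e}")
```

This also explains why the workaround below succeeds: passing an extra keyword under the hyphenated spelling makes the values["api-endpoint"] lookup find a key.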

I've found the following workaround to avoid the problem.

Workaround:

llm = VertexAI(
    model_name="gemini-1.5-pro",
    temperature=0,
    api_transport="rest",
    api_endpoint="https://<some custom api endpoint>",
    **{"api-endpoint": "https://<some custom api endpoint>"},
)