langchain-ai / langchain-google


[Feature Request] [Maybe Bug?] GemmaChatVertexAIModelGarden & GemmaVertexAIModelGarden support being run with RunnablePassthrough #178

Open BriianPowell opened 2 months ago

BriianPowell commented 2 months ago

I'm currently trying to add Gemma support to my LLM playground application. I'm using LangServe & LangChain to host the playground, and Gemma itself is hosted on Vertex AI behind a public endpoint.

I'm now trying to serve Gemma from my LangServe endpoints; however, I use RunnablePassthrough to route requests to whichever specific LLM I want to use at any given time.

[screenshots of the LangServe / RunnablePassthrough routing setup]
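Roughly, the routing looks like the following minimal sketch (the endpoint ID, project, and `route` helper here are placeholders rather than my exact code):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_google_vertexai import ChatVertexAI, GemmaChatVertexAIModelGarden

prompt = ChatPromptTemplate.from_template("{question}")

# Out-of-the-box Vertex AI chat model (routes without issue).
gemini = ChatVertexAI(model_name="gemini-pro")

# Gemma deployed to a Vertex AI Model Garden endpoint.
# endpoint_id / project values below are placeholders.
gemma = GemmaChatVertexAIModelGarden(
    endpoint_id="YOUR_ENDPOINT_ID",
    project="YOUR_PROJECT",
    location="us-central1",
)


def route(request: dict):
    """Pick the chain for the model named in the incoming request."""
    if request.get("model") == "gemma":
        return prompt | gemma
    return prompt | gemini


# RunnablePassthrough hands the raw request to the routing lambda,
# which returns the chain that should actually handle it.
chain = RunnablePassthrough() | RunnableLambda(route)

# Example request from the playground:
# chain.invoke({"question": "Hello!", "model": "gemma"})
```

The chain is then exposed from the LangServe app (e.g. via `add_routes`), and each playground request carries both the question and the name of the model it wants.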

When I try to do this, it seems that the langchain_google_vertexai implementation isn't aware of the current event loop available in the thread.

[screenshot of the resulting event loop error]

This works fine for a multitude of other LLMs, including the ones available from Vertex AI out of the box (chat-bison, gemini-pro, etc.), but for whatever reason the Gemma object exported by this package doesn't seem to cooperate.
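Since the failure mentions the event loop, one way to narrow it down may be to exercise the async path directly, outside the routing chain. A minimal sketch, again with placeholder endpoint/project values:

```python
import asyncio

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import GemmaChatVertexAIModelGarden

# Placeholder deployment values; substitute your own endpoint details.
gemma = GemmaChatVertexAIModelGarden(
    endpoint_id="YOUR_ENDPOINT_ID",
    project="YOUR_PROJECT",
    location="us-central1",
)


async def main() -> None:
    # LangServe drives chains through their async interface, so calling
    # `ainvoke` here exercises a similar code path to a playground request.
    reply = await gemma.ainvoke([HumanMessage(content="Hello, Gemma!")])
    print(reply.content)


asyncio.run(main())
```

If this direct async call fails the same way, the issue would appear to be in the Gemma Model Garden classes themselves rather than in the RunnablePassthrough routing.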