I'm currently trying to add support for Gemma to my LLM playground application. I'm using LangServe and LangChain to host the playground, and Gemma itself is hosted on Vertex AI behind a public endpoint.
I'm now trying to add Gemma to my LangServe endpoints; I use RunnablePassthrough to route requests to whichever specific LLM I want at any given time.
When I try to do this, it seems that the langchain_google_vertexai implementation isn't aware of the current event loop available in the thread.
This works fine for a multitude of other LLMs, including the ones Vertex AI offers out of the box (chat-bison, gemini-pro, etc.), but for whatever reason the Gemma object exported by this package fails with this error.
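For context, here is a minimal stdlib-only sketch of the failure mode I believe I'm hitting. This is an assumption on my part: the `wrapper_like_call` function below is hypothetical and only stands in for whatever the Gemma wrapper does internally. The point is that calling `asyncio.get_event_loop()` from a worker thread that has no event loop raises the "no current event loop in thread" error, whereas code that works fine on the main thread breaks when LangServe dispatches it elsewhere:

```python
import asyncio
import threading

def wrapper_like_call():
    # Hypothetical stand-in for what the Gemma wrapper appears to do:
    # look up the "current" event loop of whatever thread it runs on.
    return asyncio.get_event_loop()

errors = []

def worker():
    try:
        wrapper_like_call()
    except RuntimeError as exc:
        # On a non-main thread with no loop set, asyncio raises
        # RuntimeError("There is no current event loop in thread ...").
        errors.append(str(exc))

t = threading.Thread(target=worker)
t.start()
t.join()
```

If this is indeed what's happening, the other Vertex AI chat models presumably avoid the lookup (or create/attach a loop themselves), which would explain why only the Gemma object misbehaves.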