INF800 opened this issue 1 year ago
We do use local models, but I like this; yeah, we should work with more endpoints.
Vertex AI requires billing to be enabled on the user end. How do we plan to handle that?
Happy to pick this up. If I understand correctly, we have to define `call_vertexai`
in utils under the REBEL agent and make the corresponding changes: let the user provide either OpenAI auth or Google auth, and handle the code changes in the affected files accordingly.
Are any other changes required for this feature? @mmirman
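For concreteness, a rough sketch of the dispatch I have in mind (a minimal sketch only; `call_llm`, `call_openai`, and the `use_vertexai` flag are hypothetical placeholders, not existing llm-vm code):

```python
# Hypothetical sketch of the auth/endpoint dispatch described above;
# not actual llm-vm code. Assumes the user has configured either an
# OpenAI API key or Google application-default credentials.
import os

def call_llm(prompt, use_vertexai=False, **kwargs):
    """Route a completion request to Vertex AI or the existing OpenAI path."""
    if use_vertexai:
        # Google auth is picked up from the environment
        # (GOOGLE_APPLICATION_CREDENTIALS or `gcloud auth application-default login`),
        # so no key needs to be passed explicitly here.
        return call_vertexai(prompt, **kwargs)
    # Fall back to the existing OpenAI path.
    return call_openai(prompt, api_key=os.environ["OPENAI_API_KEY"], **kwargs)
```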
Two models are available: text completion and chat, both based on PaLM.
Docs: https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/api-quickstart#summarization.
The `google-cloud-aiplatform` package from PyPI provides the necessary helpers. This will also ensure we are not using only OpenAI in llm-vm.
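A minimal sketch of what `call_vertexai` itself could look like with that package (the model names `text-bison@001`/`chat-bison@001` and the project/location values are assumptions taken from the quickstart; on older SDK versions these classes live under `vertexai.preview.language_models`):

```python
# Minimal sketch using the vertexai module shipped with google-cloud-aiplatform.
# Assumes billing is enabled on the project and application-default
# credentials are available in the environment.
import vertexai
from vertexai.language_models import TextGenerationModel, ChatModel

vertexai.init(project="my-gcp-project", location="us-central1")  # assumed values

def call_vertexai(prompt, chat=False, **params):
    """Call a PaLM-based Vertex AI model: chat or plain text completion."""
    if chat:
        model = ChatModel.from_pretrained("chat-bison@001")
        session = model.start_chat()
        return session.send_message(prompt, **params).text
    model = TextGenerationModel.from_pretrained("text-bison@001")
    return model.predict(prompt, **params).text

# Example usage:
# print(call_vertexai("Summarize: ...", max_output_tokens=256, temperature=0.2))
```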