Goal
Show how to instrument a LangChain LLM application using the VertexAI API and Phoenix tracing.
Context
Google serves its LLMs (such as text-bison, a.k.a. PaLM2) and embedding models (such as text-gecko) via two APIs:
- the PaLM API, which requires an API key,
- the VertexAI API, which requires a Google Cloud project and authentication, e.g., via gcloud.
The LangChain Google PaLM tracing tutorial currently builds a simple RAG application using LangChain LLM and embedding implementations that call out to the PaLM API. We should make the notebook configurable so that its user can choose between the two APIs.
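To make the choice concrete, below is a minimal sketch of the two LangChain code paths the notebook would switch between. The class names reflect the langchain package as of this writing; the GOOGLE_API_KEY environment variable, the gcloud project setup, and the USE_VERTEXAI toggle are assumptions for illustration:

```python
# Sketch of the two code paths; not the final notebook code.
import os

USE_VERTEXAI = False  # hypothetical toggle; the notebook would set this from a widget

if USE_VERTEXAI:
    # VertexAI API: requires a Google Cloud project and authentication,
    # e.g., via `gcloud auth application-default login`.
    from langchain.llms import VertexAI
    from langchain.embeddings import VertexAIEmbeddings

    llm = VertexAI(model_name="text-bison")
    embeddings = VertexAIEmbeddings(model_name="textembedding-gecko")
else:
    # PaLM API: requires an API key.
    from langchain.llms import GooglePalm
    from langchain.embeddings import GooglePalmEmbeddings

    llm = GooglePalm(google_api_key=os.environ["GOOGLE_API_KEY"])
    embeddings = GooglePalmEmbeddings(google_api_key=os.environ["GOOGLE_API_KEY"])
```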
Requirements
- The user of the notebook can choose between the PaLM API and the VertexAI API, e.g., via a Jupyter dropdown widget (see the sketch after this list). By default, the notebook should use the PaLM API.
- The notebook copy is updated to reflect that the user must decide which API to use, with clear instructions and links on how to use each API.
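One way to satisfy the first requirement is an ipywidgets dropdown. A minimal sketch, assuming ipywidgets is installed in the notebook environment (the label and option strings are illustrative):

```python
import ipywidgets as widgets
from IPython.display import display

# Dropdown for the API choice, defaulting to the PaLM API per the requirement above.
api_choice = widgets.Dropdown(
    options=["PaLM API", "VertexAI API"],
    value="PaLM API",
    description="API:",
)
display(api_choice)

# Later cells would branch on the selection:
use_vertexai = api_choice.value == "VertexAI API"
```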
Relevant Components
Other Resources
An existing notebook using the VertexAI API is available for reference.
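Separately, a minimal sketch of how the Phoenix instrumentation itself might look in the updated notebook, assuming arize-phoenix's LangChain instrumentor (exact import paths vary across Phoenix versions):

```python
import phoenix as px
from phoenix.trace.langchain import LangChainInstrumentor
from langchain.llms import VertexAI

# Launch the local Phoenix app to collect and view traces.
session = px.launch_app()

# Instrument LangChain so every chain/LLM call is traced to Phoenix.
LangChainInstrumentor().instrument()

# Any LangChain call made from here on shows up as a trace,
# e.g., the VertexAI path:
llm = VertexAI(model_name="text-bison")
print(llm.predict("What is retrieval-augmented generation?"))
```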