-
While running the following script:
```python
from langchain_google_vertexai import VertexAI
llm = VertexAI(model_name="text-unicorn@001")
llm.invoke("Hello, what is your name?")
```
It prints t…
-
### Question Validation
- [X] I have searched both the documentation and Discord for an answer.
### Question
I am trying to add safety settings to Vertex AI models.
```
safety_settings={
…
-
### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the exis…
-
### Feature Description
"In a typical AI workflow, you might pass the same input tokens over and over to a model. Using the Gemini API context caching feature, you can pass some content to the mode…
-
### Bug Description
I tried this https://docs.llamaindex.ai/en/stable/examples/managed/VertexAIDemo/ but it gave errors.
1) `print(index.list_files())` gave `list_files() got an unexpected…`
-
### [READ] Step 1: Are you in the right place?
Issues filed here should be about bugs for a **specific extension in this repository**.
If you have a general question, need help debugging, or f…
-
### Description of the feature request:
https://ai.google.dev/gemini-api/docs/prompting_with_media?lang=python
Based on the link above, it seems this does not work with PDF files?
Is my understanding r…
-
First: We ❤️ LiteLLM
I wish it supported the new Gemini context caching:
https://ai.google.dev/gemini-api/docs/caching?lang=python
I admit I haven't thought the API through well, si…
-
Hi team,
I am trying to use Gemini context caching through Vertex AI; however, when creating the content to cache, it fails because it can't find the stable model.
I can see that context c…
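If the "stable model" requirement means a pinned version suffix (e.g. `gemini-1.5-pro-001` rather than the floating `gemini-1.5-pro` alias — my assumption about what the error refers to), a quick sanity check on the model name before creating the cache might look like:

```python
import re

def has_pinned_version(model_name: str) -> bool:
    """Heuristic: treat a trailing numeric suffix like -001 as a pinned
    (stable) model version, as opposed to a floating alias."""
    return re.search(r"-\d{3}$", model_name) is not None

print(has_pinned_version("gemini-1.5-pro"))      # floating alias
print(has_pinned_version("gemini-1.5-pro-001"))  # pinned version
```

This is only a local naming heuristic; the authoritative list of cache-capable model versions is whatever the Vertex AI docs publish.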
-
I'm trying out a few Ollama models, but they fail with
`Model "ollama/qwen2" does not support tools, but some tools were supplied to generate(). Please call generate() without tools if you would like …
xster updated 3 weeks ago