-
Implement [prompt caching](https://www.anthropic.com/news/prompt-caching) in Anthropic integration
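As a rough illustration of what such an integration would send, here is a minimal sketch of a Messages API request body with a cacheable system prompt. The field names follow Anthropic's documented prompt-caching format (`cache_control` with type `ephemeral`); the helper function itself is hypothetical, not part of any existing integration.

```python
# Hypothetical sketch: constructing an Anthropic Messages API payload
# that marks the large, stable system prompt as cacheable, so repeated
# calls can reuse the cached prefix instead of reprocessing it.

def build_cached_request(system_prompt: str, user_message: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # cache_control marks the prefix up to and including
                # this block as cacheable
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_cached_request("You are a helpful assistant." * 100, "Hello!")
print(req["system"][0]["cache_control"])  # {'type': 'ephemeral'}
```

The key design point is that only the stable prefix (here, the system prompt) carries the `cache_control` marker, while the per-request user message stays outside the cached region.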
-
### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Gemini now allows a developer to create a context …
-
**Client**
vertexai/genai
**Environment**
Running locally on macOS
**Go Environment**
go version
go version go1.22.2 darwin/arm64
go env
GO111MODULE='on'
GOARCH='arm64'
GOBIN=''…
-
### What is your article idea?
Here is the article's outline
## Part 1: Introduction and Backend Setup with Strapi
### Overview
- Introduction to chatbots and their applications.
- Explanation of…
-
**Description**
[Add a description of the feature]
**Why is this needed**
[If you have a concrete use case, add details here.]
**Implementation details**
[If known, describe how this change s…
-
### Bug Description
Hello,
I am trying to get the output of the gpt-4o model, where it can be either a message to the user (streamed) or a function call.
I can't know beforehand if it's going to be …
-
### Description of the bug:
Function calling does not work when providing `stop_sequences` and `stream=True`.
### Actual vs expected behavior:
Actual:
```python
import google.generativeai as ge…
-
If I use GoogleGemini as the LLM (see code in my forked version), the `main.rb` demo tends to return consistent 500s after a few invocations.
* It seems like either the API is rate limited, or maybe not
*…
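If the 500s do turn out to be rate limiting, a common mitigation is to retry with exponential backoff. The helper below is a generic sketch of that pattern, not part of the reported demo; the backoff schedule and the use of `RuntimeError` as a stand-in for an HTTP 500 are assumptions.

```python
import time

# Hypothetical retry helper: back off exponentially when the wrapped
# call raises (standing in for an intermittent HTTP 500 from the API).

def with_retries(call, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))

# simulate an endpoint that fails twice, then succeeds
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("HTTP 500")
    return "ok"

print(with_retries(flaky))  # ok
```

Real clients usually also honor a `Retry-After` header when the server sends one, rather than relying on a fixed schedule.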
-
Hi,
Can anyone help track down the issue with the error below?
Loading chunk 0.
Loading chunk 1.
Loading chunk 2.
Loading chunk 3.
Loading chunk 4.
Loading chunk 5.
Loading chunk 6.
Loadin…