-
Using gemini models:
```python
from langchain_google_vertexai import ChatVertexAI

model = ChatVertexAI(
    model="gemini-1.5-pro-002",
    temperature=0,
)
```
and trustcall create_extractor…
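For reference, a minimal sketch of wiring that model into trustcall's `create_extractor` (the `UserProfile` schema and the sample message below are hypothetical, purely for illustration):
```python
from pydantic import BaseModel
from trustcall import create_extractor

class UserProfile(BaseModel):
    # Hypothetical schema used only to illustrate extraction
    name: str
    interests: list[str]

extractor = create_extractor(
    model,                      # the ChatVertexAI instance from above
    tools=[UserProfile],
    tool_choice="UserProfile",
)
result = extractor.invoke(
    {"messages": [("user", "I'm Alice and I enjoy hiking and photography.")]}
)
# Validated extraction results are returned under result["responses"]
```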
-
Let's add support for locally hosted models via Ollama.
We'll use the `langchain-ollama` package for it.
`ChatOllama` from `langchain_ollama` can be constructed similarly to `ChatOpenAI`. However…
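A rough sketch of that construction (assuming a local Ollama server is running and the model has already been pulled; the model name below is only an example):
```python
from langchain_ollama import ChatOllama

# Assumes the model was pulled beforehand, e.g. `ollama pull llama3.1`
model = ChatOllama(
    model="llama3.1",
    temperature=0,
)
response = model.invoke("Hello, world!")
print(response.content)
```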
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
**Is your feature request related to a problem? Please describe.**
We'd like to enable better control of tool calling when using Langchain::Assistant. Some of the supported LLMs ([Anthropic](https://…
-
- Remove LangChain dependency for LLMs in OpenAGI and write functions using the official API for all major LLM providers
-
### URL
https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent_types.AgentType.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included …
-
### System Info
CUDA 12.4
Transformers 4.44
### Running Xinference with Docker?
- [ ] docker
- [X] pip install
- [ ] installation…
-
For example, following this how-to guide: [How to stream LLM tokens from your graph](https://langchain-ai.github.io/langgraphjs/how-tos/stream-tokens/), and using [How to interact with the deployment …
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…