-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
Is there a guide or tutorial on how to configure Ollama with LiteLLM to work with Skyvern? How can Skyvern work with a local LLM?
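Not an official tutorial, but LiteLLM can expose an Ollama model behind an OpenAI-compatible proxy, which a tool expecting an OpenAI-style endpoint can then target. A minimal sketch of a LiteLLM proxy config file (the model names and alias here are assumptions for illustration, not Skyvern-specific settings):

```yaml
model_list:
  - model_name: local-llama            # alias clients will request
    litellm_params:
      model: ollama/llama3             # LiteLLM's ollama/<model> provider prefix
      api_base: http://localhost:11434 # Ollama's default address
```

Running `litellm --config config.yaml` starts the proxy (port 4000 by default), and the consuming tool's OpenAI base URL would then point at the proxy rather than at Ollama directly.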
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
### 🥰 Feature description
Please add an "Ollama local model" option to the model provider choices in [Settings].
The use case is a LAN with no internet connection. Suppose machine A has Ollama deployed, with some local models downloaded.
A user on machine A can select the local Ollama model directly, set the API address to http://localhost:11434, and chat right away.
Other users can install the NextChat client on their own machines and set the API address to http:…
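For the LAN scenario above, note that Ollama binds to 127.0.0.1 by default, so other machines cannot reach it until it listens on all interfaces. A sketch of the server-side setup (the LAN IP below is a hypothetical example):

```shell
# On machine A: OLLAMA_HOST is Ollama's documented listen-address variable.
OLLAMA_HOST=0.0.0.0 ollama serve
# Other machines then set their client's API address to machine A's LAN IP,
# e.g. http://192.168.1.10:11434 (hypothetical address), instead of localhost.
```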
-
# Hi,
Before I start, a huge thanks for the project.
____
Right now I experience some problems with BMO. If I start my laptop and try to run a BMO chat, I get an HTTP 400 error. I can chat on O…
-
### What happened?
**Background:**
I'm installing Quivr locally on Ubuntu and I want to use llama3.1 in Ollama.
I changed the URL in the .env file using the internal IP address of the host machine, "…
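For context, when Quivr runs inside Docker, `localhost` inside a container does not reach an Ollama instance on the host, which is why the host's internal IP (or `host.docker.internal` on Docker Desktop) is needed. A sketch of the relevant `.env` entry (the variable name and IP are assumptions; check Quivr's `.env.example` for the exact name):

```
# Hypothetical .env entry and LAN IP
OLLAMA_API_BASE_URL=http://192.168.1.10:11434
```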
-
**Description:**
One of the reasons Ollama is so widely adopted as a tool to run local models is its ease of use and seamless integration with other tools. Users can simply install an app that star…
-
### What happened?
My request:
```
list my notes that mention adobe?
```
Reply
```markdown
Adobe Notes
You likely have some notes about Adobe in your Obsidian knowledge base. Based on the ex…
-
### Description
If you download a GGUF model and update the LLM URL setting to the proper port where kotaemon is loading the model, testing against the "ollama" LLM works.
However, the Embeddin…
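One thing worth checking when chat works but embeddings fail is that Ollama serves generation and embeddings on different routes (`/api/generate` and `/api/chat` vs. `/api/embeddings`), and the embedding URL must point at the embeddings route with an embedding-capable model. A minimal standard-library sketch of what such a request looks like (the helper name and model are hypothetical, and Ollama's older `/api/embeddings` route taking `model` and `prompt` fields is assumed):

```python
import json

def build_ollama_embed_request(base_url: str, model: str, text: str):
    """Build the URL and JSON body for Ollama's /api/embeddings endpoint
    (hypothetical helper for illustration)."""
    url = f"{base_url.rstrip('/')}/api/embeddings"
    body = json.dumps({"model": model, "prompt": text})
    return url, body

url, body = build_ollama_embed_request(
    "http://localhost:11434", "nomic-embed-text", "hello"
)
print(url)  # http://localhost:11434/api/embeddings
```

Sending that body to a chat-only route, or pointing the embedding setting at the chat endpoint, is a common cause of the embedding test failing while the LLM test passes.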