-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Can we use Langchain Tools with LlamaIndex agents?
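Yes, this is supported: LlamaIndex agents can consume LangChain tools through a small adapter layer. Without pulling in either library, the adapter pattern involved can be sketched in plain Python (all class and method names below are illustrative stand-ins, not real `llama_index` or `langchain` APIs):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-in for a LangChain-style tool: a name, a
# description, and a run(input) callable.
@dataclass
class FakeLangchainTool:
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, query: str) -> str:
        return self.func(query)

# Hypothetical stand-in for a LlamaIndex-style tool interface:
# callable, with metadata the agent can read when choosing tools.
class AdaptedTool:
    def __init__(self, lc_tool: FakeLangchainTool):
        self._lc_tool = lc_tool
        self.metadata = {"name": lc_tool.name, "description": lc_tool.description}

    def __call__(self, query: str) -> str:
        # Delegate execution to the wrapped LangChain-style tool.
        return self._lc_tool.run(query)

# Usage: wrap a toy search tool and call it through the adapter.
search = FakeLangchainTool("search", "Echoes the query.", lambda q: f"results for {q}")
tool = AdaptedTool(search)
print(tool("llama"))  # -> results for llama
```

The real adapter in LlamaIndex works the same way in spirit: it forwards calls to the wrapped LangChain tool and exposes the tool's name and description so the agent can pick it.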
-
I previously tested the local knowledge base and found the returned results unsatisfactory. I wanted to strip out the RAG local knowledge base and instead pass some of the data directly as context, to test whether the problem lies with the large model itself or with the search results RAG provides. Testing llama3-8b on the Llama Chinese community site, the answers were excellent and consistently accurate. Back on our system, with no local knowledge base attached and the same context, I tested llama3-8b-q4, llama2-13b-q4, and qwen-4b-q4, and the results were still unsatisfactory. My question is: after 4-bit quantization, is the model gap really this…
-
I'm using Ollama with `llama2-uncensored`. The API connection test works fine, but I only rarely get predictions, and the quality is much lower than what I get when I chat with `llama2-uncensored` dire…
-
I set the `OLLAMA_HOST` like so:
```bash
docker run -d -p 3000:3000 -e OLLAMA_HOST="http://127.0.0.1:11434/" --name chatbot-ollama ghcr.io/ivanfioravanti/chatbot-ollama:main
```
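One likely cause of the connection failure: inside the container, `127.0.0.1` refers to the container itself, not to the machine running Ollama, so `OLLAMA_HOST="http://127.0.0.1:11434/"` cannot reach an Ollama server on the host. A common fix on Docker Desktop (macOS/Windows) is the special host alias; this is a sketch assuming Ollama listens on the host's default port:

```shell
# From inside the container, host.docker.internal resolves to the host
# machine, where Ollama is assumed to be listening on 11434.
docker run -d -p 3000:3000 \
  -e OLLAMA_HOST="http://host.docker.internal:11434" \
  --name chatbot-ollama ghcr.io/ivanfioravanti/chatbot-ollama:main
```

On Linux, where `host.docker.internal` is not available by default, `--add-host=host.docker.internal:host-gateway` or `--network host` are the usual alternatives.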
and can't connec…
-
### Describe the issue
using
```python
summary_method="reflection_with_llm",
```
This does not seem to work: the chat session's `result.summary` is the last message rather than a summary.
By enabl…
-
Hey, thanks for sharing such a great tool!
I might be missing something, but when I'm chatting with a Llama 3 model (either the original or a variant like Dolphin 2.9), the context length seems maxxe…
-
I'm not sure if this is an Ollamac or Ollama issue, or maybe just a settings thing, but I didn't know where else to ask. I'm on macOS Sequoia 15.0.1.
After a few reboots where everything worked fine,…
-
Is there a way, or a tutorial, on how to configure Ollama with LiteLLM to work with Skyvern? How can Skyvern work with a local LLM?
-
I was trying to use an LM Studio hosted local server, but apparently entered the wrong endpoint. Every endpoint I tried entering as the server address returned an error. I haven't connected an age…
-
### What is the issue?
I tried running with the 1.7b version, and it ran successfully.
![image](https://github.com/user-attachments/assets/6074c785-cbb2-43e0-b82d-32fe74184840)
However, when runni…