-
### What happened?
Vision doesn't work with `ollama_chat`, but it does work with `ollama`.
config.yaml
```yaml
- model_name: 'llava:7b'
  litellm_params:
    model: 'ollama_chat/llava:7b…
```
-
### Validations
- [ ] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [ ] I'm not able to find an [open issue](https://githu…
-
**Describe the bug**
The agent has already been created using client code, with an ID and other details assigned by default (this happens even with a Basic Agent), and it shows up in the backend (SQLite, showing in…
-
Hi, I ran into a problem when loading the model with "tevatron/retriever/driver/encode.py". The "lora_name_or_path" checkpoint was trained with "tevatron/retriever/driver/train.py". I am confused by this problem.
Trac…
-
### Your current environment
The output of `python collect_env.py`
```
Collecting environment information...
PyTorch version: 2.4.0a0+3bcc3cddb5.nv24.07
Is debug build: False
CUDA used to bu…
```
-
Hi, I am following the IPEX-LLM GraphRAG_quickstart.md and ran into two issues.
1) No module named "past"
Prepare Input Corpus
Some [sample documents](https://github.com/TheAiSingularity/graphrag-local-…
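For the first issue, the `past` module is shipped by the `future` compatibility package, so `No module named "past"` usually means `future` is not installed. A quick stdlib check (the message wording here is only illustrative):

```python
import importlib.util

# "past" is provided by the "future" compatibility package, so a missing
# "past" module is typically fixed by installing "future".
if importlib.util.find_spec("past") is None:
    print("Module 'past' not found; install it with: pip install future")
else:
    print("Module 'past' is available")
```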
-
### What is the issue?
No issues occur with any model that fits on a single 3090, but it seems to run out of memory when trying to distribute the model to the second 3090.
```
INFO [wmain] starting c++ runner | ti…
```
-
## Description
**Objective:** Integrate **[Ollama](https://ollama.ai/)** into RepoGPT to enable local AI processing using models like Llama 2.
---
## Rationale
- **Enhanced Privacy:** Keep…
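For the integration itself, one minimal sketch is to call a locally running Ollama server over its HTTP API. The helper names below are assumptions, as is the `llama2` model tag; the sketch assumes Ollama's default endpoint at `http://localhost:11434` and that the model has already been pulled:

```python
import json
import urllib.request

# Hypothetical helper names; assumes Ollama's default local endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(prompt: str, model: str = "llama2") -> dict:
    # "stream": False asks Ollama for one JSON response rather than
    # a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "llama2") -> str:
    data = json.dumps(build_generate_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a server running, `ollama_generate("Summarize this repo")` would return the model's completion as a plain string, keeping all processing on the local machine.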
-
### Your current environment
```text
After installation, the problem below appears when directly running python examples/minicpmv_example.py:
INFO 06-27 10:16:32 utils.py:598] Found nccl from environment variable VLLM_NCCL_SO_PATH=/usr/local/lib/pytho…
```
-
Hi there, I was wondering if there is a simple guide available for generating long stories using a local Large Language Model (LLM)? I'm looking for something that I can adapt and implement within a f…
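In the absence of such a guide, one common pattern is to generate the story chapter by chapter while carrying a rolling summary forward, so each prompt stays within the model's context window. A minimal sketch, where the `generate` stub stands in for any local-LLM call (e.g. an Ollama or llama.cpp API) and is purely illustrative:

```python
# Rolling-summary loop for long-form generation. generate() is a stub;
# a real implementation would query the local model instead.
def generate(prompt: str) -> str:
    # Illustrative placeholder for a local-LLM completion call.
    return f"[model output for: {prompt[:40]}...]"

def write_story(outline: list[str]) -> str:
    summary = ""          # rolling summary of everything written so far
    chapters = []
    for beat in outline:
        prompt = (
            f"Story so far (summary): {summary}\n"
            f"Write the next chapter covering: {beat}\n"
        )
        chapter = generate(prompt)
        chapters.append(chapter)
        # Compress the growing text into a short summary for the next step,
        # so the prompt size stays roughly constant across chapters.
        summary = generate(f"Summarize briefly: {summary} {chapter}")
    return "\n\n".join(chapters)
```

`write_story(["the hero leaves home", "a storm at sea"])` would produce two chapters, each generated from only the outline beat plus a compact summary rather than the full story text.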