-
I have a remote server running Ollama. I can list the models just fine with:
`list_models(server="http://server:11434")`
But
`query("why is the sky blue?",server=server_url)
`
fails wi…
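One way to narrow this down is to hit the Ollama REST endpoints directly, since a successful model listing only proves that one route is reachable. A minimal Python sketch, assuming `llama3` stands in for a model actually installed on the server:
```python
import requests

server = "http://server:11434"  # the remote host from the report

# Listing models hits GET /api/tags, presumably what the working
# list_models() call exercises.
tags = requests.get(f"{server}/api/tags", timeout=10).json()
print([m["name"] for m in tags["models"]])

# A query goes to POST /api/generate instead; if this also fails while the
# listing succeeds, the problem is specific to the generate route.
resp = requests.post(
    f"{server}/api/generate",
    json={"model": "llama3", "prompt": "why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```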
-
### Goal
I want to use a custom-hosted LLM behind an OpenAI-compatible API instead of the model hosted by OpenAI.
To do that, I want to change OPENAI_API_BASE_URL.
### What happened
Using `fabric -…
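Independent of fabric's own flags, the behavior being asked for is the standard OpenAI-client pattern of pointing the SDK at a different base URL. A hedged sketch with the official `openai` Python package; the host, port, and model name are hypothetical:
```python
from openai import OpenAI

# Hypothetical self-hosted, OpenAI-compatible endpoint; the API key can be
# any non-empty string if the server does not validate it.
client = OpenAI(base_url="http://my-llm-host:8000/v1", api_key="not-needed")

reply = client.chat.completions.create(
    model="my-custom-model",  # whatever name the server exposes
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```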
-
**Describe the bug**
What the bug is, and how to reproduce it, preferably with screenshots
```
swift infer --model_type internvl2-8b-awq --infer_backend lmdeploy
```
```
WARNING:ro…
-
Is there a guide or tutorial on how to configure Ollama via LiteLLM to work with Skyvern? How can Skyvern work with a local LLM? A sketch of the LiteLLM side of such a setup follows below.
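Skyvern's own configuration aside, the LiteLLM half usually amounts to a provider-prefixed model name plus an `api_base`. A minimal sketch, assuming a local Ollama instance serving `llama3` on the default port:
```python
from litellm import completion

# Assumes a local Ollama instance on the default port; the "ollama/"
# prefix tells LiteLLM which provider to route the request through.
response = completion(
    model="ollama/llama3",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```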
-
Subscribe to this issue and stay notified about new [weekly trending repos in Python](https://github.com/trending/python?since=weekly)!
-
Paperless-ngx uses an OCR engine that is not particularly good with languages like Chinese and Korean, and it seems to perform especially badly when multiple languages are present in the same document.
Mu…
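For context, Paperless-ngx runs Tesseract (via OCRmyPDF) under the hood, and Tesseract can be given several language packs at once. A minimal `pytesseract` sketch of that multi-language mode; the file name is hypothetical and the `eng`, `chi_sim`, and `kor` traineddata packs are assumed to be installed:
```python
import pytesseract
from PIL import Image

# Hypothetical scanned page; joining languages with "+" makes Tesseract
# consider all of them on a mixed-language page.
text = pytesseract.image_to_string(Image.open("scan.png"), lang="eng+chi_sim+kor")
print(text)
```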
-
### What is the issue?
C:\Users\18164>ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=pa9U-g8eXWKfTiK3NN_FdQ&scope=repository%!A(MISSING)librar…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.7.2.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python …
-
### What is the issue?
Attached log: [llama3.2-cuda-oom.log](https://github.com/user-attachments/files/17582524/llama3.2-cuda-oom.log)
I'm testing the `x/llama3.2-vision:11b-instruct-q4_K_M` and…
-
### What is your question?
[Question]: I have Ollama installed (locally and remotely).
After reading the documentation, I am still not clear on how to get fabric working.
I tried this: pbp…