ollama / ollama
Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, and other large language models.
https://ollama.com
MIT License · 134.99k stars · 11.2k forks
Issues
Sorted by: Newest
404 Page Not Found when trying to access /api/generate · #10016 · Sunny-DCosta · opened 1 minute ago · 0 comments
Final stream chunk missing "done": true despite completed response · #10015 · jopersr · opened 1 hour ago · 1 comment
Podman on Ubuntu can't find gpus · #10014 · thezachdrake · opened 2 hours ago · 4 comments
Show GPU system requirements for each model · #10013 · RustoMCSpit · closed 32 minutes ago · 1 comment
num_ctx · #10012 · 20246688 · opened 6 hours ago · 3 comments
OLLAMA_NUM_PARALLEL not working · #10011 · forReason · opened 6 hours ago · 4 comments
Error: could not connect to ollama app, is it running? · #10010 · hao45e · closed 9 hours ago · 2 comments
Question: Changelog of updated models · #10009 · zivanfi · closed 4 hours ago · 2 comments
Request for Help: Checking My Ollama Python-Based Embed Code · #10008 · 20246688 · opened 12 hours ago · 3 comments
ollama uses multiple GPU resources · #10007 · cigar-wiki · opened 13 hours ago · 1 comment
docs: make context length faq readable · #10006 · ParthSareen · closed 16 hours ago · 0 comments
runner: Release semaphore and improve error messages on failures · #10005 · jessegross · opened 17 hours ago · 0 comments
Add full support for omni models · #10004 · flexiworld · opened 17 hours ago · 0 comments
server: prevent model thrashing from unset API fields · #10003 · rick-github · opened 23 hours ago · 0 comments
add qwen 2.5 VL 32B model · #10002 · olumolu · closed 23 hours ago · 0 comments
Improve compatibility with OpenAI structured outputs json_schema response format · #10001 · SuperPat45 · closed 3 hours ago · 2 comments
How do I make a model simply complete my text? · #10000 · Explosion-Scratch · closed 34 minutes ago · 6 comments
[ENHANCE] Add Ubuntu Support for AMD Ryzen AI 9 HX 370 w/ Radeon 890M (gfx1150) · #9999 · liuchuan01 · opened 1 day ago · 0 comments
Segmentation fault · #9998 · Hml520QQ · opened 1 day ago · 2 comments
Qwen2.5-VL-32B · #9997 · enryteam · closed 1 day ago · 0 comments
Please add: `ollama stop all` · #9996 · NGC13009 · opened 1 day ago · 0 comments
Ollama runs a multimodal model · #9995 · Gusha-nye · closed 1 day ago · 2 comments
ollama cannot run until it is restarted · #9994 · Liuyuan0803 · opened 1 day ago · 5 comments
Cannot chat · #9993 · bmb-li · closed 1 day ago · 2 comments
Qwen2.5-vl-32B wanted!! · #9992 · twythebest · closed 1 day ago · 0 comments
server: Improve download reliability in bandwidth-constrained environments. · #9991 · monolith-jaehoon · opened 1 day ago · 0 comments
Inaccurate model size display in ollama ps command · #9990 · jaybom · opened 1 day ago · 4 comments
how to accelerate the inference speed of the model · #9989 · Tu1231 · closed 8 hours ago · 2 comments
Incorrect memory requirement calculation for small models (32B model showing 659.2 GiB requirement) · #9988 · jaybom · opened 1 day ago · 4 comments
Improve memory estimates for sliding window attention · #9987 · jessegross · closed 20 hours ago · 0 comments
ollama runners crashing with wsarecv: An existing connection was forcibly closed by the remote host · #9986 · azizbtk · opened 1 day ago · 1 comment
Discrepancy Between API Documentation and Actual Response: name vs model Field · #9985 · CristianoMafraJunior · closed 16 hours ago · 2 comments
Add support for array for head count GGUF KV · #9984 · ngxson · opened 2 days ago · 1 comment
Update README.md · #9983 · uggrock · opened 2 days ago · 0 comments
Do we have an official ollama Docker image < 1 GB? · #9982 · babu-kandyala · opened 2 days ago · 8 comments
Hope to support the Qwen2.5-VL-32B-Instruct · #9981 · Czj1997-02 · closed 2 days ago · 0 comments
Update DeepSeek V3 to improved version · #9980 · bannert1337 · opened 2 days ago · 2 comments
Vision models: regex sometimes doesn't catch file name from prompt · #9979 · RicoElectrico · opened 2 days ago · 1 comment
config: allow setting fixed context length · #9978 · pppy2012 · opened 2 days ago · 0 comments
How can we prevent creating a new complete model instance in GPU when using different context lengths? · #9977 · jaybom · opened 2 days ago · 1 comment
segmentation violation GGML_ASSERT(sections.v[0] > 0 || sections.v[1] > 0 || sections.v[2] > 0) · #9976 · thafer6 · opened 2 days ago · 1 comment
In Dify, will starting models with different context length parameters load separate model instances into VRAM? · #9975 · jaybom · opened 2 days ago · 2 comments
When running a model, GPU utilization is at 100% but power draw is very low; the GPU is not actually being used · #9974 · save-FGG · opened 2 days ago · 3 comments
server: support streaming near tool usage · #9973 · fizx · opened 2 days ago · 5 comments
wsarecv: An existing connection was forcibly closed by the remote host. · #9972 · upendravalera · opened 2 days ago · 4 comments
Please support Qwen 2.5 VL 32B · #9971 · Jigit-ship-it · closed 2 days ago · 0 comments
RX6600 detected but not used (Linux) · #9970 · LevitatingBusinessMan · closed 2 days ago · 1 comment
Some Questions about Using Embedding Models in Ollama · #9969 · 20246688 · closed 1 day ago · 2 comments
llama runner process has terminated: exit status 2 · #9968 · Andyshenjx · opened 2 days ago · 1 comment
Ollama model files for Gemma3 specifying mmproj ggufs do not retain vision capability. · #9967 · lkraven · closed 2 days ago · 4 comments