taketwo / llm-ollama
LLM plugin providing access to local Ollama models using HTTP API
Apache License 2.0 · 146 stars · 8 forks
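As the description says, the plugin talks to a local Ollama server over its HTTP API. A minimal sketch of that API from Python, assuming a local server on the default port 11434 and a pulled model such as llama3.2 (both placeholders, not values taken from the plugin's code):

```python
import json
import urllib.request

# Assumes a local Ollama server on the default port; "llama3.2" is a placeholder model name.
OLLAMA_URL = "http://localhost:11434/api/chat"

payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "stream": False,  # request a single JSON response instead of a stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())

# The non-streaming chat endpoint returns the assistant reply under "message".
print(reply["message"]["content"])
```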
Issues
#20  Fixes a bug when llm chat is used with an Ollama model · sukhbinder · closed 1 week ago · 13 comments
#19  Vision support (for llama3.2-vision) · simonw · closed 2 weeks ago · 6 comments
#18  Fix unit tests after embedding support was added · taketwo · closed 3 weeks ago · 0 comments
#17  Error when running ollama 3.2 · kesitrifork · closed 4 weeks ago · 3 comments
#16  Add embed support and make chat only be able to use chat models · zivoy · closed 4 weeks ago · 1 comment
#15  Added support for vision models in Ollama · sukhbinder · closed 1 month ago · 5 comments
#14  Support for embedding models installed through Ollama · javierhb23 · closed 1 month ago · 4 comments
#13  Support for remote ollama server · dpatschke · closed 4 months ago · 8 comments
#12  Expose `format="json"`? · jefftriplett · closed 4 months ago · 4 comments
#11  Change type of stop sequences option to list · Mrmaxmeier · closed 5 months ago · 2 comments
#10  Avoid KeyError when iterating through messages · simonw · closed 5 months ago · 1 comment
#9   Error: 'message' when executing a streaming prompt · simonw · closed 5 months ago · 3 comments
#8   Use try/except to return an empty model list if Ollama is not responding · davedean · closed 6 months ago · 1 comment
#7   Add pydantic v2 as a dependency · Taik · closed 6 months ago · 1 comment
#6   Error: 'Options' object has no attribute 'model_dump' · shiyongxin · closed 6 months ago · 4 comments
#5   Add available modelfile parameters · Taik · closed 6 months ago · 1 comment
#4   Implementing `keep_alive` parameter · ibehnam · opened 7 months ago · 1 comment
#3   'Connection refused' Error · haje01 · closed 9 months ago · 2 comments
#2   404 response from ollama on prompt · nhoffman · closed 10 months ago · 2 comments
#1   FYI Official Ollama Python Library · easp · closed 10 months ago · 2 comments
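Several of the items above concern request-level fields of the underlying Ollama HTTP API rather than the plugin code itself: `format="json"` (#12), stop sequences passed as a list (#11), the `keep_alive` parameter (#4), and embeddings (#14, #16). A hedged sketch of how those fields look in raw API calls, assuming a local server on the default port and placeholder model names; this is not the plugin's own code:

```python
import json
import urllib.request

BASE = "http://localhost:11434"  # default local Ollama address; point elsewhere for a remote server (#13)

def post(path, payload):
    # Small illustrative helper for JSON POSTs to the Ollama API.
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Generation with JSON-constrained output, list-valued stop sequences, and keep_alive.
generation = post("/api/generate", {
    "model": "llama3.2",              # placeholder model name
    "prompt": "Return a JSON object with a single key 'greeting'.",
    "format": "json",                 # constrain output to valid JSON (#12)
    "options": {"stop": ["\n\n"]},    # stop sequences are a list of strings (#11)
    "keep_alive": "10m",              # how long the model stays loaded after the call (#4)
    "stream": False,
})
print(generation["response"])

# Embeddings from a locally installed embedding model (#14, #16).
embedding = post("/api/embeddings", {
    "model": "nomic-embed-text",      # placeholder embedding model
    "prompt": "llm-ollama exposes local Ollama models to the LLM CLI.",
})
print(len(embedding["embedding"]))
```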