ollama/ollama-python · Ollama Python library
https://ollama.com
MIT License · 2.69k stars · 221 forks
Issues (sorted by newest)
#99 Continued Conversations · splamei · closed 3 months ago · 1 comment
#98 ollama.chat is missing `template` kwarg · knoopx · closed 2 months ago · 1 comment
#97 Loosen httpx constraint · balloob · closed 3 months ago · 3 comments
#96 Clone the HuggingFace repository (optional): llm/llama.cpp/convert.py can't find this file · tigerzhanglaihu · opened 3 months ago · 0 comments
#95 Use with personal context · Victordeleusse · opened 3 months ago · 1 comment
#94 json format does not return all the results · jojogh · opened 3 months ago · 2 comments
#93 Add Phorm AI Badge · bentleylong · closed 3 months ago · 1 comment
#92 Why Ollama is so terribly slow when I set format="json" · eliranwong · closed 3 months ago · 1 comment
#91 Python 3.12.2 cannot import ollama · Ray0907 · opened 3 months ago · 3 comments
#90 Add an example using continuous dialogue · Gloridust · opened 3 months ago · 0 comments
#89 Unable to get ollama serve working · harsham05 · closed 3 months ago · 1 comment
#88 response error when calling any ollama functions · Hansyvea · opened 3 months ago · 3 comments
#87 Setting up top_k, max tokens, context length? · timtensor · closed 3 months ago · 1 comment
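Issue #87 above asks how to set top_k, max tokens, and context length. These map to the standard Ollama runtime options `top_k`, `num_predict`, and `num_ctx`, passed through the `options` argument of `ollama.chat`. A minimal sketch (the helper name `build_chat_request` and the defaults are illustrative; the real call needs the `ollama` package and a running server):

```python
def build_chat_request(prompt, model="llama2", num_ctx=4096, top_k=40, num_predict=256):
    """Assemble keyword arguments for ollama.chat.

    Option names are standard Ollama runtime options:
      num_ctx     - context window length in tokens
      top_k       - sampling cutoff
      num_predict - maximum number of tokens to generate
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {
            "num_ctx": num_ctx,
            "top_k": top_k,
            "num_predict": num_predict,
        },
    }

# With the library installed and a server running, you would call:
# import ollama
# response = ollama.chat(**build_chat_request("Why is the sky blue?"))
# print(response["message"]["content"])
```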
#86 num_ctx=100000 does not work · eliranwong · opened 3 months ago · 0 comments
#85 Running without network error: ollama._types.ResponseError · Gloridust · opened 3 months ago · 2 comments
#84 Can I set num_ctx=-1 to use max possible context window? · eliranwong · closed 3 months ago · 2 comments
#83 ResponseError (without error information) when running with Python · Kaleemullahqasim · opened 3 months ago · 1 comment
#82 client run timeout, no response · songsh · opened 3 months ago · 3 comments
#81 pull method to get total progress info? · iorilu · opened 3 months ago · 2 comments
#80 pip install ollama error · songsh · closed 3 months ago · 2 comments
#79 terribly slow: format="json", stream=False, please advise · eliranwong · closed 4 months ago · 0 comments
#78 Understanding Variable Response Times from a Local Mistral Model · fentresspaul61B · closed 4 months ago · 3 comments
#77 How do I define the 'base_url' using embeddings()? · pandego · closed 4 months ago · 1 comment
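Issue #77 above asks how to point `embeddings()` at a non-default server. The library's `ollama.Client` takes the server address via its `host` argument, and `OLLAMA_HOST` is the environment variable conventionally used by Ollama tooling (an assumption in this sketch; the helper name `ollama_host` is illustrative):

```python
import os

def ollama_host(default="http://localhost:11434"):
    """Resolve the Ollama server address.

    Falls back to the library's default local address when the
    OLLAMA_HOST environment variable is not set.
    """
    return os.environ.get("OLLAMA_HOST", default)

# With the library installed, the resolved host is passed to the client:
# from ollama import Client
# client = Client(host=ollama_host())
# embedding = client.embeddings(model="llama2", prompt="Hello world")
```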
#76 How to set temperature and output token size in the chat mode? · jojogh · closed 3 months ago · 5 comments
#75 Stability testing for client · stevengans · closed 3 months ago · 0 comments
#74 Possible workaround for the issues with multimodal... · joakimeriksson · closed 3 months ago · 2 comments
#73 fail to run this example file [examples/multimodal/main.py] · dabing1205 · opened 4 months ago · 1 comment
#72 System Message causing no answer from Assistant · pedrognsmartins · opened 4 months ago · 5 comments
#71 From ollama to transformer.AutoModelForCausalLM · Demirrr · closed 4 months ago · 1 comment
#70 Add example for chat with history · bibhas2 · opened 4 months ago · 2 comments
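Several issues above (#99, #90, #70) ask for a continued-conversation example. The pattern is to keep the full `messages` list and append both the user turn and the model's reply after each call. A minimal sketch; `chat_fn` stands in for `ollama.chat` so the loop can be exercised without a running server, and the message shape follows the ollama chat API:

```python
def chat_with_history(chat_fn, model, history, user_text):
    """One turn of a multi-turn conversation.

    Appends the user message to `history`, calls the chat function with
    the whole history, appends the assistant reply, and returns the
    reply text. `history` is mutated in place so the next call sees
    every previous turn.
    """
    history.append({"role": "user", "content": user_text})
    response = chat_fn(model=model, messages=history)
    history.append(response["message"])
    return response["message"]["content"]

# Usage with the real library:
# import ollama
# history = []
# print(chat_with_history(ollama.chat, "llama2", history, "Hi there"))
# print(chat_with_history(ollama.chat, "llama2", history, "What did I just say?"))
```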
#69 ollama server hangs constantly · rihp · closed 4 months ago · 5 comments
#68 Added example showing how to use generate with images · iplayfast · opened 4 months ago · 0 comments
#67 [query] Is it not possible to run the ollama Python library in a Colab notebook? · timtensor · closed 4 months ago · 2 comments
#66 readme needs to show example of image generations · iplayfast · closed 4 months ago · 1 comment
#65 chat and cli chat get different outputs when using llava · iplayfast · closed 4 months ago · 1 comment
#64 Add iterative_chat and examples for simplified chat-history keeping · connor-makowski · closed 4 months ago · 2 comments
#63 Add in simple-to-use iterative chat (chat with history) · connor-makowski · closed 4 months ago · 6 comments
#62 Setting which GPU to use with PyTorch · merlinlikethewizard · opened 4 months ago · 1 comment
#61 Fixes URL parsing when basic auth is included in the URL · lemeur · opened 4 months ago · 4 comments
#60 Where does ollama.pull() store the models? · HotLoverGirl69 · closed 4 months ago · 1 comment
#59 Can't get Async to stop · grahama1970 · closed 4 months ago · 1 comment
#58 python user agent · mxyng · closed 4 months ago · 0 comments
#57 llama2-uncensored:70b might be pointing to regular llama2:70b? · makestuff4fun · opened 4 months ago · 0 comments
#56 ollama.generate & ollama.chat hangs after about 30 mins · stevengans · closed 4 months ago · 5 comments
#55 Testing repeated calls · stevengans · closed 4 months ago · 0 comments
#54 Feat/pydantic models & refactor · Howe829 · opened 4 months ago · 1 comment
#53 :pray: Feature request > Create model from file path with SDK · adriens · closed 4 months ago · 1 comment
#52 create example · mxyng · closed 4 months ago · 0 comments
#51 keep_alive: control how long models stay loaded · stevengans · closed 4 months ago · 1 comment
#50 fix: encode base64 inputs · mxyng · closed 5 months ago · 0 comments
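Issues #68, #66, and #50 above concern sending images with a request. Ollama expects image data base64-encoded in the `images` field of a message (PR #50 fixed the library's own encoding of raw inputs). A minimal sketch of the encoding step; the helper name `image_to_b64` is illustrative:

```python
import base64

def image_to_b64(raw: bytes) -> str:
    """Base64-encode raw image bytes into the string form Ollama
    expects in a message's `images` field. (Current versions of the
    library also accept raw bytes or file paths and encode them
    internally.)"""
    return base64.b64encode(raw).decode("ascii")

# Usage with a multimodal model such as llava:
# import ollama
# with open("photo.png", "rb") as f:
#     message = {"role": "user", "content": "Describe this image",
#                "images": [image_to_b64(f.read())]}
# response = ollama.chat(model="llava", messages=[message])
```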