-
### The Feature
Add a `keep_alive` parameter to Ollama models
### Motivation, pitch
Since Ollama 0.1.23, it has been possible to set a keep_alive parameter value when calling the Ollama completion or generate ap…
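For illustration, a minimal sketch of setting it through the REST API (the endpoint and duration formats follow the Ollama docs; the model name and prompt are just examples):

```python
import requests

# Minimal sketch of a generate call that keeps the model loaded for 10 minutes
# after the request. Durations like "10m"/"24h" work, 0 unloads immediately,
# and -1 keeps the model loaded indefinitely. The model name is an example.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2:7b",
        "prompt": "Why is the sky blue?",
        "keep_alive": "10m",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```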
-
Hi,
Thank you so much for the excellent work. I am also trying to build a WebGL viewer for my 3D models, and your project gives me a hint of how to build one; I also know a little bit about osgjs.
I am running…
-
I get this error:
```
chat_template, stop_word, yes_map_eos_token, ollama_modelfile = CHAT_TEMPLATES[chat_template]
                                                                ~~~~~~~~~…
```
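That lookup raises a `KeyError` when the requested name is not a key of `CHAT_TEMPLATES`. A guarded version, as a minimal sketch (the dict contents below are a stand-in; in the real library each value is the 4-tuple unpacked above):

```python
# Stand-in for the library's mapping of template names to
# (chat_template, stop_word, yes_map_eos_token, ollama_modelfile) tuples.
CHAT_TEMPLATES = {
    "chatml": ("<template string>", "<|im_end|>", True, "<modelfile>"),
}

template_name = "chatml"  # hypothetical key; a typo here is what triggers the KeyError
if template_name not in CHAT_TEMPLATES:
    raise KeyError(
        f"Unknown chat template {template_name!r}; available: {sorted(CHAT_TEMPLATES)}"
    )
chat_template, stop_word, yes_map_eos_token, ollama_modelfile = CHAT_TEMPLATES[template_name]
```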
-
I changed the directory name (Fooocus-ControlNet-SDXL) but am still getting this error:
Traceback (most recent call last):
  File "/content/Fooocus-ControlNet-SDXL/entry_with_update.py", line 47, in …
-
This is a little more complicated, as it will require creating an Ollama Modelfile / manifest in addition to linking the models (see the sketch after this list).
- lm-studio (mostly) parses the filename and the GGML/GGUF metadata t…
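For illustration, a minimal sketch of scripting the Modelfile step: writing a bare-bones Modelfile for a local GGUF file and registering it with `ollama create` (the GGUF path and model name are hypothetical):

```python
import subprocess
from pathlib import Path

# Minimal sketch: write a bare-bones Modelfile pointing at a local GGUF file,
# then register it so `ollama run my-local-model` works afterwards.
gguf = Path("models/llama-2-7b.Q4_K_M.gguf")
Path("Modelfile").write_text(f"FROM {gguf}\nPARAMETER temperature 0.7\n")

# `ollama create <name> -f <Modelfile>` builds the manifest and blob links.
subprocess.run(["ollama", "create", "my-local-model", "-f", "Modelfile"], check=True)
```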
-
[Asked and suggested here](http://answers.ros.org/question/263693/nextage-ros-bridge-simulation-issue/):
> Hi everyone,
> I think this is not a question, but a suggestion.
>
> I have been follo…
-
First of all, thanks for building this tool and releasing it as open source. I like that the interfaces seem similar to `docker`.
I also like the idea of Modelfile. Maybe it could also be used to d…
-
I am in the process of integrating a deep learning model into my Flutter app. I want to load the model in a C++ runtime (onnxruntime) through Flutter FFI.
The model file in my assets is retrieved…
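As a point of reference while wiring up the C++ side: ONNX Runtime can create a session from an in-memory buffer rather than a file path, which suits asset-loaded models. A minimal sketch of that pattern in Python (the C++ API supports the same thing via the `Ort::Session` constructor that takes a buffer and length; the asset path here is hypothetical):

```python
import onnxruntime as ort

# Minimal sketch: create a session from raw model bytes instead of a file path.
with open("assets/model.onnx", "rb") as f:
    model_bytes = f.read()

session = ort.InferenceSession(model_bytes, providers=["CPUExecutionProvider"])
print([inp.name for inp in session.get_inputs()])
```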
-
# 1. Ollama
## 1. Use the Ollama CLI:
```
ollama serve
ollama run llama2:7b   # or: llama3, llama3:70b, mistral, dolphin-phi, phi, neural-chat, codellama, llama2:13b, llama2:70b
ollama list
ollama show
…
```
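The same models can also be driven from Python; a minimal sketch using the official `ollama` package (assuming `ollama serve` is running and the model has already been pulled):

```python
import ollama

# Minimal sketch: chat with a locally served model via the `ollama` Python
# client. The model name is an example; any pulled model works.
response = ollama.chat(
    model="llama2:7b",
    messages=[{"role": "user", "content": "Explain what `ollama show` prints."}],
)
print(response["message"]["content"])
```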
-
### Describe the issue
I'm using ML.NET and ONNX Runtime (3.0.1 and 1.17.1 respectively) to load an ONNX model for inference from within a Unity (2021.3.23f1) application. The packages were installe…