-
Many sites in the HaCraFu network use this router. However, the model file needed to carry the configuration over into bbb-configs is still missing.
I am creating the file at: group_vars/model_tplink_a…
-
```javascript
import ollama from 'ollama'

// Build a Modelfile as a plain template string and register it with the
// local Ollama server under the name 'example'.
const modelfile = `
FROM llama3.1
SYSTEM "You are mario from super mario bros."
`
await ollama.create({ model: 'example', modelfile: modelfile })
```
from https://www.…
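The Modelfile string passed to `ollama.create` is plain text with one instruction per line, so it can also be assembled programmatically. A minimal sketch; `buildModelfile` and `ModelfileSpec` are hypothetical helpers for illustration, not part of the ollama package:

```typescript
// Hypothetical helper: builds the Modelfile text that ollama.create()
// expects, from a small structured spec.
interface ModelfileSpec {
  from: string;                                  // base model, e.g. "llama3.1"
  system?: string;                               // optional SYSTEM instruction
  parameters?: Record<string, string | number>;  // optional PARAMETER lines
}

function buildModelfile(spec: ModelfileSpec): string {
  const lines = [`FROM ${spec.from}`];
  if (spec.system !== undefined) {
    lines.push(`SYSTEM "${spec.system}"`);
  }
  for (const [key, value] of Object.entries(spec.parameters ?? {})) {
    lines.push(`PARAMETER ${key} ${value}`);
  }
  return lines.join("\n") + "\n";
}

const modelfile = buildModelfile({
  from: "llama3.1",
  system: "You are mario from super mario bros.",
  parameters: { num_ctx: 8192 },
});
// `modelfile` can now be passed to ollama.create({ model, modelfile })
```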
-
Hi there,
I am trying to use slim models from llmware, such as 'slim-sql-tool', with Ollama, but I need to create a prompt template in the Modelfile and I was wondering what it would look like. In your …
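In Ollama, the prompt template goes into the Modelfile via the `TEMPLATE` instruction, written in Go template syntax with the built-in `{{ .System }}` and `{{ .Prompt }}` variables. A sketch only: the `<human>:`/`<bot>:` turn format and the GGUF path below are assumptions about the llmware slim models, so check the model card for the exact wording:

```
FROM ./slim-sql-tool.gguf
TEMPLATE """<human>: {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}
<bot>:"""
PARAMETER stop "<human>:"
```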
-
How can we export saved model files?
I want to summarize the model files and streamline the current chat context.
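Ollama can print the stored Modelfile of any local model, which can then be redirected to a file for inspection or re-import. A sketch, assuming `mymodel` is a placeholder for the name of a model you have pulled or created:

```
# Print the Modelfile that was used to build the model
ollama show mymodel --modelfile

# Redirect it to a file for editing / re-import
ollama show mymodel --modelfile > Modelfile
```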
-
### What is the issue?
I tried to import a fine-tuned llama-3.2-11b-vision, but I got "Error: unsupported architecture."
To make sure my model is not the problem, I downloaded [meta-llama/Ll…
-
Excuse me,
I would like to know about your database and your Modelfile. Could you share your Modelfile or the training data that you used?
Thank you, and sorry for the disturbance.
-
### Describe the bug
- Cloned the repo
- Installed everything needed
- Created a Modelfile containing `FROM qwen2.5-coder:7b` and `PARAMETER num_ctx 32768`
- Ran the query in PowerShell, but either I don't see o…
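The two Modelfile lines above can be exercised end-to-end from PowerShell. A sketch, assuming the Modelfile is in the current directory and `qwen-ctx` is a placeholder model name:

```
ollama create qwen-ctx -f Modelfile
ollama show qwen-ctx          # the Parameters section should list num_ctx 32768
ollama run qwen-ctx "your query"
```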
-
We are trying to load Mediapipe's Face landmark model on the DLPU using the Larod API, but we fail to do so with the following error:
![image](https://github.com/user-attachments/assets/42d446c5…
Poufy updated 1 month ago
-
I got the following error when running a model imported from GGUF, which was generated from a model fine-tuned with LoRA.
Error: llama runner process has terminated: GGML_ASSERT(src1t == GGML_TYPE_F…
-
Version: Deno 1.46.2
```ts
import { InferenceSession } from "npm:onnxruntime-web";
const modelFile = await Deno.readFile("./model.onnx");
InferenceSession.create(modelFile, {
executionPro…