petrm opened this issue 20 hours ago
Is this bookmark by any chance a link to an image? If so, Hoarder treats it as an image and runs the image model on it instead of the text model. You might want to change the image model as well and see if that helps.
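In that case the model for tagging comes from the image-model setting rather than `INFERENCE_TEXT_MODEL`. A minimal sketch of what the environment could look like, assuming the `INFERENCE_IMAGE_MODEL` and `OLLAMA_BASE_URL` variable names from Hoarder's configuration docs (`llava` is just an example multimodal model, substitute whatever you have pulled):

```shell
# .env sketch for the Hoarder worker container (names assumed, adjust to your setup)
OLLAMA_BASE_URL=http://ollama:11434   # point inference at your local Ollama server
INFERENCE_TEXT_MODEL=llama3.2         # used for text bookmarks
INFERENCE_IMAGE_MODEL=llava           # used when the bookmark resolves to an image
```

If the image model is left unset, Hoarder may fall back to a default (which would explain `gpt-4o-mini` appearing despite your text-model setting).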
I am using Ollama for tagging. `INFERENCE_TEXT_MODEL` is set to `llama3.2` and is being used as expected. For some reason there is one bookmark where Hoarder tries to use `gpt-4o-mini`, ignoring the configuration from the environment variable.
Regenerating all AI tags ends up with the same bookmark failing.
How can I debug this?
```
hoarder | 2024-10-16T20:38:19.675Z error: [inference][1224] inference job failed: ResponseError: model "gpt-4o-mini" not found, try pulling it first
hoarder | ResponseError: model "gpt-4o-mini" not found, try pulling it first
hoarder |     at checkOk (/app/apps/workers/node_modules/.pnpm/ollama@0.5.9/node_modules/ollama/dist/shared/ollama.9c897541.cjs:72:9)
hoarder |     at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
hoarder |     at async post (/app/apps/workers/node_modules/.pnpm/ollama@0.5.9/node_modules/ollama/dist/shared/ollama.9c897541.cjs:120:3)
hoarder |     at async Ollama.processStreamableRequest (/app/apps/workers/node_modules/.pnpm/ollama@0.5.9/node_modules/ollama/dist/shared/ollama.9c897541.cjs:232:25)
hoarder |     at async OllamaInferenceClient.runModel (/app/apps/workers/inference.ts:2:3086)
hoarder |     at async OllamaInferenceClient.inferFromImage (/app/apps/workers/inference.ts:2:3915)
hoarder |     at async inferTags (/app/apps/workers/openaiWorker.ts:6:3014)
hoarder |     at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:6:6316)
hoarder |     at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/@hoarder+queue@file+packages+queue/node_modules/@hoarder/queue/runner.ts:2:2567)
```
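Note that the stack trace goes through `OllamaInferenceClient.inferFromImage`, which suggests this bookmark is taking the image path. One way to check what your Ollama instance can actually serve (assuming you can run commands in the Ollama container) is:

```shell
# List the models this Ollama instance has pulled;
# "gpt-4o-mini" will not appear here, since it is an OpenAI model, not an Ollama one
ollama list

# To tag image bookmarks locally, pull a multimodal model
# (llava is one example) and configure the image model to match
ollama pull llava
```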