Closed s1lverkin closed 6 days ago
I'm experiencing the same issue. It appears that chatgpt might have been hard-coded somewhere, since it's calling that model.
Are you by any chance still using the old setup with separate containers for web and workers? If yes, you'll want to add `INFERENCE_TEXT_MODEL` to the web container as well. And you should really consider moving away from the separated containers, as we're planning to deprecate them (check the release notes of version 0.16).
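For anyone on the old split setup, the change looks roughly like this: a minimal docker-compose sketch where the inference variables are duplicated on both containers. The service names (`web`, `workers`) and the Ollama address are assumptions for illustration; use the names and values from your own compose file.

```yaml
# Illustrative sketch only — service names and addresses are assumed.
services:
  web:
    image: ghcr.io/hoarder-app/hoarder-web:release
    environment:
      # Previously only set on the workers container; without it the
      # web container falls back to the default model (gpt-4o-mini).
      - OLLAMA_BASE_URL=http://ollama:11434
      - INFERENCE_TEXT_MODEL=llama3.1        # model you actually pulled in Ollama
  workers:
    image: ghcr.io/hoarder-app/hoarder-workers:release
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - INFERENCE_TEXT_MODEL=llama3.1
```

After editing, recreate the containers so the new environment takes effect (or, per the comment above, migrate to the single-container setup from the 0.16 release notes).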
Yeah, I am on Unraid, just updated it through the GUI...
It worked! Thank you so much for such an amazing tool!
That fixed me too! Thanks so much!
Describe the Bug
I am not able to get this working with Ollama; it keeps asking for gpt-4o-mini.
```
ResponseError: model "gpt-4o-mini" not found, try pulling it first
    at k (/app/apps/web/.next/server/chunks/440.js:7:99328)
    ... 8 lines matching cause stack trace ...
    at async a (/app/apps/web/.next/server/chunks/440.js:4:32960) {
  code: 'INTERNAL_SERVER_ERROR',
  name: 'TRPCError',
  [cause]: N [ResponseError]: model "gpt-4o-mini" not found, try pulling it first
      at k (/app/apps/web/.next/server/chunks/440.js:7:99328)
      at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
      at async O (/app/apps/web/.next/server/chunks/440.js:7:100059)
      at async z.processStreamableRequest (/app/apps/web/.next/server/chunks/440.js:7:101491)
      at async Q.runModel (/app/apps/web/.next/server/chunks/6815.js:1:14222)
      at async Q.inferFromText (/app/apps/web/.next/server/chunks/6815.js:1:14774)
      at async /app/apps/web/.next/server/chunks/6815.js:7:183
      at async h.middlewares (/app/apps/web/.next/server/chunks/440.js:4:33566)
      at async a (/app/apps/web/.next/server/chunks/440.js:4:32960)
      at async a (/app/apps/web/.next/server/chunks/440.js:4:32960) {
    error: 'model "gpt-4o-mini" not found, try pulling it first',
    status_code: 404
  }
}
```
Steps to Reproduce
Expected Behaviour
AI Summary
Screenshots or Additional Context
No response
Device Details
No response
Exact Hoarder Version
0.19.0