hoarder-app / hoarder

A self-hostable bookmark-everything app (links, notes and images) with AI-based automatic tagging and full text search
https://hoarder.app
GNU Affero General Public License v3.0

Summarize with AI not working on OLLAMA #660

Closed · s1lverkin closed this 6 days ago

s1lverkin commented 6 days ago

Describe the Bug

I am not able to get this working with Ollama; it always keeps asking for gpt-4o-mini.

ResponseError: model "gpt-4o-mini" not found, try pulling it first
    at k (/app/apps/web/.next/server/chunks/440.js:7:99328)
    ... 8 lines matching cause stack trace ...
    at async a (/app/apps/web/.next/server/chunks/440.js:4:32960) {
  code: 'INTERNAL_SERVER_ERROR',
  name: 'TRPCError',
  [cause]: N [ResponseError]: model "gpt-4o-mini" not found, try pulling it first
      at k (/app/apps/web/.next/server/chunks/440.js:7:99328)
      at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
      at async O (/app/apps/web/.next/server/chunks/440.js:7:100059)
      at async z.processStreamableRequest (/app/apps/web/.next/server/chunks/440.js:7:101491)
      at async Q.runModel (/app/apps/web/.next/server/chunks/6815.js:1:14222)
      at async Q.inferFromText (/app/apps/web/.next/server/chunks/6815.js:1:14774)
      at async /app/apps/web/.next/server/chunks/6815.js:7:183
      at async h.middlewares (/app/apps/web/.next/server/chunks/440.js:4:33566)
      at async a (/app/apps/web/.next/server/chunks/440.js:4:32960)
      at async a (/app/apps/web/.next/server/chunks/440.js:4:32960) {
    error: 'model "gpt-4o-mini" not found, try pulling it first',
    status_code: 404
  }
}

Steps to Reproduce

  1. Have only Ollama set up.
  2. Click on "Summarize with AI".
  3. The error appears.

Expected Behaviour

AI Summary

Screenshots or Additional Context

No response

Device Details

No response

Exact Hoarder Version

0.19.0

roadkingvrod commented 6 days ago

I'm experiencing the same issue. It looks like gpt-4o-mini may be hard-coded somewhere, since that's the model being requested.

MohamedBassem commented 6 days ago

Are you by any chance still using the old setup with separate containers for web and workers? If so, you'll want to add 'INFERENCE_TEXT_MODEL' to the web container as well. And you should really consider moving away from the separated containers, as we're planning to deprecate them (check the release notes for version 0.16).
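For anyone hitting the same thing, here is a minimal docker-compose sketch of that fix, assuming the legacy split-container layout. The service names, image tags, model name, and Ollama URL are illustrative, not taken from this thread; the actual point is only that INFERENCE_TEXT_MODEL must be set on the web container too, not just on workers.

    services:
      web:
        image: ghcr.io/hoarder-app/hoarder-web:0.19.0   # illustrative image/tag
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434          # illustrative Ollama endpoint
          # Without this line, the web container falls back to the default model
          # (gpt-4o-mini in this report) and Ollama returns the 404 above.
          - INFERENCE_TEXT_MODEL=llama3.1                 # illustrative model; match the workers container
      workers:
        image: ghcr.io/hoarder-app/hoarder-workers:0.19.0 # illustrative image/tag
        environment:
          - OLLAMA_BASE_URL=http://ollama:11434
          - INFERENCE_TEXT_MODEL=llama3.1

On the single-container setup that replaces this layout, the same variables only need to be defined once.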

s1lverkin commented 6 days ago

Yeah, I am on Unraid and just updated it through the GUI...

It worked! Thank you so much for such an amazing tool!

roadkingvrod commented 6 days ago

That fixed it for me too! Thanks so much!