phoniclynx closed this issue 1 month ago
Did you correctly set up all the required env variables, without any typos in their names?
HOARDER_VERSION=release
NEXTAUTH_SECRET=Ztp2XdD....wE5WE9r....icV
MEILI_MASTER_KEY=Nyk4l6....3Pki9....xvM
NEXTAUTH_URL=http://192.168.1.5:3000
Not an expert in Portainer, but I found this: https://www.portainer.io/blog/using-env-files-in-stacks-with-portainer
Did you rename the env file in docker compose to be stack.env?
Yes, the above translates to this:
No, I mean in the stack definition itself. You'll find references to '.env'; you should probably change those to 'stack.env' if you haven't already.
Oh, you have to remove those, otherwise it won't build... I had it working and then tried to add the OpenAI key, so I deleted the stack and recreated it, not knowing I could have just added it. I deleted the volumes and everything, cleared the browser caches, etc., and now I only get that error.
If I understand correctly, you shouldn't remove them but instead rename them to "stack.env".
So the fix was to change:
env_file:
- .env
to
env_file:
- stack.env
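For anyone else hitting this, the relevant part of the stack definition ends up looking roughly like this. This is a minimal sketch; the service name and image tag are illustrative and may differ in your compose file:

services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    restart: unless-stopped
    env_file:
      - stack.env
    ports:
      - 3000:3000

The only change from the stock compose file is the env_file entry pointing at stack.env instead of .env.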
Yes, that's what I'm suggesting. Did it work?
It worked... but now I can't seem to get OpenAI to work. It doesn't seem to hit my OpenAI account at all; the key shows as "Never" accessed on the OpenAI API page.
I assume the key is correct? Can you share the error logs from the container?
I don't even think it's trying?
✓ Starting...
✓ Ready in 159ms
Corepack is about to download https://registry.npmjs.org/pnpm/-/pnpm-9.0.0-alpha.8.tgz.
> @hoarder/workers@0.1.0 start:prod /app/apps/workers
> tsx index.ts
2024-10-04T08:10:35.601Z info: Workers version: 0.17.1
2024-10-04T08:10:35.608Z info: [Crawler] Connecting to existing browser instance: http://chrome:9222
(node:121) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
2024-10-04T08:10:35.617Z info: [Crawler] Successfully resolved IP address, new address: http://172.18.0.4:9222/
2024-10-04T08:10:39.691Z info: Starting crawler worker ...
2024-10-04T08:10:39.692Z info: Starting inference worker ...
2024-10-04T08:10:39.692Z info: Starting search indexing worker ...
2024-10-04T08:14:01.059Z info: [Crawler][1] Will crawl "https://tteck.github.io/Proxmox/" for link with id "y7tv5hn2q8w6uh4kyjz1uwsn"
2024-10-04T08:14:01.059Z info: [Crawler][1] Attempting to determine the content-type for the url https://tteck.github.io/Proxmox/
[meilisearch] Updating desired filterable attributes to id,userId from
[meilisearch] Updating desired sortable attributes to createdAt from
2024-10-04T08:14:01.267Z info: [search][2] Attempting to index bookmark with id y7tv5hn2q8w6uh4kyjz1uwsn ...
2024-10-04T08:14:01.321Z info: [Crawler][1] Content-type for the url https://tteck.github.io/Proxmox/ is "text/html; charset=utf-8"
2024-10-04T08:14:01.332Z info: [search][2] Completed successfully
2024-10-04T08:14:02.603Z info: [Crawler][1] Successfully navigated to "https://tteck.github.io/Proxmox/". Waiting for the page to load ...
2024-10-04T08:14:04.647Z info: [Crawler][1] Finished waiting for the page to load.
2024-10-04T08:14:04.713Z info: [Crawler][1] Finished capturing page content and a screenshot. FullPageScreenshot: false
2024-10-04T08:14:04.718Z info: [Crawler][1] Will attempt to extract metadata from page ...
2024-10-04T08:14:04.929Z info: [Crawler][1] Will attempt to extract readable content ...
2024-10-04T08:14:05.076Z info: [Crawler][1] Done extracting readable content.
2024-10-04T08:14:05.115Z info: [Crawler][1] Stored the screenshot as assetId: 1c629f89-6d67-4d72-8758-cc8501bab6d2
2024-10-04T08:14:05.436Z info: [Crawler][1] Done extracting metadata from the page.
2024-10-04T08:14:05.473Z info: [Crawler][1] Completed successfully
2024-10-04T08:14:06.121Z debug: [inference][3] No inference client configured, nothing to do now
2024-10-04T08:14:06.121Z info: [inference][3] Completed successfully
2024-10-04T08:14:06.361Z info: [search][4] Attempting to index bookmark with id y7tv5hn2q8w6uh4kyjz1uwsn ...
2024-10-04T08:14:06.424Z info: [search][4] Completed successfully
2024-10-04T08:14:18.447Z info: [search][6] Attempting to index bookmark with id svp5jqofzfistjdd4nxi162x ...
2024-10-04T08:14:18.504Z info: [Crawler][5] Will crawl "https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html" for link with id "svp5jqofzfistjdd4nxi162x"
2024-10-04T08:14:18.504Z info: [Crawler][5] Attempting to determine the content-type for the url https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html
2024-10-04T08:14:18.509Z info: [search][6] Completed successfully
2024-10-04T08:14:18.775Z info: [Crawler][5] Content-type for the url https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html is "text/html; charset=UTF-8"
2024-10-04T08:14:19.728Z info: [Crawler][5] Successfully navigated to "https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html". Waiting for the page to load ...
2024-10-04T08:14:22.734Z info: [Crawler][5] Finished waiting for the page to load.
2024-10-04T08:14:22.823Z info: [Crawler][5] Finished capturing page content and a screenshot. FullPageScreenshot: false
2024-10-04T08:14:22.828Z info: [Crawler][5] Will attempt to extract metadata from page ...
2024-10-04T08:14:24.189Z info: [Crawler][5] Will attempt to extract readable content ...
2024-10-04T08:14:25.204Z info: [Crawler][5] Done extracting readable content.
2024-10-04T08:14:25.209Z info: [Crawler][5] Stored the screenshot as assetId: 66b95c1d-8251-490b-880f-74129d1797a9
2024-10-04T08:14:25.314Z info: [Crawler][5] Done extracting metadata from the page.
2024-10-04T08:14:25.314Z info: [Crawler][5] Downloading image from "https://www.derekseaman.com/wp-content/uploads/proxmox-logo2.jpg"
2024-10-04T08:14:26.143Z info: [Crawler][5] Downloaded image as assetId: a0ea5f70-b203-4167-ae57-22699fbb425d
2024-10-04T08:14:26.184Z info: [Crawler][5] Completed successfully
2024-10-04T08:14:26.216Z debug: [inference][7] No inference client configured, nothing to do now
2024-10-04T08:14:26.216Z info: [inference][7] Completed successfully
2024-10-04T08:14:26.238Z info: [search][8] Attempting to index bookmark with id svp5jqofzfistjdd4nxi162x ...
2024-10-04T08:14:26.304Z info: [search][8] Completed successfully
2024-10-04T08:15:05.273Z debug: [inference][9] No inference client configured, nothing to do now
2024-10-04T08:15:05.273Z info: [inference][9] Completed successfully
2024-10-04T08:15:05.347Z info: [search][10] Attempting to index bookmark with id voaxsubvby0p0118tcetbqga ...
2024-10-04T08:15:05.410Z info: [search][10] Completed successfully
2024-10-04T08:15:12.431Z info: [search][11] Attempting to index bookmark with id voaxsubvby0p0118tcetbqga ...
2024-10-04T08:15:12.495Z info: [search][11] Completed successfully
2024-10-04T08:15:32.308Z debug: [inference][12] No inference client configured, nothing to do now
2024-10-04T08:15:32.308Z info: [inference][12] Completed successfully
2024-10-04T08:15:32.538Z info: [search][13] Attempting to index bookmark with id zhe353988fdrjz9ogfjzfxgq ...
2024-10-04T08:15:32.599Z info: [search][13] Completed successfully
2024-10-04T08:17:36.499Z info: [Crawler][14] Will crawl "https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html" for link with id "svp5jqofzfistjdd4nxi162x"
2024-10-04T08:17:36.499Z info: [Crawler][14] Attempting to determine the content-type for the url https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html
2024-10-04T08:17:36.619Z info: [Crawler][14] Content-type for the url https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html is "text/html; charset=UTF-8"
2024-10-04T08:17:37.559Z info: [Crawler][14] Successfully navigated to "https://www.derekseaman.com/2023/10/home-assistant-proxmox-ve-8-0-quick-start-guide-2.html". Waiting for the page to load ...
2024-10-04T08:17:40.538Z info: [Crawler][14] Finished waiting for the page to load.
2024-10-04T08:17:40.630Z info: [Crawler][14] Finished capturing page content and a screenshot. FullPageScreenshot: false
2024-10-04T08:17:40.640Z info: [Crawler][14] Will attempt to extract metadata from page ...
2024-10-04T08:17:41.976Z info: [Crawler][14] Will attempt to extract readable content ...
2024-10-04T08:17:43.018Z info: [Crawler][14] Done extracting readable content.
2024-10-04T08:17:43.023Z info: [Crawler][14] Stored the screenshot as assetId: 46b9fb89-f755-4665-b827-a179da496f80
2024-10-04T08:17:43.127Z info: [Crawler][14] Done extracting metadata from the page.
2024-10-04T08:17:43.127Z info: [Crawler][14] Downloading image from "https://www.derekseaman.com/wp-content/uploads/proxmox-logo2.jpg"
2024-10-04T08:17:43.954Z info: [Crawler][14] Downloaded image as assetId: c26b8a7a-6bad-47a8-9748-a9c05976d290
2024-10-04T08:17:44.007Z info: [Crawler][14] Completed successfully
2024-10-04T08:17:44.024Z info: [Crawler][15] Will crawl "https://tteck.github.io/Proxmox/" for link with id "y7tv5hn2q8w6uh4kyjz1uwsn"
2024-10-04T08:17:44.024Z info: [Crawler][15] Attempting to determine the content-type for the url https://tteck.github.io/Proxmox/
2024-10-04T08:17:44.031Z info: [search][17] Attempting to index bookmark with id svp5jqofzfistjdd4nxi162x ...
2024-10-04T08:17:44.041Z debug: [inference][16] No inference client configured, nothing to do now
2024-10-04T08:17:44.041Z info: [inference][16] Completed successfully
2024-10-04T08:17:44.112Z info: [search][17] Completed successfully
2024-10-04T08:17:44.357Z info: [Crawler][15] Content-type for the url https://tteck.github.io/Proxmox/ is "text/html; charset=utf-8"
2024-10-04T08:17:45.439Z info: [Crawler][15] Successfully navigated to "https://tteck.github.io/Proxmox/". Waiting for the page to load ...
2024-10-04T08:17:47.443Z info: [Crawler][15] Finished waiting for the page to load.
2024-10-04T08:17:47.502Z info: [Crawler][15] Finished capturing page content and a screenshot. FullPageScreenshot: false
2024-10-04T08:17:47.507Z info: [Crawler][15] Will attempt to extract metadata from page ...
2024-10-04T08:17:47.564Z info: [Crawler][15] Will attempt to extract readable content ...
2024-10-04T08:17:47.620Z info: [Crawler][15] Done extracting readable content.
2024-10-04T08:17:47.626Z info: [Crawler][15] Stored the screenshot as assetId: edabe378-ecdc-4b95-8a6b-b31031342936
2024-10-04T08:17:47.645Z info: [Crawler][15] Done extracting metadata from the page.
2024-10-04T08:17:47.678Z info: [Crawler][15] Completed successfully
2024-10-04T08:17:48.069Z debug: [inference][18] No inference client configured, nothing to do now
2024-10-04T08:17:48.069Z info: [inference][18] Completed successfully
2024-10-04T08:17:48.129Z info: [search][19] Attempting to index bookmark with id y7tv5hn2q8w6uh4kyjz1uwsn ...
2024-10-04T08:17:48.191Z info: [search][19] Completed successfully
This means that you didn't set your OpenAI API key in the env variables.
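In other words, something like the following needs to go into the same stack.env. OPENAI_API_KEY is the variable name from the Hoarder docs; the value below is just a placeholder:

OPENAI_API_KEY=sk-YOUR-KEY-HERE

After updating the stack's env, redeploy the stack so the workers container picks up the new variable.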
OK, so I got it working in the sense that it now tells me my quota has been exceeded. However, there are funds in the account, $15 in total. Not sure if I needed to set something else up?
2024-10-04T08:42:01.288Z info: [inference][48] Starting an inference job for bookmark with id "y7tv5hn2q8w6uh4kyjz1uwsn"
2024-10-04T08:42:03.256Z error: [inference][48] inference job failed: Error: 429 You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.
Error: 429 You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.
at APIError.generate (/app/apps/workers/node_modules/.pnpm/openai@4.29.0/node_modules/openai/error.js:62:20)
at OpenAI.makeStatusError (/app/apps/workers/node_modules/.pnpm/openai@4.29.0/node_modules/openai/core.js:263:33)
at OpenAI.makeRequest (/app/apps/workers/node_modules/.pnpm/openai@4.29.0/node_modules/openai/core.js:306:30)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async OpenAIInferenceClient.inferFromText (/app/apps/workers/inference.ts:2:1934)
at async inferTagsFromText (/app/apps/workers/openaiWorker.ts:32:158)
at async inferTags (/app/apps/workers/openaiWorker.ts:32:375)
at async Object.runOpenAI [as run] (/app/apps/workers/openaiWorker.ts:32:3805)
at async Runner.runOnce (/app/apps/workers/node_modules/.pnpm/@hoarder+queue@file+packages+queue/node_modules/@hoarder/queue/runner.ts:2:2567)
Are you using a project API key that points to a project with no quota?
I had the same problem initially and learned that I needed to use the legacy user keys instead.
It just started working out of the blue. I didn't touch anything.
Seems resolved then.
The solution worked for @phoniclynx because he likely wasn't deploying via a repository. I just want to share the solution that worked for me (deploying via GitHub on Portainer) in case someone finds this through a search.
Add the variables to the environment section of the Docker Compose file and pass them in through the Portainer UI.
My compose: https://github.com/fpatrick/compose/blob/main/hoarder/docker-compose.yml
In the Portainer UI, set:
HOARDER_VERSION=release
NEXTAUTH_SECRET=YOURLONGSTRING
MEILI_MASTER_KEY=YOURANOTHERLONGSTRING
NEXTAUTH_URL=http://YOURURL:PORT
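In this setup the compose file declares the variables under environment and Portainer supplies the values. A rough sketch of that shape, assuming an illustrative service name and image; the actual layout is in the compose file linked above:

services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    environment:
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
      NEXTAUTH_URL: ${NEXTAUTH_URL}
      MEILI_MASTER_KEY: ${MEILI_MASTER_KEY}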
I'm using Portainer to install Hoarder, but for some reason it isn't picking up the environment variables.