asylumexp / Proxmox

Proxmox VE Helper-Scripts - Ported to ARM64
https://pimox-scripts.com
MIT License
34 stars · 1 fork

System freezing for Overseerr and FlowiseAI #60

Closed. SpartanTech closed this issue 3 months ago.

SpartanTech commented 3 months ago

Please verify that you have read and understood the guidelines.

Yes

A clear and concise description of the issue.

Hi guys,

I successfully installed AdSense, another DNS program, and a VM as well. I'm running a Pimox 8.1 setup on a Raspberry Pi with 4 GB of RAM.

When I try to install Overseerr or the AI LXC, both freeze my entire system at the final install step. I've also made sure to stop any running containers in case it's a memory issue.

I have a feeling it's a memory or CPU issue. Are there logs somewhere I can provide here to help figure out what's going on?

It freezes on the "Installing Overseerr" or "Installing FlowiseAI" step, which shows a little spinner. The spinner gets slower and slower, the whole system slows down, and then only a hard reset fixes it.

When I come back to the system, the containers have been created, but the install never finished. I can't log into the containers to get logs because the process is automated and I don't know the login.

I can provide whatever logs are necessary; I'd personally like to figure this out as well, out of curiosity.

I love your site and your help with the Pimox community, by the way!

What settings are you currently utilizing?

Default Settings

Which Linux distribution are you employing?

Debian 12

If relevant, including screenshots or a code block can be helpful in clarifying the issue.

No response

Please provide detailed steps to reproduce the issue.

Run either the Overseerr LXC or the FlowiseAI LXC on my machine with default settings. Complete system freeze on the last install step.

asylumexp commented 3 months ago

Yes, that's certainly a RAM issue; you could try allocating more swap. You won't be able to install Flowise with only 4 GB of RAM, as that's usually what gets allocated to it alone. Overseerr you could probably manage by stopping the other LXCs during the install and allocating more swap. There aren't any logs for what you're asking, but you can watch memory usage with top over SSH or in the Proxmox panel during the install.
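For reference, a minimal sketch of adding swap on the host. The 4 GB size and the /swapfile path are just examples; if your image already manages swap with dphys-swapfile or zram, adjust that tool instead:

```bash
# Create and enable a 4 GB swap file on the Pimox host
# (size and path are examples; adjust to your storage layout).
fallocate -l 4G /swapfile      # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab

# Verify
swapon --show
free -h
```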

asylumexp commented 3 months ago

Oops, just read what you wrote again: you can enter those LXCs using `pct enter <id>` in the host terminal.
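For example (the container ID 105 below is hypothetical; `pct list` shows the real IDs on your host):

```bash
# On the Proxmox host: list containers and their IDs
pct list

# Get a root shell inside container 105 (hypothetical ID)
pct enter 105

# Inside the container you can then check memory and logs, e.g.
free -h
journalctl -xe
```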

SpartanTech commented 3 months ago

Thanks for the `pct enter` info. Didn't know that; it has been an issue for me before.

I doubled my swap from 2 GB to 4 GB (it's a 4 GB RPi), and now it doesn't halt the entire system, but enabling verbose mode during the install gives me this:

```
[4/4] Building fresh packages...
$ husky install
husky - Git hooks installed
Done in 390.33s.
yarn run v1.22.22
$ yarn build:next && yarn build:server
$ next build
warn  - You have enabled experimental features (scrollRestoration, largePageDataBytes) in next.config.js.
warn  - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.

Attention: Next.js now collects completely anonymous telemetry regarding usage.
This information is used to shape Next.js' roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://nextjs.org/telemetry

warn  - The Next.js plugin was not detected in your ESLint configuration. See https://nextjs.org/docs/basic-features/eslint#migrating-existing-config
info  - Linting and checking validity of types
info  - Disabled SWC as replacement for Babel because of custom Babel configuration "babel.config.js" https://nextjs.org/docs/messages/swc-disabled
Browserslist: caniuse-lite is outdated. Please run:
  npx browserslist@latest --update-db
  Why you should do it regularly: https://github.com/browserslist/browserslist#browsers-data-updating
info  - Using external babel configuration from /opt/overseerr/babel.config.js
info  - Creating an optimized production build ..

<--- Last few GCs --->

[11231:0x40a9acf0]   521099 ms: Mark-Compact (reduce) 508.6 (522.0) -> 507.9 (522.5) MB, 1768.47 / 0.00 ms  (average mu = 0.279, current mu = 0.040) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----

 1: 0xb7d2ac node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/usr/bin/node]
 2: 0xeb284c v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
 3: 0xeb2a1c v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/usr/bin/node]
 4: 0x10ba61c  [/usr/bin/node]
 5: 0x10d0af4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
 6: 0x10a985c v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
 7: 0x10aa620 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
 8: 0x108a35c v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
 9: 0x149c094 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long, v8::internal::Isolate*) [/usr/bin/node]
10: 0x189ca84  [/usr/bin/node]
Aborted
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
[ERROR] in line 46: exit code 0: while executing command $STD yarn build

root@tower:~# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.7Gi       2.5Gi       641Mi        18Mi       658Mi       1.2Gi
Swap:          4.0Gi       690Mi       3.3Gi
root@tower:~#
```

A step in the right direction! Any advice? It seems like the build is running out of some other kind of memory space; 3 GB of swap was still free immediately after I ran the command. Hmm.

Research told me to use: `rm -rf node_modules`, `rm -rf .next`, `yarn cache clean`, `yarn install`, and `export NODE_OPTIONS="--max-old-space-size=4096"`.

But I'm unfamiliar with how to add that, since the install runs on the fly inside the container rather than on the host. I can't pause the script before the build and run those commands, can I? Unsure if this would even fix it.
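One hedged way to try those commands manually after the script fails, assuming the container still exists and the Overseerr sources are in /opt/overseerr as the log above suggests (the container ID 105 is hypothetical):

```bash
# On the Proxmox host: enter the (hypothetical) container 105
pct enter 105

# Inside the container: retry the Overseerr build with a larger Node.js heap
cd /opt/overseerr
export NODE_OPTIONS="--max-old-space-size=4096"  # let V8 use up to ~4 GB of heap
rm -rf node_modules .next                        # clean previous build artifacts
yarn cache clean
yarn install
yarn build
```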

Edit: Pending test. I increased the container memory and started the build again. It works!! Thank you for your help; I'm glad it was that simple. I added an extra 512 MB on top of the default container allocation and it worked!
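For anyone following along, container memory can be raised from the host with `pct set` (the ID 105 and the 2560 MB value below are placeholders; 2560 assumes a hypothetical 2048 MB default plus the extra 512 MB mentioned above):

```bash
# On the Proxmox host: check the container's current memory limit (MB)
pct config 105 | grep memory

# Raise it, e.g. a hypothetical 2048 MB default plus the extra 512 MB
pct set 105 -memory 2560

# Restart the container so the new limit reliably takes effect
pct reboot 105
```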

asylumexp commented 3 months ago

Which one were you installing? I'll update the script to increase the default amount of RAM.

SpartanTech commented 3 months ago

Overseerr required 512 MB of additional memory in the container for it to work flawlessly. FlowiseAI worked fine with the default settings. This is, of course, after I increased the swap on the host itself from 2 GB to 4 GB.

🙏