openhab / openhabian

openHABian - empowering the smart home, for Raspberry Pi and Debian systems
https://community.openhab.org/t/13379

Zram size based on RAM size? #1646

Closed mstormi closed 2 years ago

mstormi commented 2 years ago

See e.g. https://community.openhab.org/t/openhabian-restore-doesnt-work/131433/5

I have come close to those limits myself, too, and I am now also using larger sizes since I moved from an RPi 3 to a 4 with 2 GB.

I think we should make the zram sizes in ztab dependent on the hardware (RAM size). @ecdye wdyt? Maybe double them for the RPi 4?

ecdye commented 2 years ago

Can't just double it for the RPi 4 because there are 1 GB models floating around (no longer in production); I have a 1 GB model 4 myself. I'll look into dynamically adjusting zram sizes, but in the end there is only so much we can account for while still offering the best experience for everyone.

Eventually the user needs to tailor their zram to fit their use case, and I think the zram-config documentation outlines how to do that pretty well.
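
For readers landing here, a typical tailoring workflow with zram-config looks roughly like the following; the service name and ztab path are taken from the zram-config project layout, so check its README for the exact steps on your version.

```bash
# Stop zram-config first so the zram devices are synced and unmounted cleanly
sudo systemctl stop zram-config

# Adjust the mem_limit / disk_size columns for the affected entries
sudo nano /etc/ztab

# Bring the zram devices back up with the new sizes
sudo systemctl start zram-config
```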

JAMESBOWLER commented 2 years ago

As the issue is only related to restoring openHAB, my suggestion would be to disable zram during the restore, as Marcus suggested in his post.

mstormi commented 2 years ago

> Can't just double it for the RPi 4 because there are 1 GB models floating around (no longer in production)

OK, then determine the memory size instead and base it on that.
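
A minimal sketch of what that could look like in openhabian.bash; the threshold, size values, and the %LOG_MEM% / %LOG_DISK% template placeholders are illustrative assumptions, not the project's actual code.

```bash
# Sketch only: derive ztab sizes from total RAM.
# Threshold and sizes are assumptions, not openHABian's actual defaults.
totalmem_kb="$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo)"

if [ "${totalmem_kb}" -ge 1900000 ]; then   # roughly 2 GB or more
  log_mem_limit="400M"
  log_disk_size="1200M"
else                                        # 1 GB class boards
  log_mem_limit="200M"
  log_disk_size="600M"
fi

# Hypothetical placeholders; they would need to exist in a ztab template.
sed -e "s|%LOG_MEM%|${log_mem_limit}|"  \
    -e "s|%LOG_DISK%|${log_disk_size}|" \
    /etc/ztab.template > /etc/ztab
```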

mstormi commented 2 years ago

I believe we need to improve on the zram sizes and speed things up a little here: I just had a partial lockup of my own system, logs were no longer written, plus some issues with persisted item data (some items were graphable, others no longer). I believe writing to some of the persistence files failed, too.

No hints in the logs so far. The one important and matching observation I made was that zramctl showed a TOTAL on the log zram device equal to the configured mem_limit (200M), while disk usage was well below the disk_size limit (600M).
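
For reference, this is the kind of check being described: compare the TOTAL column of zramctl (the RAM actually consumed by compressed data) against the limits configured in the ztab. The column names are standard util-linux zramctl output; the ztab path is the one zram-config uses.

```bash
# DISKSIZE is the uncompressed size the filesystem sees,
# TOTAL is the RAM actually consumed by the compressed data.
zramctl --output NAME,DISKSIZE,DATA,COMPR,TOTAL,MOUNTPOINT

# Compare against the configured mem_limit / disk_size entries
grep -v '^#' /etc/ztab
```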

ecdye commented 2 years ago

Well, that means the compression could not compress it enough and the memory limit was hit, making the zram no longer writable. You should increase your mem_limit, not your disk_size; this is exactly the issue I mentioned in #1647 and why I don't want to increase the disk size any further.
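
To illustrate the distinction: in a zram-config ztab, mem_limit caps the RAM the compressed data may occupy, while disk_size is the uncompressed size the mounted filesystem reports. An example log entry with a raised mem_limit (column layout and values are illustrative and may differ between versions):

```
# type  alg      mem_limit  disk_size  target_dir  bind_dir   oldlog_dir
log     lzo-rle  400M       600M       /var/log    /log.bind  /opt/zram/oldlog
```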

mstormi commented 2 years ago

Yeah, I did, but anyone could be hit by that. It happened during normal operation, no special case such as a restore. We would now need to consider increasing that, too. And on disk_size, I still don't believe those 400% are a hard limit.

ecdye commented 2 years ago

That was the whole point of PR #1647: it addresses those concerns, somewhat conservatively, but since this is the first time it has happened, I think that should be fine as a first fix. If we find that many users have issues with it later, we can offer further options, or if the issues are few and far between, simply point the users to the docs to change it on their own. If you want to, review #1647 and we can at the very least get it merged.

We can't raise the mem_limit on all systems though, because on a 1 GB RPi (many users still use them) the mem_limit is already borderline in terms of memory usage and what is left for other processes.

mstormi commented 2 years ago

> We can't raise the mem_limit on all systems though, because on a 1 GB RPi

Agreed. Remember, it was me who determined that value in the very first place, and I saw you doubled it in #1647 for systems with 2+ GB. But in addition you should also increase disk_size in ztab_lm, as I commented in #1647.

mstormi commented 2 years ago

While at it, also remove setting -Xmx / -Xms via EXTRA_JAVA_OPTS in openhabian.bash if has_highmem(). That will make Java use dynamic values based on the physical memory size.
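
A rough sketch of the kind of change meant here, assuming the heap flags end up in EXTRA_JAVA_OPTS in /etc/default/openhab; the file path and the sed pattern are assumptions, not the actual openhabian.bash code.

```bash
# Sketch only: on high-memory systems drop the fixed -Xms/-Xmx heap flags
# so the JVM derives its defaults from physical RAM instead.
if has_highmem; then
  sed -E -i 's/-Xm[sx][0-9]+[mMgG]//g' /etc/default/openhab
fi
```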