openhab / openhabian

openHABian - empowering the smart home, for Raspberry Pi and Debian systems
https://community.openhab.org/t/13379
ISC License

InfluxDB/Grafana on zram #1131

Closed: mstormi closed this issue 3 years ago

mstormi commented 4 years ago

Change the package install to have the database files below /var/lib/openhab2/persistence.

I have no idea how much data that means; eventually adapt the max in ztab, or even cancel this feature? @holgerfriedrich are you willing to take it on?

holgerfriedrich commented 4 years ago

Not sure about the write pattern of influxdb and whether it makes sense to put it on ZRAM. I suppose the size of the influxdb data is highly dependent on personal use (my DB is ~200MB).

Technically, relocating is possible in influxdb.conf (don't forget to chown the new dir to the correct user). As a Linux admin, I do not like the idea of changing standard directories. And, this might be a personal opinion, I am quite concerned about losing/corrupting this data, so I would keep it on the normal filesystem.
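For reference, a relocation along those lines might look like the sketch below. The `[meta]`/`[data]` sections and the `wal-dir` key follow InfluxDB 1.x's influxdb.conf layout, but the target directory and the `ROOT` dry-run variable are illustrative assumptions, not openHABian code.

```shell
#!/bin/sh
# Hypothetical sketch of relocating InfluxDB's data below the openHAB
# persistence tree. ROOT is only there so the sketch can be dry-run
# against a scratch directory instead of the live system.
ROOT="${ROOT:-$(mktemp -d)}"
NEWDIR="$ROOT/var/lib/openhab2/persistence/influxdb"
CONF="$ROOT/etc/influxdb/influxdb.conf"

mkdir -p "$NEWDIR/meta" "$NEWDIR/data" "$NEWDIR/wal" "$(dirname "$CONF")"
cat > "$CONF" <<EOF
[meta]
  dir = "$NEWDIR/meta"
[data]
  dir = "$NEWDIR/data"
  wal-dir = "$NEWDIR/wal"
EOF
# don't forget to chown the new dir to the correct user (needs root):
# chown -R influxdb:influxdb "$NEWDIR"
echo "wrote $CONF"
```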

mstormi commented 4 years ago

I am quite concerned about losing/corrupting this data

If in doubt, better to lose Influx data than the whole SD card. But whether to run ZRAM at all is a different, in fact mostly unrelated, question; users can switch it on or off. But if it is on, Influx data should be on ZRAM, else we're putting the whole system at risk despite running ZRAM. My persistence data sums up to 73/11 MB raw/compressed. The ztab default setting is 500/150 and I'd guess the DB is well compressible, so I'd think we wouldn't even need to change that.
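For context, openHABian's zram setup is driven by the zram-config tool's /etc/ztab. A directory entry there looks roughly like the following; the column layout is taken from zram-config's documented format, and the values merely mirror the 500/150 figures quoted above, so check them against your own ztab:

```
# type  alg  mem_limit  disk_size  target_dir                     bind_dir
dir     lzo  150M       500M       /var/lib/openhab2/persistence  /persistence.bind
```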

Can you create a patch to make Influx use /var/lib/openhab/persistence, please?

holgerfriedrich commented 4 years ago

@mstormi can I be sure that the zram service is running when Grafana/InfluxDB start? If not ensured already, we need to take extra action, like adding After=zram.service, but probably not all users are on systemd yet...

mstormi commented 4 years ago

You need to ensure that by adding After=zram-config.service to /etc/systemd/system/influxdb.service (or whatever the service script is named). Yes, we can be sure all users are on systemd; Raspbian has been based on it from the start.

holgerfriedrich commented 4 years ago

@mstormi what about users running openhabian on a different system, https://www.openhab.org/docs/installation/openhabian.html#other-linux-systems-add-openhabian-just-like-any-other-software ?

mstormi commented 4 years ago

Those we officially support also use systemd; FWIW, any recent distro does. BTW, can you check out #1159 too, please, when you work on Influx? Do you have an x86/Ubuntu system available? If not, at least the Travis cloud provides one. (edit: just created #1160, so never mind.) You could also look at #1158, and let me know if you have a test Ubuntu.

holgerfriedrich commented 4 years ago

I am a bit concerned that patches to influxdb.service and grafana.service will be overwritten by the next deb package update. What about adding it to openhabian-zram/zram-config.service as a Before= option? (You could add it in your own repo, and I make sure that it is there when we install Grafana/InfluxDB; then we cover both the case where ZRAM is installed after Grafana/InfluxDB and the case of an existing install with ZRAM already in place.) WDYT?

mstormi commented 4 years ago

No, this kind of forward reference creates dependencies with ugly-to-solve problems for us. Usually modified .service files are not overwritten on upgrade. What you can do, if you're really concerned, is create an override.conf like we do in openhab.bash.
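A drop-in along those lines could be sketched as below. The unit name and zram-config.service come from the discussion above; the `ROOT` variable is only an assumption so the sketch can be dry-run outside a live system, and this is not the actual openhabian code.

```shell
#!/bin/sh
# Hypothetical sketch: add a systemd drop-in ordering influxdb.service
# after zram-config.service, instead of patching the packaged unit file.
# ROOT lets the sketch be dry-run against a scratch directory.
ROOT="${ROOT:-$(mktemp -d)}"
DROPIN="$ROOT/etc/systemd/system/influxdb.service.d/override.conf"

mkdir -p "$(dirname "$DROPIN")"
cat > "$DROPIN" <<'EOF'
[Unit]
After=zram-config.service
EOF
# on a live system, follow up with: systemctl daemon-reload
echo "wrote $DROPIN"
```

Because systemd merges drop-ins with the packaged unit file, a deb upgrade that replaces influxdb.service leaves the ordering in place.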

mstormi commented 4 years ago

Could we get this in by tomorrow? Would like to release.

holgerfriedrich commented 4 years ago

@mstormi This will not be finished in the next few days, sorry. Moving the directories (or better, symlinking) for new installs is not a big deal, but I have to think about existing installs and what happens if data dirs already exist in two different locations. I don't want to see anyone complaining about lost data after an InfluxDB reinstall via our tool.

Updated my local branch to current upstream. It could already be safe (mv will complain if the destination dir is not empty and break the install).
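A guard along those lines could be sketched as follows; `safe_move` and the demo paths are hypothetical names for illustration, not the actual openhabian code:

```shell
#!/bin/sh
# Hypothetical sketch: refuse to relocate InfluxDB data if the destination
# already contains files, so an existing install is never clobbered.
safe_move() {
    src="$1"; dest="$2"
    if [ -d "$dest" ] && [ -n "$(ls -A "$dest" 2>/dev/null)" ]; then
        echo "ERROR: $dest is not empty, refusing to move data" >&2
        return 1
    fi
    mkdir -p "$(dirname "$dest")" || return 1
    mv -T "$src" "$dest"     # GNU mv -T fails instead of nesting src inside dest
}

# demo against a scratch directory
tmp="$(mktemp -d)"
mkdir -p "$tmp/influxdb"
touch "$tmp/influxdb/wal"
safe_move "$tmp/influxdb" "$tmp/persistence/influxdb" && echo "moved"
```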

mstormi commented 4 years ago

We have a well-established practice of handling upgrades that I don't think we should deviate from here. If someone has an existing InfluxDB, it'll continue to work like that after updating openHABian because the Influx config remains unchanged; still no ZRAM used for his data, not even when he selects 02 - System Upgrade.

InfluxDB will only ever be placed on ZRAM when a user installs it for the first time, or in the knowledge that he's doing an explicit re-installation (then he purges manually first and also knows he needs to take care of data migration; that in turn we shouldn't touch and should leave to users). So it should be as simple as changing the data install dir only and not handling migrations.

It's also the right point in time, as we can announce the change in behavior as part of 1.6; doing so between releases will far more often be missed by users. Do you think you could add that part today, in time for the release?

mstormi commented 4 years ago

@holgerfriedrich any news?

mstormi commented 4 years ago

@holgerfriedrich any news?

davebuk commented 3 years ago

Hi @mstormi, I have just installed openHABian 1.6.3 on a new Pi4 2GB and had planned to use a USB SSD for the files. Reading up on the zram feature, zram seems the better way to go rather than the SSD, but I am unsure which route to take with reinstating/merging my current InfluxDB.

Is the current recommendation of adding the influx directories to the zram config the way to go, or is the option talked about here of adding influx to the persistence directory something that might happen soon?

mstormi commented 3 years ago

I don't know, @holgerfriedrich will you do it some day ?

davebuk commented 3 years ago

I guess you would like to have it that the influxdb database is automatically stored under /var/lib/openhab/persistence. With my very limited Linux knowledge I believe I could either:

  1. Leave openhabian running on zram, connect SSD, make/mount a directory and then adjust influxdb.conf to use those directories.
  2. Use the 'move to USB' option in openhabian and run it all from the SSD (except boot) and disable zram.
  3. Make/mount a directory under /var/lib/openhab/persistence say /var/lib/openhab/persistence/influxdb-zram and adjust the influxdb.conf to suit.

Separately, does Grafana need configuring as well for zram or can that stay in the current configuration?

mstormi commented 3 years ago

I guess you would like to have it that the influxdb database is automatically stored under /var/lib/openhab/persistence

Yes this is what this issue is about.

As to your questions, please address them in the forum, Github is not the right medium for that.

holgerfriedrich commented 3 years ago

Hey @mstormi, I am not really up to date on your progress with openHABian. Do we in the meantime have a sync mechanism to ZRAM which can ensure that we at least sync back to disk once a day? For me the Influx stuff is long-term storage, and I would not be happy to lose a few days. I have seen a few complaints in the KNX user forum about losing data due to power failure.

mstormi commented 3 years ago

Do we in the meantime have a sync mechanism to ZRAM

No. But having Influx writes to the SD card crash the whole box is definitely worse than losing some data (which, BTW, you can restore if you back up daily). When you have a shot at it, please also check #1448.

mstormi commented 3 years ago

@holgerfriedrich this one is also still assigned to you

mstormi commented 3 years ago

@holgerfriedrich any progress?

ecdye commented 3 years ago

If I am correct, it should be pretty simple. We know it already works well with the default openHAB 3 persistence of rrd4j, so in theory it should just be a matter of adding the directory like we do with the FIND3 install and figuring out the optimal parameters to make it work well.

mstormi commented 3 years ago

I think so too. As Holger doesn't, would you take it?

ecdye commented 3 years ago

Sure, it might take a week or two for me to find time for it but I can take this one on.