olifre closed this 10 months ago
You could change the defaults here: https://github.com/opensciencegrid/xrootd-monitoring-shoveler/blob/main/config/config.yaml#L43 and https://github.com/opensciencegrid/xrootd-monitoring-shoveler/blob/main/queue.go#L47
And also add to the `.goreleaser.yaml` under `contents`:

```yaml
- dst: /var/lib/xrootd-monitoring-shoveler
  type: dir
  file_info:
    mode: 0700
```
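For context, a fuller sketch of where such an entry would sit in `.goreleaser.yaml`; the surrounding keys are assumptions based on the general nFPM packaging schema, only the directory entry itself comes from this thread:

```yaml
# Hypothetical excerpt of a .goreleaser.yaml nfpm section -- the id and
# surrounding structure are assumptions, not copied from the repository.
nfpms:
  - id: xrootd-monitoring-shoveler
    formats:
      - rpm
      - deb
    contents:
      - dst: /var/lib/xrootd-monitoring-shoveler
        type: dir
        file_info:
          mode: 0700
```

With `type: dir`, nFPM marks the path as a directory owned by the package, so it is created on install and removed on uninstall rather than left behind as an unowned path.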
@djw8605 Thanks, will do that in a follow-up commit and add it to the PR :+1: .
I realize only now, looking at your YAML snippet, that the tool seems to expect to be executed as root, and the systemd unit also does not specify a user to run as.
Is this really expected? Since WLCG is asking whether we put this tool into production, and not even xrootd itself runs as root, I'd rather not run a monitoring tool as the root user.
For sure, this can go in a follow-up PR (or an issue, since my knowledge of Go is very limited...), but it seems running this as `root` is not really necessary.
You are absolutely right that the shoveler doesn't have to run as root. There is nothing that requires root in it. Most of the deployments of the shoveler have been in containers (for example, in kubernetes), so the root issue hasn't really come up. But yes, I would accept a pull request changing the user to something appropriate.
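One possible way to run the shoveler as a dedicated user is a systemd drop-in; note that the unit name and the `shoveler` account below are assumptions for illustration, not taken from the repository:

```ini
# Hypothetical drop-in, e.g.
# /etc/systemd/system/xrootd-monitoring-shoveler.service.d/user.conf
# The "shoveler" user/group is an assumption; the package would need to
# create it (for example via a sysusers.d fragment).
[Service]
User=shoveler
Group=shoveler
```

After adding such a drop-in, `systemctl daemon-reload` picks it up; the queue directory would of course also need to be owned by (or writable for) that user.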
Should the queue be in a subdirectory under /var/lib/xrootd-monitoring-collector? For example, /var/lib/xrootd-monitoring-collector/queue
> Should the queue be in a sub directory [...]
Sure, that is very reasonable! Especially in case other files will ever be added at a later point which may be stored persistently. I'll adapt the commits later on and re-push.
may I suggest to use /var/spool to store persistent data instead of /var/lib? it seems a more proper place
> may I suggest to use /var/spool to store persistent data instead of /var/lib? it seems a more proper place
You are correct, indeed: while both are persistent, for queue data /var/spool is a much better fit, since it is data meant to be processed later.
I'll edit the commits accordingly later on, thanks! :+1:
@kazeborja Done, it's now changed to /var/spool/xrootd-monitoring-shoveler/queue; this is definitely more correct :+1: .
@djw8605 Happy New Year to you :+1: .
Just pinging to check whether you had time to look at this PR? I keep getting reminders from GGUS/WLCG to roll this tool out every few days, and plan to open another PR (I already have the code developed) to implement "running as user" once this PR and my other one are in.
Thanks in advance!
All looks good to me. Any idea how an upgrade would go from the current version?
That's a good question. The way things are now, I would indeed expect anything still left in the queue at the old path before the upgrade to be abandoned, since the new version will only look at the new path.
Adding an explicit migration procedure is likely too heavy to cover this unlikely (one-time) case; maybe it would be sufficient to mention this in the release notes? Those who have a not-yet-submitted queue to carry over could either configure xrootd-monitoring-shoveler to use the old path manually, or migrate the queue and use the new default.
What do you think?
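If someone does want to carry a queue over by hand, the migration could look roughly like the sketch below. Both paths are assumptions for illustration: the old default (shown as a placeholder under /tmp) must be checked against the actual pre-upgrade configuration, and the service should be stopped before moving anything.

```shell
# Hypothetical migration sketch -- OLD_QUEUE is a placeholder, not the
# verified old default; check your actual config before running this.
OLD_QUEUE="${OLD_QUEUE:-/tmp/shoveler-queue}"
NEW_QUEUE="${NEW_QUEUE:-/var/spool/xrootd-monitoring-shoveler/queue}"

if [ -d "$OLD_QUEUE" ]; then
    # Ensure the new parent directory exists, then move the queue wholesale.
    mkdir -p "$(dirname "$NEW_QUEUE")"
    mv "$OLD_QUEUE" "$NEW_QUEUE"
    echo "migrated queue to $NEW_QUEUE"
else
    echo "no old queue at $OLD_QUEUE, nothing to migrate"
fi
```

Run with the service stopped, then start the upgraded version so it picks up the queue at the new default path.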
Indeed, I think it's probably enough to add it to the release notes (as if anyone reads that though).
@djw8605 Thanks for merging! In fact, I read the release notes of (almost) all packages which I run in production and which are not distro-maintained before upgrading them, but that's indeed not the usual site-admin approach, sadly.
/tmp is cleaned up regularly by systemd-tmpfiles, so persistent queues will be lost if the service is down for prolonged times. An example of this is https://github.com/indigo-dc/oidc-agent/issues/519. On some systems, /tmp is stored on a ramdisk, so it will also be lost upon reboot.

For persistent storage, /var/lib should be used according to the FHS, and it seems reasonable to use the name of the package as a subdirectory there to ensure correct namespacing / prevent collisions.

Ideally, this directory would be created (with correct SELinux contexts) by the RPM / DEB files; if someone can point me to the location of these, I can also create a PR for them.
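To sketch what owning such a directory could look like on the RPM side (the DEB equivalent would live in the package's maintainer files), here is a hypothetical spec fragment; the mode and ownership are assumptions, as is the path, which matches the /var/lib proposal of this comment:

```
# Hypothetical RPM spec fragment -- mode/owner are illustrative only.
%files
%dir %attr(0700, root, root) /var/lib/xrootd-monitoring-shoveler
```

Declaring the directory in `%files` makes the package own it, so it gets the packaged permissions (and, via the distribution's SELinux policy machinery, appropriate contexts) on install instead of being created ad hoc at runtime.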