dun / munge

MUNGE (MUNGE Uid 'N' Gid Emporium) is an authentication service for creating and validating user credentials.

Unauthorized credential for client UID=0 GID=0 #130

Closed ZXRobotum closed 1 year ago

ZXRobotum commented 1 year ago

Hello all,

Somehow I have a problem with MUNGE in conjunction with Slurm: I get the error message shown in the title, and I simply have no idea why.

It works fine with my "small" cluster, but not with the production cluster. I have to say that I am currently still using the MUNGE package from the Debian distribution...

The "munge.key" is the same everywhere and I do not get any error message when calling "munge -n | unmunge" or "ssh "munge -c0 -z0 -n" | unmunge. Only in connection with SLURM. According to SLURM, the connection is accepted, but a "feedback" is then no longer given....

Can anyone help me with this? Thanks in advance.....

Z. Matthias

dun commented 1 year ago

This error message indicates the MUNGE credential was encoded with a MUNGE_OPT_UID_RESTRICTION and/or MUNGE_OPT_GID_RESTRICTION option which allows the decoding of that credential to be restricted to a specific UID and/or GID. It implies the same MUNGE key was used on both the encoding and decoding nodes since credential decryption was successful.
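
For illustration only (this is a minimal sketch I am adding here, not code from this thread or from Slurm), the encoding side of such a restriction looks roughly like this with the libmunge C API; the uid value 64030 standing in for the slurm user is a made-up placeholder. Build with something like gcc encode_restricted.c -lmunge.

    /* Sketch: encode a credential that only a specific UID may decode.
     * The uid 64030 below is a placeholder for the slurm user's uid. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <munge.h>

    int main(void)
    {
        munge_ctx_t ctx = munge_ctx_create();
        if (ctx == NULL) {
            fprintf(stderr, "failed to create munge context\n");
            return EXIT_FAILURE;
        }

        /* Restrict decoding of this credential to one UID.  A client whose
         * effective UID differs (including root, unless munge is configured
         * to let root decode any credential) gets EMUNGE_CRED_UNAUTHORIZED. */
        uid_t slurm_uid = 64030;
        munge_err_t err = munge_ctx_set(ctx, MUNGE_OPT_UID_RESTRICTION, slurm_uid);
        if (err != EMUNGE_SUCCESS) {
            fprintf(stderr, "munge_ctx_set: %s\n", munge_strerror(err));
            munge_ctx_destroy(ctx);
            return EXIT_FAILURE;
        }

        char *cred = NULL;
        err = munge_encode(&cred, ctx, NULL, 0);    /* no payload needed */
        if (err != EMUNGE_SUCCESS) {
            fprintf(stderr, "munge_encode: %s\n", munge_strerror(err));
            munge_ctx_destroy(ctx);
            return EXIT_FAILURE;
        }

        printf("%s\n", cred);   /* pipe this to unmunge as different users to compare */
        free(cred);
        munge_ctx_destroy(ctx);
        return EXIT_SUCCESS;
    }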

This error message also notes the client process attempting to decode the credential is running as root (UID=0 and GID=0). My guess as to the problem here is that the SlurmUser (in slurm.conf, see here) differs between the node encoding the credential and the node decoding the credential. This is typically set to the slurm user (a non-privileged system account), but it looks like the node generating the above error message is running with SlurmUser set to root. If that's not the problem, you'll need to follow up with the Slurm community for an answer.

ZXRobotum commented 1 year ago

Thank you very much for your help. In slurm.conf I have these settings: SlurmUser=slurm and SlurmdUser=root

Well, I created the new munge.key with the following command, as shown on your page: sudo -u munge ${sbindir}/mungekey --verbose

On all my systems the UID and GID of the slurm and munge users are the same....

As I wrote before, my small test cluster works fine with the same settings, build steps, etc. Only the large cluster no longer works, and I cannot find the mistake.

dun commented 1 year ago

The munge.key appears to be fine since the credential is successfully decoded. But the credential has been encoded with a uid restriction for a non-root user (presumably the slurm user), and the process attempting to decode it is running as root (hence the authorization error).

The only advice I have to offer here is to double-check that the slurm account (with the same uid) exists on all nodes in your large cluster, that SlurmUser=slurm is set, and perhaps restart the slurm service on all nodes in case the configuration changed after the service was initially started.
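
One quick way to do that comparison (just a small helper I am sketching here, not anything from MUNGE or Slurm; from the shell, id slurm or getent passwd slurm gives the same information) is to print the uid/gid of the relevant accounts on every node and diff the output:

    /* Sketch: print "name:uid:gid" for the accounts discussed in this thread
     * so the output can be diffed across nodes.  Adjust the names as needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <pwd.h>

    int main(void)
    {
        const char *names[] = { "slurm", "munge" };
        for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
            struct passwd *pw = getpwnam(names[i]);
            if (pw == NULL)
                printf("%s: MISSING\n", names[i]);
            else
                printf("%s:%u:%u\n", names[i],
                       (unsigned) pw->pw_uid, (unsigned) pw->pw_gid);
        }
        return EXIT_SUCCESS;
    }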

tazend commented 1 year ago

Hi @ZXRobotum

just came across this error too in our production cluster and wondered why it happens. Do your production cluster and your test cluster run the same Slurm version?

Because in Slurm 22.05 (presumably) the Slurm devs added this check in the init function of their auth_munge plugin.

Now, whenever a job starts, a new slurmstepd process is spawned and this init function is called. If you have verbose mode on, you should see something like this every time a job step is launched on a node in your syslog:

slurmstepd[16784]: cred/munge: init: Munge credential signature plugin loaded

As the comments in their source code say, they only check whether munge is configured to allow the root user to decode any incoming credential. They create a pseudo credential encoded with a different uid and try to decode it as the root user. If that succeeds, you are shown an error message telling you to disable the munge setting that allows root to decode any credential, since only the SlurmUser should be allowed to decode the credentials from slurmctld.
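
My rough reading of that probe, sketched with the libmunge C API (an illustration of the idea, not Slurm's actual code): encode a credential restricted to some UID other than your own, then try to decode it yourself. EMUNGE_CRED_UNAUTHORIZED is the expected outcome; a successful decode would mean the current user (e.g. root) can decode credentials that were not meant for it.

    /* Sketch of the check described above (not Slurm's actual code):
     * encode a credential restricted to a different UID, then try to
     * decode it as the current user. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <munge.h>

    int main(void)
    {
        munge_ctx_t ctx = munge_ctx_create();
        if (ctx == NULL) {
            fprintf(stderr, "failed to create munge context\n");
            return EXIT_FAILURE;
        }

        /* Pick a UID guaranteed to differ from our own; geteuid() + 1 is
         * just an arbitrary "someone else" value for the restriction. */
        uid_t other_uid = geteuid() + 1;
        munge_err_t err = munge_ctx_set(ctx, MUNGE_OPT_UID_RESTRICTION, other_uid);
        if (err != EMUNGE_SUCCESS) {
            fprintf(stderr, "munge_ctx_set: %s\n", munge_strerror(err));
            munge_ctx_destroy(ctx);
            return EXIT_FAILURE;
        }

        char *cred = NULL;
        err = munge_encode(&cred, ctx, NULL, 0);
        if (err != EMUNGE_SUCCESS) {
            fprintf(stderr, "munge_encode: %s\n", munge_strerror(err));
            munge_ctx_destroy(ctx);
            return EXIT_FAILURE;
        }

        /* Try to decode our own pseudo credential. */
        err = munge_decode(cred, NULL, NULL, NULL, NULL, NULL);
        if (err == EMUNGE_SUCCESS)
            printf("WARNING: this user can decode credentials restricted to others\n");
        else
            printf("decode refused as expected: %s\n", munge_strerror(err));

        free(cred);
        munge_ctx_destroy(ctx);
        return EXIT_SUCCESS;
    }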

In short: the "Unauthorized credential for client UID=0 GID=0" message is just a byproduct of their safety check to see whether root is able to decode any credential, and it can, by my (hopefully correct) interpretation, be safely ignored.

dun commented 1 year ago

@tazend, thanks for looking into this! :star:

It's unfortunate their safety check is causing confusion. Ideally this would be documented in their installation FAQ.

ZXRobotum commented 1 year ago

@tazend, many thanks for this and for the ingenious bug hunting....

First of all, my test and CORE clusters are set up completely identically. The only difference between the two is that the test cluster consists of real machines and the CORE cluster of cloud instances.

Whether one can really ignore this is the question here, which I am still trying to answer through error analysis. These are the error messages I see from Slurm on the CORE system:

SlurmCTLD:
[2023-02-13T14:28:34.802] JobId=370421 nhosts:1 ncpus:1 node_req:64000 nodes=CompNode01
[2023-02-13T14:28:34.802] Node[0]:
[2023-02-13T14:28:34.802] Mem(MB):15998:0 Sockets:1 Cores:6 CPUs:6:0
[2023-02-13T14:28:34.802] Socket[0] Core[0] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[1] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[2] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[3] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[4] is allocated
[2023-02-13T14:28:34.802] Socket[0] Core[5] is allocated
[2023-02-13T14:28:34.802] --------------------
[2023-02-13T14:28:34.802] cpu_array_value[0]:6 reps:1
[2023-02-13T14:28:34.802] ====================
[2023-02-13T14:28:34.803] sched/backfill: _start_job: Started JobId=370421 in Artificial on CompNode01
[2023-02-13T14:28:34.910] _slurm_rpc_requeue: Requeue of JobId=370421 returned an error: Only batch jobs are accepted or processed
[2023-02-13T14:28:34.914] _slurm_rpc_kill_job: REQUEST_KILL_JOB JobId=370421 uid 0
[2023-02-13T14:28:34.915] job_signal: 9 of running JobId=370421 successful 0x8004
[2023-02-13T14:28:35.917] _slurm_rpc_complete_job_allocation: JobId=370421 error Job/step already completing or completed

SlurmD:
[370420.extern] fatal: Could not create domain socket: Operation not permitted
[2023-02-13T14:13:12.412] error: _forkexec_slurmstepd: slurmstepd failed to send return code got 0: Resource temporarily unavailable
[2023-02-13T14:13:12.417] Could not launch job 370420 and not able to requeue it, cancelling job

With this, the slurmd process aborts processing and reports back to the slurmctld that the job cannot be executed, and I find absolutely no explanation for it. On both sides, slurmctld and slurmd, I only see the "unauthorized credential for client ....." message. How did you solve the problem in the end, with this flag in MUNGE or rather in Slurm? Best regards from Berlin.....

Z. Matthias

tazend commented 1 year ago

Hi @ZXRobotum

We can continue the discussion in a separate issue if you want (just to not further hijack this issue for discussing perhaps unrelated Slurm errors).

tazend commented 1 year ago

@dun

Yeah, a mention in the documentation would be good. I opened a bug report for this (https://bugs.schedmd.com/show_bug.cgi?id=16035). Let's see what they say :)