IBM / ansible-power-aix

Developer contributions for Ansible Automation on Power
https://ibm.github.io/ansible-power-aix/
GNU General Public License v3.0

ibm.power_aix.mount mounts remote NFS successfully, but doesn't umount it. #417

Closed StevePOI closed 2 months ago

StevePOI commented 7 months ago

Describe the bug

I use ibm.power_aix.mount in roles in my playbooks; one task mounts a certain NFS share (for the EMGR, INSTALLP modules etc.):
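For illustration, a minimal version of that mount task looks roughly like this (a sketch only: the server name and paths are taken from my mount output below, and I assume state: mount is the counterpart of the state: umount shown in the verbose log further down):

```yaml
# Illustrative sketch only - names and paths are placeholders from this issue.
- name: Mount the software depot via NFS
  ibm.power_aix.mount:
    state: mount            # assumed counterpart of state: umount
    node: nfs-swdepot       # NFS server exporting the share
    mount_dir: /swdepot     # exported directory on the NFS server
    mount_over_dir: /mnt    # local mount point on the AIX client
```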


Today I noticed that the umount part is not working anymore; it gives this "OK" message, so everything looks green:

ok | msg: Filesystem/Mount point '/mnt' is not mounted

(Line https://github.com/IBM/ansible-power-aix/blob/dev-collection/plugins/modules/mount.py#L433 )

This behaviour must be relatively recent; it still worked in the last months of 2023, and I have been using this setup since 2021.

So I'm left with /mnt still mounted on many of my systems after each job run. Please advise :)

To Reproduce

Use the ibm.power_aix.mount module to unmount a remote NFS v3 mount.
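For example, a minimal unmount task (the parameter names and values match the module_args shown in the verbose output later in this issue):

```yaml
# Minimal reproduction sketch - mirrors the module_args from the verbose output.
- name: Unmount the temporary NFS mount again
  ibm.power_aix.mount:
    state: umount
    mount_over_dir: /mnt    # reports "is not mounted" although /mnt is still mounted
```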

Expected behavior

According to https://github.com/IBM/ansible-power-aix/blob/dev-collection/plugins/modules/mount.py#L456 it should be:

result['msg'] = "Unmount successful."

And /mnt should be gone.

Environment (please complete the following information):

StevePOI commented 7 months ago

Addendum:

It works on a machine where the NFS share is permanently mounted and listed in /etc/filesystems:

Umount NFS 'nfs-swdepot'... NIMSERVER_NAME done | msg: Unmount successful.

After running the playbook, /mnt is gone here, while the other servers, which only mount it temporarily via Ansible, report:

aix72test01t ok | msg: Filesystem/Mount point '/mnt' is not mounted

while it is still there:

[aix72test01t:/home/USER]# mount|grep /mnt
nfs-swdepot /swdepot /mnt nfs3 Feb 29 12:38

I'm confused; is this a check further up in the code?

StevePOI commented 7 months ago

Addendum: this works as expected, so it seems it's nothing specific to my AIX servers but rather a change in the ibm.power_aix.mount module:


StevePOI commented 6 months ago

Hello @nitismis !

Can I provide any additional information :) ?

Thanks,

With kind regards,

Steve

nitismis commented 6 months ago

@StevePOI ... I will prioritize this defect in our next sprint (starting Monday). You can provide the verbose output of the playbook run.

Thanks

StevePOI commented 6 months ago

@nitismis

Thanks for including it in the next sprint!

Is this verbose enough? (-vvvv; we always see those Kerberos messages):

`Escalation succeeded

(0, b'\r\n\r\n{"changed": false, "msg": "Filesystem/Mount point \'/mnt\' is not mounted", "cmd": "", "stdout": "", "stderr": "", "invocation": {"module_args": {"state": "umount", "mount_over_dir": "/mnt", "force": false, "removable_fs": false, "read_only": false, "mount_dir": null, "node": null, "mount_all": null, "alternate_fs": null, "fs_type": null, "vfsname": null, "options": null}}}\r\n', b"OpenSSH_9.2p1, OpenSSL 1.1.1v 1 Aug 2023\r\ndebug1: Reading configuration data /SCRIPTS/aau/.ssh/config\r\ndebug1: /SCRIPTS/aau/.ssh/config line 1: Applying options for *\r\ndebug1: using libz from /usr/opt/rpm/lib/libz.a(libz.so.1) \n\r\ndebug1: init_libz_ptrs success\r\ndebug1: Failed dlopen: /usr/krb5/lib/libkrb5.a(libkrb5.a.so): Could not load module /usr/krb5/lib/libkrb5.a(libkrb5.a.so).\nSystem error: No such file or directory\n\r\ndebug1: Error loading Kerberos, disabling Kerberos auth.\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/SCRIPTS/aau/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/SCRIPTS/aau/.ssh/known_hosts2'\r\ndebug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 5 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 13107706\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to notesmig01t closed.\r\n") ESTABLISH SSH CONNECTION FOR USER: None SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o BindAddress=aixinfra01p -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o 'ControlPath="/SCRIPTS/aau/.ansible/cp/948c094c44"' notesmig01t '/bin/sh -c '"'"'rm -f -r /home/aix_ansible_user/.ansible/tmp/ansible-tmp-1711100616.585834-16712124-117415238073420/ > /dev/null 2>&1 && sleep 0'"'"'' (0, b'', b"OpenSSH_9.2p1, OpenSSL 1.1.1v 1 Aug 2023\r\ndebug1: Reading configuration data /SCRIPTS/aau/.ssh/config\r\ndebug1: /SCRIPTS/aau/.ssh/config line 1: Applying options for *\r\ndebug1: using libz from /usr/opt/rpm/lib/libz.a(libz.so.1) \n\r\ndebug1: init_libz_ptrs success\r\ndebug1: Failed dlopen: /usr/krb5/lib/libkrb5.a(libkrb5.a.so): Could not load module /usr/krb5/lib/libkrb5.a(libkrb5.a.so).\nSystem error: No such file or directory\n\r\ndebug1: Error loading Kerberos, disabling Kerberos auth.\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/SCRIPTS/aau/.ssh/known_hosts'\r\ndebug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/SCRIPTS/aau/.ssh/known_hosts2'\r\ndebug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 5 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 13107706\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: 
mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n") notesmig01t ok: { "changed": false, "cmd": "", "invocation": { "module_args": { "alternate_fs": null, "force": false, "fs_type": null, "mount_all": null, "mount_dir": null, "mount_over_dir": "/mnt", "node": null, "options": null, "read_only": false, "removable_fs": false, "state": "umount", "vfsname": null } }, "msg": "Filesystem/Mount point '/mnt' is not mounted", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] } `
StevePOI commented 4 months ago

Hello @nitismis !

Don't want to be a nuisance, but do you have new information for me :) ?

Thanks,

With kind regards,

Steve

nitismis commented 4 months ago

Hi @StevePOI, recently we changed the command in the mount module to use "mount" instead of "df", because some mounts (autofs, for example) are not listed by df. The problem now is that in your case the output of mount is a bit different from what we see, possibly because of a version difference.

Your mount output:

[aix72test01t:/home/USER]# mount|grep /mnt
nfs-swdepot /swdepot /mnt nfs3 Feb 29 12:38

Ours:

mount|grep /mnt
/aix_fvt/gsa_export /mnt nfs Feb 07 04:09 soft,vers=2

Our line has an extra options column at the end (and no leading node column, while yours starts with one), so the whitespace-separated fields do not line up the same way, and in your case the module is not picking up the correct value. This should be handled by our module and we are working on it; we also need to have a discussion with the filesystem team.

StevePOI commented 4 months ago

> [aix72test01t:/home/USER]# mount|grep /mnt
> nfs-swdepot /swdepot /mnt nfs3 Feb 29 12:38
>
> Ours:
> mount|grep /mnt
> /aix_fvt/gsa_export /mnt nfs Feb 07 04:09 soft,vers=2

Hello @nitismis !

Could the difference here be that your mount seems to be a local FS/LV exported via NFS, while mine is a NAS filesystem mounted remotely on my AIX server?

Thanks,

With kind regards,

Stephan Dietl

nitismis commented 4 months ago

@StevePOI, no ... there are some other possibilities we are digging into, and we are trying to come up with a generic solution for all cases. It may take another week, but rest assured that this fix is going to be in our next major release at the end of June. And even before the release, when we push our fix to the repository, I would like you to help us verify it. I hope you would like to be involved!

Thanks, and regards, Nitish

StevePOI commented 4 months ago

Hello @nitismis !

> @StevePOI, no ... there are some other possibilities we are digging into, and we are trying to come up with a generic solution for all cases. It may take another week, but rest assured that this fix is going to be in our next major release at the end of June.

Good to hear, thanks!

> And even before the release, when we push our fix to the repository, I would like you to help us verify it. I hope you would like to be involved!

Yes, I'd be glad to help once you tell me how :) !

Thanks, with kind regards,

Steve

StevePOI commented 2 months ago

Hey @nitismis !

Sorry, did I miss my chance to test the fix :) ? Version 1.9.1 has been released, but according to the changelog it contains nothing NFS-related?

Thanks in advance for a quick update; I'm still willing to help with this fix.

With kind regards,

Steve

nitismis commented 2 months ago

Hi @StevePOI, sorry, we could not ship this one in our latest release. We had a last-minute change in priorities and could not thoroughly test the fix. Also, this defect has a dependency on the AIX filesystem team, and we have to collaborate with them. We have resumed our efforts on this one and will update you in the next couple of days.

nitismis commented 2 months ago

@StevePOI, can you please verify whether this fix works in your environment? https://github.com/IBM/ansible-power-aix/pull/557

StevePOI commented 2 months ago

Hello @nitismis !

Sorry, I was on vacation until today; I will test ASAP :) ! Thanks for your efforts!

With kind regards,

Steve

StevePOI commented 2 months ago

Hello @nitismis !

Can confirm that this fixes the issue:

[user@server:/home/user]# mount
  node           mounted          mounted over     vfs    date         options

     /dev/hd4         /                jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd2         /usr             jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd9var      /var             jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd3         /tmp             jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd1         /home            jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /dev/hd11admin   /admin           jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /proc            /proc            procfs Jun 07 05:09 rw              
     /dev/hd10opt     /opt             jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /dev/livedump    /var/adm/ras/livedump jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /dev/logs        /var/adm/logs    jfs2   Jun 07 05:09 rw,log=/dev/hd8 

nfs-14-swdepot /swdepot /mnt nfs3 Jul 29 12:25

server done: { "changed": true, "cmd": "/usr/sbin/umount /mnt", "invocation": { "module_args": { "alternate_fs": null, "force": false, "fs_type": null, "mount_all": null, "mount_dir": null, "mount_over_dir": "/mnt", "node": null, "options": null, "read_only": false, "removable_fs": false, "state": "umount", "vfsname": null } }, "msg": "Unmount successful.", "rc": 0, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": [] }

[user@server:/home/user]# mount
  node           mounted          mounted over     vfs    date         options

     /dev/hd4         /                jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd2         /usr             jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd9var      /var             jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd3         /tmp             jfs2   Jun 07 05:08 rw,log=/dev/hd8 
     /dev/hd1         /home            jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /dev/hd11admin   /admin           jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /proc            /proc            procfs Jun 07 05:09 rw              
     /dev/hd10opt     /opt             jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /dev/livedump    /var/adm/ras/livedump jfs2   Jun 07 05:09 rw,log=/dev/hd8 
     /dev/logs        /var/adm/logs    jfs2   Jun 07 05:09 rw,log=/dev/hd8 

[user@server:/home/user]#

Thanks a lot! If no other issues with this fix surface, I'd suggest adding it to the next release :) !

With kind regards,

Steve

nitismis commented 2 months ago

@StevePOI, thanks for the verification and sorry for the delay. I have merged the PR and am closing the issue.