Closed · StevePOI closed this 2 months ago
Addendum:
It works on a machine where the NFS share is permanently mounted and listed in /etc/filesystems:

`Umount NFS 'nfs-swdepot'... NIMSERVER_NAME done | msg: Unmount successful.`

After running the playbook, /mnt is gone there, while the other servers that only mount it temporarily via Ansible report:

`aix72test01t ok | msg: Filesystem/Mount point '/mnt' is not mounted`

while it is still mounted:

```
[aix72test01t:/home/USER]# mount|grep /mnt
nfs-swdepot /swdepot /mnt nfs3 Feb 29 12:38
```

I'm confused; is this a check further up in the code?
Addendum: this works as expected, so it seems it's nothing specific to my AIX servers but rather a change in the ibm.power_aix.mount module:
Hello @nitismis!
Can I provide any additional information? :)
Thanks,
With kind regards,
Steve
@StevePOI ... I will prioritize this defect in our next sprint (starting Monday). You can provide the verbose output of the playbook run.
Thanks
@nitismis
Thanks for including it in the next sprint!
Is this verbose enough (-vvvv; we always see those Kerberos messages)?

`Escalation succeeded`
Hello @nitismis!
Don't want to be a nuisance, but do you have new information for me? :)
Thanks,
With kind regards,
Steve
Hi @StevePOI, we recently changed the command in the mount module to use "mount" instead of "df", because some mounts, like autofs, are not listed by df. The problem now is that in your case the output of mount is a bit different from what it is for us, maybe because of a version difference.
Your mount output:

```
[aix72test01t:/home/USER]# mount|grep /mnt
nfs-swdepot /swdepot /mnt nfs3 Feb 29 12:38
```

Ours:

```
mount|grep /mnt
/aix_fvt/gsa_export /mnt nfs Feb 07 04:09 soft,vers=2
```

Ours has an extra options column, and because of that the module is not picking up the correct value in your case. This should have been handled by our module, and we are working on that; we need to have a discussion with the filesystem team.
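To make the failure mode concrete, here is a minimal sketch of the parsing pitfall (an illustration only, not the module's actual code; the function name and the exact field index are assumptions): a parser that expects the mount point at a fixed whitespace-separated field index reads a different column depending on which optional columns appear in the output.

```python
# Illustrative sketch only: how a fixed-index parse of AIX `mount` output
# can miss a mount point when the column layout varies between hosts.

def find_mount_point(mount_output: str, mount_over_dir: str) -> bool:
    """Hypothetical naive parser assuming the mount point is always field 2."""
    for line in mount_output.splitlines():
        fields = line.split()
        # A remote mount prepends a node column, shifting every field right
        # by one, so fields[1] is the mount point in one layout and the
        # remote export directory in the other.
        if len(fields) > 1 and fields[1] == mount_over_dir:
            return True
    return False

# Remote NFS mount with a node column and no options column (this report):
with_node = "nfs-swdepot /swdepot /mnt nfs3 Feb 29 12:38"
# Mount without a node column but with an options column (the developers'):
without_node = "/aix_fvt/gsa_export /mnt nfs Feb 07 04:09 soft,vers=2"

print(find_mount_point(with_node, "/mnt"))     # False: /mnt sits in field 3
print(find_mount_point(without_node, "/mnt"))  # True: /mnt sits in field 2
```

Under that assumption, the module would conclude /mnt is not mounted on hosts whose mount output includes the node column, which matches the "Filesystem/Mount point '/mnt' is not mounted" symptom reported above.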
Hello @nitismis!
Could the difference here be that your mount is a local FS/LV exported via NFS, while mine is a NAS filesystem mounted remotely on my AIX server?
Thanks,
With kind regards,
Stephan Dietl
@StevePOI, no ... there are some other possibilities which we are digging into, and we are trying to come up with a generic solution for all of them. It may take another week, but rest assured that this fix is going to be in our next major release at the end of June. And even before the release, when we push our fix to the repository, I would like you to help us verify the fix. I hope you would like to be involved!
Thanks, and regards, Nitish
Hello @nitismis!

> @StevePOI, no ... there are some other possibilities which we are digging into, and we are trying to come up with a generic solution for all of them. It may take another week, but rest assured that this fix is going to be in our next major release at the end of June.
Good to hear, thanks!
> And even before the release, when we push our fix to the repository, I would like you to help us verify the fix. I hope you would like to be involved!

Yes, I'd be glad to help if instructed how! :)
Thanks, with kind regards,
Steve
Hey @nitismis!
Sorry, did I miss my chance to test the fix? :) Version 1.9.1 has been released, but according to the changelog without anything NFS-related?
Thanks for a quick update, and I'm willing to help with this fix,
With kind regards,
Steve
Hi @StevePOI, sorry we could not ship this one in our latest release. We had a last-minute change in priority and could not thoroughly test the fix. Also, this defect has a dependency on the AIX filesystem team, and we have to collaborate with them. We have resumed our efforts on this one and will update you in the next couple of days.
@StevePOI, can you please verify whether this fix works in your environment? https://github.com/IBM/ansible-power-aix/pull/557
Hello @nitismis!
Sorry, I was on vacation until today; I will test ASAP! :) Thanks for your efforts!
With kind regards,
Steve
Hello @nitismis!
I can confirm that this fixes the issue:
```
[user@server:/home/user]# mount
node mounted mounted over vfs date options
/dev/hd4 / jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd2 /usr jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd9var /var jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd3 /tmp jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd1 /home jfs2 Jun 07 05:09 rw,log=/dev/hd8
/dev/hd11admin /admin jfs2 Jun 07 05:09 rw,log=/dev/hd8
/proc /proc procfs Jun 07 05:09 rw
/dev/hd10opt /opt jfs2 Jun 07 05:09 rw,log=/dev/hd8
/dev/livedump /var/adm/ras/livedump jfs2 Jun 07 05:09 rw,log=/dev/hd8
/dev/logs /var/adm/logs jfs2 Jun 07 05:09 rw,log=/dev/hd8
nfs-14-swdepot /swdepot /mnt nfs3 Jul 29 12:25

server done: {
    "changed": true,
    "cmd": "/usr/sbin/umount /mnt",
    "invocation": {"module_args": {"alternate_fs": null, "force": false,
        "fs_type": null, "mount_all": null, "mount_dir": null,
        "mount_over_dir": "/mnt", "node": null, "options": null,
        "read_only": false, "removable_fs": false, "state": "umount",
        "vfsname": null}},
    "msg": "Unmount successful.",
    "rc": 0,
    "stderr": "",
    "stderr_lines": [],
    "stdout": "",
    "stdout_lines": []
}

[user@server:/home/user]# mount
node mounted mounted over vfs date options
/dev/hd4 / jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd2 /usr jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd9var /var jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd3 /tmp jfs2 Jun 07 05:08 rw,log=/dev/hd8
/dev/hd1 /home jfs2 Jun 07 05:09 rw,log=/dev/hd8
/dev/hd11admin /admin jfs2 Jun 07 05:09 rw,log=/dev/hd8
/proc /proc procfs Jun 07 05:09 rw
/dev/hd10opt /opt jfs2 Jun 07 05:09 rw,log=/dev/hd8
/dev/livedump /var/adm/ras/livedump jfs2 Jun 07 05:09 rw,log=/dev/hd8
/dev/logs /var/adm/logs jfs2 Jun 07 05:09 rw,log=/dev/hd8
[user@server:/home/user]#
```
Thanks a lot! If no other issues with this fix surface, I'd suggest adding it to the next release. :)
With kind regards,
Steve
@StevePOI, thanks for the verification, and sorry for the delay. I have merged the PR; closing the issue.
Describe the bug
I use ibm.power_aix.mount in roles in my playbooks: one task mounts a certain NFS share (for the EMGR and INSTALLP modules, etc.):
```yaml
- name: Mount NFS '{{ NFSMOUNT }}'
  ibm.power_aix.mount:
    node: '{{ NFSMOUNT }}'
    mount_over_dir: /mnt
    mount_dir: /swdepot
```
And afterwards I unmount it again:
```yaml
- name: Umount NFS '{{ NFSMOUNT }}'
  ibm.power_aix.mount:
    state: umount
    mount_over_dir: /mnt
```
Today I noticed that the umount part no longer works; it gives this "OK" message, so everything is green:

`ok | msg: Filesystem/Mount point '/mnt' is not mounted`

(line https://github.com/IBM/ansible-power-aix/blob/dev-collection/plugins/modules/mount.py#L433)
This behaviour must be relatively recent: it still worked in the last months of 2023, and I have been using this setup since 2021.
So I'm left with many mounted /mnt filesystems on my systems after each job run; please advise. :)
To Reproduce
Use the ibm.power_aix.mount module to unmount a remote NFSv3 mount.
Expected behavior
According to https://github.com/IBM/ansible-power-aix/blob/dev-collection/plugins/modules/mount.py#L456 the message should be `Unmount successful.`, and /mnt should be gone.