bushvin / ansible-module-xfs_quota

This ansible module allows one to set xfs user, group and project quotas
GNU General Public License v3.0

"msg": "/ is not located on an xfs filesystem." on CentOS 7 #2

Closed nicomagliaro closed 5 years ago

nicomagliaro commented 5 years ago

I found a bug running this plugin in this environment: CentOS 7.5, Python 2.7, Ansible 2.6.

But the bug is within the Python code itself:

```
TASK [quotas : apply xfs quota] ***************************************************************************************************************************
Tuesday 04 September 2018 11:21:24 -0300 (0:00:00.108) 0:00:33.228 *****
failed: [xx.xx.xxx.xx] (item={u'mountpoint': u'/', u'name': u'user1', u'quota': u'200G'}) => {"changed": false, "item": {"mountpoint": "/", "name": "user1", "quota": "200G"}, "msg": "/ is not located on an xfs filesystem."}
```

While `/` is a valid mountpoint, on CentOS the output of `cat /proc/mounts` contains two entries for it:

```
rootfs / rootfs rw 0 0
/dev/vda1 / xfs rw,noatime,attr2,inode64,noquota 0 0
```

An explanation can be found in this thread.

The function

```python
def get_fs_by_mountpoint(mountpoint):
    mpr = None
    with open('/proc/mounts', 'r') as s:
        for line in s.readlines():
            mp = line.strip().split()
            if len(mp) == 6 and mp[1] == mountpoint:
                mpr = dict(zip(['spec', 'file', 'vfstype', 'mntopts', 'freq', 'passno'], mp))
                mpr['mntopts'] = mpr['mntopts'].split(',')
                break
    return mpr
```

will return the first match in the output.

I found a fix by adding a 3rd validation to the function:

```python
def get_fs_by_mountpoint(mountpoint):
    mpr = None
    with open('/proc/mounts', 'r') as s:
        for line in s.readlines():
            mp = line.strip().split()
            if len(mp) == 6 and mp[1] == mountpoint and mp[2] == 'xfs':
                mpr = dict(zip(['spec', 'file', 'vfstype', 'mntopts', 'freq', 'passno'], mp))
                mpr['mntopts'] = mpr['mntopts'].split(',')
                break
    return mpr
```

This should do the trick for CentOS and other operating systems.
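To illustrate the effect of the extra check, here is a minimal standalone sketch (not the module's actual code): it parses an in-memory sample instead of opening `/proc/mounts`, so it runs anywhere. The `rootfs` line is skipped and the xfs entry is returned.

```python
# Sample of the two '/' entries seen on CentOS; a stand-in for /proc/mounts.
SAMPLE_MOUNTS = """\
rootfs / rootfs rw 0 0
/dev/vda1 / xfs rw,noatime,attr2,inode64,noquota 0 0
"""

def get_fs_by_mountpoint(mountpoint, mounts=SAMPLE_MOUNTS):
    mpr = None
    for line in mounts.splitlines():
        mp = line.strip().split()
        # The third condition only accepts the entry whose vfstype is 'xfs',
        # skipping the initramfs 'rootfs' entry that appears first.
        if len(mp) == 6 and mp[1] == mountpoint and mp[2] == 'xfs':
            mpr = dict(zip(['spec', 'file', 'vfstype', 'mntopts', 'freq', 'passno'], mp))
            mpr['mntopts'] = mpr['mntopts'].split(',')
            break
    return mpr

entry = get_fs_by_mountpoint('/')
print(entry['spec'], entry['vfstype'])  # /dev/vda1 xfs
```

Without the `mp[2] == 'xfs'` condition, the same input would match the `rootfs` line first and the module would report that `/` is not on xfs.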

bushvin commented 5 years ago

Hello Nicolas,

Thank you for submitting this issue.

Could you send me a copy of your (scrubbed) /proc/mounts file, as I cannot reproduce this on my own CentOS 7.5.1804 system:

```
~]# awk '{print $2}' /proc/mounts
/sys
/proc
/dev
/sys/kernel/security
/dev/shm
/dev/pts
/run
/sys/fs/cgroup
/sys/fs/cgroup/systemd
/sys/fs/pstore
/sys/fs/cgroup/memory
/sys/fs/cgroup/freezer
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/blkio
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/devices
/sys/kernel/config
/
/sys/fs/selinux
/proc/sys/fs/binfmt_misc
/dev/mqueue
/dev/hugepages
/sys/kernel/debug
/boot
/proc/sys/fs/binfmt_misc
/run/user/993
/run/user/1000
```

If I am correct, you are getting this issue because your /proc/mounts contains 2 entries for /, with the first being your initramfs mountpoint?
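As an aside, a quick way to spot such duplicates: the sketch below (hypothetical, not part of the module; it parses a sample string so it runs anywhere) counts how often each mountpoint appears and reports those listed more than once.

```python
from collections import Counter

# Minimal /proc/mounts-style sample with '/' listed twice.
SAMPLE = """\
rootfs / rootfs rw 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
/dev/vda1 / xfs rw,noatime,attr2,inode64,noquota 0 0
"""

def duplicate_mountpoints(mounts):
    # Field 2 of each entry is the mountpoint; count occurrences.
    counts = Counter(line.split()[1] for line in mounts.splitlines() if line.strip())
    return [mp for mp, n in counts.items() if n > 1]

print(duplicate_mountpoints(SAMPLE))  # ['/']
```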

Thanks in advance!

nicomagliaro commented 5 years ago

This is the full output of `cat /proc/mounts`:

```
[root@Ansible1 ansible-tbx]# cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=3994584k,nr_inodes=998646,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/vda1 / xfs rw,noatime,attr2,inode64,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14629 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=800852k,mode=700 0 0
```

```
[root@Ansible1 ansible-tbx]# awk '{print $2}' /proc/mounts
/
/sys
/proc
/dev
/sys/kernel/security
/dev/shm
/dev/pts
/run
/sys/fs/cgroup
/sys/fs/cgroup/systemd
/sys/fs/pstore
/sys/fs/cgroup/blkio
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/freezer
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/pids
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/devices
/sys/fs/cgroup/memory
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/cpuset
/sys/kernel/config
/
/proc/sys/fs/binfmt_misc
/dev/mqueue
/dev/hugepages
/sys/kernel/debug
/run/user/0
```

Notice the `/` mountpoint is listed twice. We are running CentOS 7, but I think this mount configuration applies to all systemd-based Linux distros. I hope this helps.

bushvin commented 5 years ago

Thanks for your feedback. The code is merged, and I will make some further changes, as lines 188 and 189 are no longer correct.