Steps to reproduce this issue :
mount -t svfs \
-o mode=510 \
-o attr \
-o container=foo \
swift \
/var/spool/foo
# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
swift 3000 3000 0 100% /var/spool/foo
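For context, this is roughly the kind of monitoring check that trips on the output above; a minimal sketch (the threshold and message are hypothetical, and the df -i line is hard-coded here from the report above):

```shell
#!/bin/sh
# Sample data line from `df -i` for the svfs mount (copied from above).
line="swift 3000 3000 0 100% /var/spool/foo"

# Extract the IUse% column (field 5) and strip the trailing '%'.
usage=$(echo "$line" | awk '{gsub("%","",$5); print $5}')

# A typical health check treats 100% inode usage as critical,
# even though svfs simply reports IFree=0 by design.
if [ "$usage" -ge 100 ]; then
  echo "CRITICAL: inodes exhausted on /var/spool/foo"
fi
```

Since svfs always reports the inode pool as fully used, this kind of check fires permanently, which is the problem being reported.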
Results you expected :
mount -t svfs \
-o mode=510 \
-o attr \
-o container=foo \
-o maxObjects=6000 \
swift \
/var/spool/foo
# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
swift 6000 3000 3000 50% /var/spool/foo
Additional information :
It would be useful to have a mount option controlling how many objects can be created, so that the inodes are not always shown as full (100%). A typical health check automatically flags that as a critical issue.
If there is already an option for steering the inode count, kindly point me to it.
Could it be that some quota option on Swift is not set, so that the default maximum number of objects (max inodes) ends up equal to the current number of objects created?
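On the quota question: OpenStack Swift does support per-container object quotas via the container_quotas middleware, using the X-Container-Meta-Quota-Count header. Whether svfs reads that value is an assumption; the CLI fragment below only shows how the quota itself is set on the Swift side (container name foo and the count 6000 are taken from the example above):

```shell
# Set a container object-count quota on Swift (container_quotas middleware).
# Equivalent to sending the X-Container-Meta-Quota-Count header.
swift post foo -m quota-count:6000

# Verify the metadata on the container.
swift stat foo
```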
Thank you for this awesome project - great idea!