I have a system that exports small files in great numbers: one folder has 62,000 files, and I currently export 16 such folders. I export them to a mounted NFS folder. If the NFS share is backed by ext4 there is no problem, but when it is shared from SDFS I get this error.
max-open-files="1024" is set in vdedup1tb2-volume-cfg.xml.
In the XML, set safe-close to true.
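For anyone following along, a minimal sketch of flipping that attribute; it assumes the default SDFS config location /etc/sdfs/ (adjust for your install) and that the volume is unmounted before editing so the change is picked up on the next mount:

```sh
# Sketch: set safe-close in the volume config (default SDFS layout assumed).
# No-op if the attribute is already "true"; verify with the grep below.
sed -i 's/safe-close="false"/safe-close="true"/' /etc/sdfs/vdedup1tb2-volume-cfg.xml
grep -o 'safe-close="[^"]*"' /etc/sdfs/vdedup1tb2-volume-cfg.xml
```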
Hi! I've tried both safe-close="true" and safe-close="false"; the problem still persists.
Since the NFS service has a maximum of 1024 open files and max-open-files="1024" is set in the XML, I decided to check which paths the process has open when this error occurs.
Most of those open files are DDB files.
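For reference, roughly how to repeat that check (a sketch; the pgrep pattern and the "ddb" path fragment are assumptions about a typical SDFS install, and you may need root to read another user's /proc entries):

```sh
# Sketch: count the SDFS process's open descriptors and the DDB entries
# among them; adjust the pgrep pattern to match your java/sdfs process.
PID=$(pgrep -f sdfs | head -n1)
ls /proc/"$PID"/fd | wc -l                  # total open descriptors
ls -l /proc/"$PID"/fd | grep -ci ddb        # descriptors pointing at DDB files
```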
You can try it yourself, by the way; I think it is better if you reproduce this problem in your own environment: export a schema from a SAP HANA database to NFS + SDFS storage.
I've also checked the Datish Cloud Storage Gateway; it has the same problem.
I found a resolution to my problem!
The source of the problem is the interaction of SDFS with an NFSv3 server exported on top of it, mounted by an NFSv4 client. `echo 0 > /proc/sys/fs/leases-enable` is a workaround. It does not resolve the original NFS problem, but it makes the setup work. I think you can close this issue.
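To make that workaround survive a reboot, the same kernel knob can be set via sysctl (fs.leases-enable is the standard sysctl name for that /proc entry; the drop-in file name below is an arbitrary choice):

```sh
# Apply immediately (same effect as echo 0 > /proc/sys/fs/leases-enable)
sysctl -w fs.leases-enable=0
# Persist across reboots; reload with `sysctl --system`
echo 'fs.leases-enable = 0' >> /etc/sysctl.d/90-sdfs-nfs.conf
```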
There is always this "too many open files" error.
Here is a screenshot of the running dedup process during a backup.