Oops, forgot the attachment. Here it is.
Original comment by edoua...@gmail.com
on 11 Mar 2008 at 10:40
You're merely running into typical file descriptor limits--nothing to do with MacFUSE.

In the case of sshfs, the sftp-server program (on the server side, as the name suggests) is limiting you--it doesn't allow more than 100 open descriptors. This limit is hardcoded into sftp-server (note that this limit is in addition to the standard Unix-y file descriptor limit). The only way you can "fix" this is to recompile sftp-server on the server.
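For illustration only, here is a minimal sketch of how a fixed-size handle table imposes that kind of cap. The constant MAX_HANDLES and the function handle_new() below are hypothetical and merely mirror the 100-handle limit described above; this is not the actual sftp-server source.

/*
 * Hypothetical sketch, not the real sftp-server code: a fixed-size
 * handle table caps concurrent open files at MAX_HANDLES regardless
 * of the operating system's per-process descriptor limit.
 */
#define MAX_HANDLES 100

static int handle_fd[MAX_HANDLES];
static int handle_used[MAX_HANDLES];

/* Returns a free slot for fd, or -1 once all slots are taken --
   at which point the client sees a "too many open files" failure. */
static int handle_new(int fd)
{
    int i;
    for (i = 0; i < MAX_HANDLES; i++) {
        if (!handle_used[i]) {
            handle_used[i] = 1;
            handle_fd[i] = fd;
            return i;
        }
    }
    return -1;
}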
In the case of encfs, it sounds like you are running into the system's per-user file descriptor limit (256 open files on Leopard). You could raise this limit in several ways.
Original comment by si...@gmail.com
on 11 Mar 2008 at 3:39
The per-user file descriptor limit was my first guess when I ran into this problem too. However, I can link 1100 files (tested using that fusebug.cc program above) without any problems on a non-macfuse, non-encfs part of my filesystem. And using the shell's limit command to set the file descriptor ceiling to a higher value had no effect on the failure scenario within the macfuse/encfs part of the filesystem.

I'm happy if the answer is "there is no possible way macfuse can be causing this issue, it has to be encfs". Is that absolutely the case?
Original comment by edoua...@gmail.com
on 11 Mar 2008 at 7:40
> I'm happy if the answer is "there is no possible way macfuse can be causing this issue, it has to be encfs". Is that absolutely the case?

MacFUSE doesn't open files. The user-space file system written atop MacFUSE (in this case encfs) does. MacFUSE just talks to the user-space file system to retrieve file handles. So, it has to be something other than MacFUSE.
I used your fusebug.cc successfully with an argument of 110 on another MacFUSE file system (fusexmp_fh, the loopback file system example that's in the MacFUSE source tree).
> And using the shell's limit command to set the file descriptor ceiling to a higher value

Which process are you setting the limit for? Try setting it for encfs itself, not fusebug.
Original comment by si...@gmail.com
on 12 Mar 2008 at 5:22
Typo in my earlier post: I tried fusebug.cc with an argument of 1100 with fusexmp (not fusexmp_fh).

fusexmp doesn't keep state, so fewer file descriptors are likely to be open with it. fusexmp_fh does keep state, and I do get the "errno=24" error (the error is EMFILE, which means fusexmp_fh failed to open a file descriptor).
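To make the difference concrete, here is a minimal sketch of a stateful open handler in the fusexmp_fh style. The name xmp_open and the surrounding details are illustrative assumptions, not the actual fusexmp_fh or encfs source, but the pattern is the same: the real descriptor is kept in fi->fh, so every file held open through the mount keeps one descriptor open in the file-system process.

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <fcntl.h>
#include <errno.h>

/* Sketch of a stateful FUSE open handler: the process hosting the
   file system (fusexmp_fh, encfs, ...) opens the backing file and
   keeps the descriptor around. */
static int xmp_open(const char *path, struct fuse_file_info *fi)
{
    int fd = open(path, fi->flags);
    if (fd == -1)
        return -errno;  /* shows up as errno=24 (EMFILE) once this
                           process runs out of descriptors */

    fi->fh = fd;        /* held open until the corresponding release() */
    return 0;
}

A stateless file system like fusexmp, by contrast, re-opens and closes the backing file on each operation, which is why it doesn't bump into the per-process limit as easily.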
To fix that, you'll have to increase the allowed number of open files for the process that's trying to open them: in my example, fusexmp_fh, and in your case, encfs. (As root, you can also set the global limits.)
You can use setrlimit like this:

#include <sys/resource.h>
...
struct rlimit limit;
int err;

/* Read the current RLIMIT_NOFILE (max open files) for this process. */
err = getrlimit(RLIMIT_NOFILE, &limit);
if (err == 0) {
    /* Raise the soft limit; for an unprivileged process it must not
       exceed the hard limit (limit.rlim_max). */
    limit.rlim_cur = 10240;
    if (setrlimit(RLIMIT_NOFILE, &limit) != 0)
        perror("setrlimit");
}
...
The 10240 number in my example comes from the default max-files-per-proc limit in Leopard. Use the sysctl command to look at this number:

sysctl -a | grep files
...
kern.maxfiles = 12288
kern.maxfilesperproc = 10240
...
Original comment by si...@gmail.com
on 12 Mar 2008 at 6:08
Fantastic - thank you.

I grabbed the latest encfs (1.4.1.1), put a setrlimit() call in at startup, compiled it, and then tested linking 3000 files. Success!

> Which process are you setting the limit for? Try setting it for encfs itself, not fusebug.

This is where I had gone wrong in my diagnosis - heh, what a rookie mistake! :-)

Thanks again for the help and advice.
Original comment by edoua...@gmail.com
on 12 Mar 2008 at 7:45
Original issue reported on code.google.com by edoua...@gmail.com
on 11 Mar 2008 at 10:39