Hmmm... I was just able to copy a 100MB file to s3 using r145 with no problems (Linux)...
>>> MacFUSE: force ejecting (no response from user space 5)
Not sure what this is exactly; on what basis is MacFUSE declaring that there is "no response" from s3fs?!?
Could it possibly be a readwrite_timeout issue? It defaults to 10 seconds, but perhaps that is a bit too aggressive. Try changing it to, say, 60 seconds and try again?
Original comment by rri...@gmail.com on 30 Apr 2008 at 8:50
Sorry, I forgot to mention something in my original post. I'm behind an HTTP proxy, so I'm using the http_proxy environment variable; curl then connects through the proxy's CONNECT method.
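Concretely, the setup is just the standard curl proxy environment variable before mounting, roughly like this (the proxy host/port, bucket name, and mount point below are placeholders, and credentials are omitted):
{{{
# placeholder proxy and mount details; curl (used by s3fs) picks up http_proxy from the environment
export http_proxy=http://proxy.example.com:8080
s3fs mybucket /mnt/s3
}}}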
I tracked the network usage: s3fs gets most of the file through, then throws that error.
[17:26] s3fs - 72.21.211.210:80 close, 83689758 bytes (79.8 MB) sent, 499 bytes received
I changed these variables in the source and recompiled:
static long connect_timeout = 20;
static time_t readwrite_timeout = 30;
And I'm at r145
Athas:~/src/s3fs-cpp/s3fs Athas$ svn info
. . .
Last Changed Rev: 145
Last Changed Date: 2008-04-27 15:27:37 -0400 (Sun, 27 Apr 2008)
I did several mkfile tests: if I create an 80MB file it works, but a 90MB file fails (same error as before).
Original comment by Ath...@gmail.com on 30 Apr 2008 at 9:41
Any luck on this? Could the proxy be tearing down the connection after a period of time?
Original comment by rri...@gmail.com on 2 May 2008 at 2:24
I haven't had time to look at this since. I tunneled it through the proxy in two different ways: first using curl's http_proxy environment variable, and second through a kernel-based proxy tool (it adds proxy support to applications that don't otherwise have it), but the result was the same.
I have sustained HTTP transfers larger than 90 MB before, and there was no problem when I tried JungleDisk, so something else must be at play here.
I'll play around with it over the next week or two, but I'm in the middle of finals for school.
Original comment by Ath...@gmail.com on 4 May 2008 at 3:17
Hi!
I have the same problem, but I don't have any proxies in place.
s3fs version is r152
Darwin macbook 9.2.2 Darwin Kernel Version 9.2.2: Tue Mar 4 21:17:34 PST 2008; root:xnu-1228.4.31~1/RELEASE_I386 i386
Mac OS X 10.5.2 (9C7010)
I tried a bunch of timeout options (3600 seconds), but the behavior is always the same.
It fails after 90-120 seconds of a long operation. I used "time cp bigfile s3-mount" to check this.
The Internet connection works fine.
I also tried with and without the local cache, and nothing changes.
In the Terminal window it says:
Socket is not connected.
And in system.log it says:
May 15 10:42:41 macbook kernel[0]: MacFUSE: force ejecting (no response from user space 5)
May 15 10:42:41 macbook KernelEventAgent[24]: tid 00000000 received VQ_DEAD event (32)
Original comment by dimaulu...@gmail.com on 15 May 2008 at 3:17
I believe I have found a solution/workaround.
It happens because of the way MacFUSE works on Leopard.
As stated here: http://code.google.com/p/macfuse/wiki/OPTIONS (search for "daemon_timeout"), the daemon_timeout MacFUSE option controls how long MacFUSE waits for calls to the underlying user file system; when the timeout occurs it shows a timeout alert dialog (on Tiger) or automatically ejects the volume, because "there is no alert dialog on Leopard".
So when I added the option "-odaemon_timeout=<big number>" to the s3fs arguments, the big file was copied successfully.
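For example, a mount invocation with a large daemon_timeout would look roughly like this (the bucket name, mount point, and timeout value below are just placeholders, and credentials are omitted):
{{{
# placeholder bucket/mount point; 600 seconds is an arbitrary "big number"
s3fs mybucket /Volumes/s3 -odaemon_timeout=600
}}}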
Hope this helps.
Original comment by dimaulu...@gmail.com on 15 May 2008 at 3:41
Thanks, the above comment helped a lot :-) I was wondering why the 'MacFUSE: force ejecting (no response from user space 5)' error was showing up and the process was being killed.
Original comment by ama...@gmail.com on 16 Jul 2008 at 2:06
Hi,
I had the same problem with quite small files (~10MB); the daemon_timeout option fixed the issue, but I ran into it again with larger files (~70MB).
I am running Mac OS X Leopard 10.5.3, MacFUSE-2.0.3,2, and s3fs from svn at revision 185.
Original comment by philippe...@gmail.com on 6 Jan 2009 at 8:47
Same issue here. I'm using Mac OS X and it mostly works. The problem is that it bombs when I try to upload files >10M. I've even tried setting {{{-odaemon_timeout=180000}}}. I'm using `rsync`, and I compiled s3fs with the most recent version of `libcurl`. MacFUSE 2.0.3.
{{{
MacFUSE: force ejecting (no response from user space 5)
}}}
The rsync output reads as follows:
{{{
rsync: writefd_unbuffered failed to write 32768 bytes [sender]: Broken pipe (32)
rsync: close failed on "/path/to/foreign/file": Socket is not connected (57)
rsync error: error in file IO (code 11) at /SourceCache/rsync/rsync-35.2/rsync/receiver.c(647) [receiver=2.6.9]
rsync: connection unexpectedly closed (89569 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at /SourceCache/rsync/rsync-35.2/rsync/io.c(452) [generator=2.6.9]
rsync: connection unexpectedly closed (52 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at /SourceCache/rsync/rsync-35.2/rsync/io.c(452) [sender=2.6.9]
}}}
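For what it's worth, the transfer is simply an rsync into the s3fs mount point, roughly like this (the source path and mount point below are placeholders):
{{{
# placeholder paths; the destination is the directory where the bucket is mounted
rsync -av /path/to/local/files/ /Volumes/s3/
}}}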
Original comment by nmurraya...@gmail.com on 26 Feb 2009 at 9:52
This is a duplicate of two newer issues:
issue #106 - Get this working in OSX
issue #142 - Implement multi-part uploads for large files
Merging with issue #142, as this seems more related to uploading large files than to OSX itself.
Original comment by dmoore4...@gmail.com on 27 Dec 2010 at 11:55
Original issue reported on code.google.com by Ath...@gmail.com on 30 Apr 2008 at 7:47