LittleFlower2019 / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Error closing/opening file: Input/output error #353

GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
Detailed description of observed behavior:

I am able to copy small files over without issue. I haven't determined what the 
threshold is, but when I try to upload a 70MB file, I get an error after 
copying (and sometimes before copying) that reads "Error closing/opening file: 
Input/output error". I am able to upload this file flawlessly with DragonDisk.

What steps will reproduce the problem? Please be very specific and
detailed (if the developers cannot reproduce the issue, then it is
unlikely a fix will be found).

Mount the S3 drive. Try to copy over a file of at least 70MB. Try a zip file too (not 
sure if that makes a difference; I also tried a tar file).
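
Roughly, the reproduction looks like the following (bucket name, mount point and 
credentials path are placeholders, not the actual values used):

s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs   # mount the bucket via s3fs/FUSE
dd if=/dev/zero of=/tmp/test.bin bs=1M count=70         # create a ~70MB test file
cp /tmp/test.bin /mnt/s3/                               # fails with "Input/output error"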

===================================================================
The following information is very important in order to help us to help
you.  Omission of the following details may delay your support request or
cause it to receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version): 1.71

Version of fuse being used (pkg-config --modversion fuse): 2.8.6

System information (uname -a): Linux jamesgq 3.5.0-34-generic 
#55~precise1-Ubuntu SMP Fri Jun 7 16:25:50 UTC 2013 x86_64 x86_64 x86_64 
GNU/Linux

Distro (cat /etc/issue): Ubuntu 12.04.2 LTS

s3fs command line used (if applicable):

/etc/fstab entry (if applicable):

s3fs syslog messages (grep s3fs /var/log/syslog):

Jun 28 14:07:08 jamesgq s3fs: init $Rev: 444 $
Jun 28 14:16:46 jamesgq s3fs: init $Rev: 444 $
Jun 28 14:18:20 jamesgq s3fs: Could not determine UploadId
Jun 28 14:19:09 jamesgq s3fs: Could not determine UploadId
Jun 28 14:20:38 jamesgq s3fs: timeout  now: 1372429238  curl_times[curl]: 
1372429207l  readwrite_timeout: 30
Jun 28 14:21:00  s3fs: last message repeated 35 times
Jun 28 14:24:29 jamesgq s3fs: init $Rev: 444 $
Jun 28 14:24:30 jamesgq s3fs: init $Rev: 444 $
Jun 28 14:37:08 jamesgq s3fs: timeout  now: 1372430228  curl_times[curl]: 
1372430197l  readwrite_timeout: 30
Jun 28 14:38:23  s3fs: last message repeated 25 times
Jun 28 16:09:37 jamesgq s3fs: Could not determine UploadId
Jun 28 16:15:49 jamesgq s3fs: Could not determine UploadId
Jun 28 16:17:03 jamesgq s3fs: Could not determine UploadId

Original issue reported on code.google.com by m...@jamesrobb.ca on 28 Jun 2013 at 4:21

GoogleCodeExporter commented 8 years ago
Hi,

I execute the command: /usr/bin/s3fs -o 
host=http://s3-website-us-east-1.amazonaws.com brhandbucket /mnt/s3

but now, when I try to create or copy a file to /mnt/s3/, it shows me the 
following error:
root@ip-10-245-31-31:/mnt/s3# echo > asd
-su: asd: No such file or directory

Does anybody know how to solve this?

Regards,
Nicolas.

Original comment by nlo...@edge-americas.com on 25 Jul 2013 at 4:05

GoogleCodeExporter commented 8 years ago
and the syslog file shows: 
s3fs: init $Rev: 444 $

Original comment by nlo...@edge-americas.com on 25 Jul 2013 at 4:06

GoogleCodeExporter commented 8 years ago
Hi,

I'm sorry for the late reply.

I have released a new version, v1.73.
If you can, please use it and check whether this issue still occurs.

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 23 Aug 2013 at 6:21

GoogleCodeExporter commented 8 years ago
Hi,

I have version 1.74 installed and I get the same error when copying a 40MB 
file.
It used to work fine, but all of a sudden it started happening. I wonder whether it 
has something to do with the number of objects in the bucket.

Thanks!

Original comment by erac...@gmail.com on 1 Dec 2013 at 10:19

GoogleCodeExporter commented 8 years ago
Hi, 

(I'm sorry for the slow reply.)
We moved the s3fs project from Google Code to GitHub 
(https://github.com/s3fs-fuse/s3fs-fuse),
and the latest version fixes some bugs.

If you can, please try the latest release or the latest master branch on GitHub.
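
If it helps, a rough sketch of building the master branch from GitHub (assuming the 
usual autotools toolchain and the libcurl, libxml2, OpenSSL and fuse development 
packages are installed; package names vary by distro):

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh        # generate the configure script
./configure
make
sudo make install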

And if you can, please try running s3fs with the multireq_max option set to a small 
number (e.g. multireq_max=3).
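
For example (bucket name and mount point are placeholders):

s3fs mybucket /mnt/s3 -o multireq_max=3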

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 1 Jun 2014 at 3:41