sherwinchetan / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

S3 + CentOS 5 + FUSE #53

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. s3fs fresh /home/mnt/s3
2. cd /home/mnt/s3
3. Freezes: I cannot read the S3 directory, and rsync fails with the error
below:

If I run "df -h" it says there is 256T available, though I cannot list my
files or write to the S3 mount.

[root@localhost /]# rsync -avz --delete /home/mysqldumps /mnt/s3
building file list ... done
mysqldumps/
rsync: recv_generator: failed to stat "/mnt/s3/mysqldumps/backup.txt": Not a directory (20)
sent 85 bytes  received 26 bytes  74.00 bytes/sec
total size is 124  speedup is 1.12
rsync error: some files could not be transferred (code 23) at main.c(892) [sender=2.6.8]
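
If it helps, I can re-mount in the foreground with FUSE debug output and capture what the filesystem is doing when it freezes. I'm using the /mnt/s3 mount point from the rsync command above; -f and -d are generic FUSE options that I assume s3fs passes through:

fusermount -u /mnt/s3          # unmount the stuck mount first
s3fs fresh /mnt/s3 -f -d       # foreground + FUSE debug: prints each getattr/readdir
# then, from a second terminal, reproduce the hang:
ls -la /mnt/s3
rsync -avz --delete /home/mysqldumps /mnt/s3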

When I install s3fs I get:

make install
g++ -ggdb -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse -pthread -lfuse -lrt -ldl -L/usr/kerberos/lib -lcurl -lgssapi_krb5 -lkrb5 -lk5crypto -lcom_err -lresolv -ldl -lidn -lssl -lcrypto -lz -I/usr/include/libxml2 -L/usr/lib -lxml2 -lz -lm -lcrypto s3fs.cpp -o s3fs
s3fs.cpp:440: warning: 'size_t readCallback(void*, size_t, size_t, void*)' defined but not used
ok!
cp -f s3fs /usr/bin
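
I can also double-check that FUSE itself is healthy on this CentOS 5 box if that is useful; these are generic checks, nothing s3fs-specific:

lsmod | grep fuse      # is the fuse kernel module loaded?
ls -l /dev/fuse        # does the device node exist?
modprobe fuse          # load the module if it is missing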

Any help would be appreciated.
Thnx

Original issue reported on code.google.com by gmil...@gmail.com on 20 Mar 2009 at 4:44

GoogleCodeExporter commented 8 years ago
Are you using local_cache? If so, try disabling it and retrying (or just rm -rf the local cache).
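
Something along these lines (the cache path is whatever you passed to -o use_cache; the <cache dir>/<bucket> layout below is an assumption on my part):

fusermount -u /mnt/s3        # unmount first
rm -rf /tmp/fresh            # e.g. <use_cache dir>/<bucket name>
s3fs fresh /mnt/s3           # remount without -o use_cache, so nothing is cached locally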

Original comment by rri...@gmail.com on 20 Mar 2009 at 5:23

GoogleCodeExporter commented 8 years ago

I have tried...

s3fs fresh /mnt -ouse_cache=0

I cannot find the local_cache to remove it... 

Could it be because I used the S3 Organiser Firefox add-on to create the
bucket?

Original comment by gmil...@gmail.com on 20 Mar 2009 at 6:27

GoogleCodeExporter commented 8 years ago
Seeing exactly the same problem on Fedora 8 with the latest build.

Original comment by fiddlest...@gmail.com on 8 Apr 2010 at 7:37

GoogleCodeExporter commented 8 years ago
Can one of you guys try this again with the latest tarball or svn revision? If 
you're still seeing the same results, let's work together so we can get it 
reproduced and debugged. Thanks.
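
If it is easier than grabbing the tarball, the trunk can be checked out and rebuilt roughly like this (the URL follows the standard Google Code svn layout, and the exact build step depends on the revision you end up with):

svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-trunk
cd s3fs-trunk
# older revisions build straight from the Makefile; newer ones ship a configure script
make install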

Original comment by dmoore4...@gmail.com on 10 Nov 2010 at 6:32

GoogleCodeExporter commented 8 years ago
Same problem here with version r203 on my EU West micro instance, trying to 
access my EU S3 bucket.
Ready to help if needed. Just tell me what you need to know.

Original comment by jose.ra...@gmail.com on 3 Dec 2010 at 10:11

GoogleCodeExporter commented 8 years ago
Jose, there have been many improvements and bug fixes since r203, please 
download, compile, install and test the latest tarball. Thanks.

If things still don't work right, then I'll need your /etc/fstab entry or your 
command line invocation.
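
For reference, I'm expecting one of these two forms (bucket name, mount point, and options below are only placeholders):

# /etc/fstab entry, using the "s3fs#bucket" device syntax
s3fs#mybucket /mnt/s3 fuse allow_other,_netdev 0 0

# or the equivalent command-line invocation
/usr/bin/s3fs mybucket /mnt/s3 -o allow_other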

I have Fedora 14 and CentOS 5.5 in which I can try to duplicate the issue.

Remember, mixing S3 clients (S3Fox, Bucket Explorer, ...) with s3fs is 
incompatible and not currently supported. (Reading shouldn't present too much 
of a problem, but if you create files/directories with another client, s3fs 
will not recognize them -- I'm not sure whether this is contributing to your 
issue or not.)

Original comment by dmoore4...@gmail.com on 3 Dec 2010 at 4:57

GoogleCodeExporter commented 8 years ago
Thanks!
I was mixing clients. I see that they are not very compatible with each other 
(for instance, they now mistake s3fs-created folders for files, so I cannot 
access them).

I have also installed r271.

Keep up the good work. This is a really needed solution for EC2/S3.

Original comment by jose.ra...@gmail.com on 7 Dec 2010 at 10:25

GoogleCodeExporter commented 8 years ago
Looks like a couple of contributing factors here. s3fs-specific issues appear 
to have been addressed.

Original comment by dmoore4...@gmail.com on 7 Dec 2010 at 5:21