JackYeh / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Cannot use rm -rf #65

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Create a directory: mkdir foo
2. Create a file under the new directory: echo "test" >foo/bar
3. Try to remove the dir: rm -rf foo

What is the expected output? What do you see instead?

Expected: no output

Error:
$ rm -rf foo
rm: cannot remove `foo': Bad address

What version of the product are you using? On what operating system?

r177 with Fuse 2.7.4

Please provide any additional information below.

Thanks for the software, much appreciated.  :)

Original issue reported on code.google.com by jvi...@gmail.com on 5 Aug 2009 at 5:48

GoogleCodeExporter commented 8 years ago
hmmm...

bad address is EFAULT

[EFAULT]
    Bad address.

there is no reference to EFAULT in s3fs.cpp, so it must be coming from somewhere else, and s3fs is just transparently passing it through

have not seen this one before!
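
One way to pinpoint which syscall is actually returning EFAULT (a diagnostic sketch, assuming strace is available on the machine where the bucket is mounted):

$ strace -f -e trace=unlink,unlinkat,rmdir,getdents,getdents64 rm -rf foo

Whichever call fails with EFAULT in the trace is the one being rejected, which should narrow down where the error enters the picture.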

Original comment by rri...@gmail.com on 5 Aug 2009 at 6:28

GoogleCodeExporter commented 8 years ago
Interesting.  Are you unable to reproduce the error?

I can give you access to a server where it is reproducible, if that would help you.

The interesting part is that the error message varies, just as it does for 
Issue 64.

For example, I tried the 'rm -rf' again and got this:
rm: cannot remove `foo': Level 3 reset

Any ideas why these error messages are always changing?
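
For what it's worth, rm is just printing strerror() for whatever error number the failing syscall returned: "Bad address" is EFAULT (14), and "Level 3 reset" is EL3RST (47) on Linux. A quick way to confirm the mapping, assuming Python is installed:

$ python -c 'import os; print(os.strerror(14))'
Bad address
$ python -c 'import os; print(os.strerror(47))'
Level 3 reset

So a changing message means the numeric error value itself varies between runs.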

Thanks.

Original comment by jvi...@gmail.com on 5 Aug 2009 at 6:55

GoogleCodeExporter commented 8 years ago
have not tried reproducing it; neither have I heard of this problem from anyone else

does everything else work? i.e., you can read/write files just fine, and the only thing that doesn't seem to work is rm -rf?

could the amazon ACLs factor into the issue? (i.e., any possibility that the ACLs were changed on the bucket/objects to read-only?)
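
One way to check the ACLs, as a sketch assuming s3cmd is installed and configured (the bucket and object names are placeholders):

$ s3cmd info s3://your-bucket
$ s3cmd info s3://your-bucket/foo/bar

The ACL grants are printed in the output, so a read-only grant would show up there.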

Original comment by rri...@gmail.com on 5 Aug 2009 at 8:12

GoogleCodeExporter commented 8 years ago
Yes, everything else is working fine.  The 'rm -rf' isn't a showstopper, just a
curious oddity.

If I manually remove the file within, then rmdir the directory, everything 
works.

$ rm foo/bar
$ rmdir foo

However, the 'rm -rf' does not.  Strange, isn't it?

Here is the mount command I'm using:

s3fs -o default_acl=public-read <bucket> <mountpoint>

Original comment by jvi...@gmail.com on 5 Aug 2009 at 8:29

GoogleCodeExporter commented 8 years ago
another thought: did you use another s3 tool to manipulate the bucket contents? if so, that might cause trouble

either way: try using another s3 tool such as jets3t Cockpit to inspect the contents of the bucket; perhaps there is an 'orphaned' s3 object in the folder that is causing s3fs to think the folder is not empty
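
A sketch of that inspection with s3cmd (any S3 browser would do; the bucket name is a placeholder):

$ s3cmd ls s3://your-bucket/
$ s3cmd ls s3://your-bucket/foo/

Different tools mark directories differently (a bare 'foo' key, 'foo/', or 'foo_$folder$'), so a marker object left behind by another tool could look like extra content to s3fs.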

Original comment by rri...@gmail.com on 5 Aug 2009 at 9:41

GoogleCodeExporter commented 8 years ago
Closing out this old issue, as I cannot duplicate it. Please try the latest version of the software. If the issue is still present, please open a new issue.

Original comment by dmoore4...@gmail.com on 29 Oct 2010 at 4:48