LittleFlower2019 / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Missing directory in bucket #366

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Detailed description of observed behavior:

We have multiple machines that mount and read the same S3 bucket.  One machine 
writes to the bucket (producer) and the other machines only read what is on the 
bucket (consumers).  All machines use the same s3fs module.

However, the consumers often do not see the directories created by the 
producer.

What steps will reproduce the problem - please be very specific and
detailed. (if the developers cannot reproduce the issue, then it is
unlikely a fix will be found)?

Mount the same bucket on two machines.  Create a directory and some files.  
Check whether the directory is visible from the other machine.  For example:
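
A minimal sketch of those steps, assuming the bucket is already mounted at /target/dir on both machines (the directory and file names are only illustrative):

# on the producer machine: create a directory and a file inside it
mkdir /target/dir/newdir
echo hello > /target/dir/newdir/test.txt

# on a consumer machine: the new directory should appear in the listing
ls -l /target/dir          # expected to show newdir
ls -l /target/dir/newdir   # expected to show test.txt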

===================================================================
The following information is very important in order to help us to help
you.  Omission of the following details may delay your support request or
receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version):
1.61
Version of fuse being used (pkg-config --modversion fuse):
2.8.6
System information (uname -a):
GNU/Linux
Distro (cat /etc/issue):

s3fs command line used (if applicable):

/etc/fstab entry (if applicable):

s3fs syslog messages (grep s3fs /var/log/syslog):

Original issue reported on code.google.com by jeremyvillalobos on 23 Aug 2013 at 12:17

GoogleCodeExporter commented 8 years ago
Hi,

I checked this issue with the latest version of s3fs, and it works without problems.
Please use the latest version and check this issue with it.

I closed this issue; if you find a bug, please let me know.

Thanks in advance.

Original comment by ggta...@gmail.com on 23 Aug 2013 at 1:14

GoogleCodeExporter commented 8 years ago
I upgraded two of the machines to version 1.72 from the download tab.

The same behavior happens.  A simple test file is synced quickly, but new 
folders are never shown on the other computer.

Original comment by jeremyvillalobos on 23 Aug 2013 at 5:05

GoogleCodeExporter commented 8 years ago
Hi,

I changed the status of this issue, but I have not been able to reproduce the problem yet.

Please let me know the command line you use to run s3fs.

Thanks in advance.

Original comment by ggta...@gmail.com on 23 Aug 2013 at 3:14

GoogleCodeExporter commented 8 years ago
To mount the bucket I use
s3fs bucket_name /target/dir -o allow_other -o use_cache=/cache/dir -o retries=5

Original comment by jeremyvillalobos on 23 Aug 2013 at 7:35

GoogleCodeExporter commented 8 years ago
Hi,

Thanks for your command line.

However, I could not reproduce this problem; s3fs found the directory/file that was 
updated on the other machine.
When s3fs lists a directory, it sends a list request to S3 (bucket list). And 
when s3fs looks up a file directly, it sends a HEAD request as well.
Thus s3fs always gets the correct file list and file attributes (except when 
enable_noobj_cache is specified).
I'm sorry, but I cannot determine the reason for this problem without 
reproducing it.
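
If it helps to narrow things down, one way to check whether the directory object actually exists in the bucket, independent of s3fs, is to list the bucket with a separate S3 client such as s3cmd (assuming it is installed and configured; the bucket and directory names below are placeholders):

s3cmd ls s3://bucket_name/           # list the top-level objects in the bucket
s3cmd ls s3://bucket_name/some_dir/  # check for the directory created by the producer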

If you can, please run s3fs with the "-f" option so that it stays in the foreground.
Then you will get a lot of log output on your display, which will probably help us 
solve this issue.  For example:
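
A sketch of such a run, reusing the options from your mount command (capturing the output to a file is only a suggestion, and the log path is a placeholder):

s3fs bucket_name /target/dir -f -o allow_other -o use_cache=/cache/dir -o retries=5 2>&1 | tee /tmp/s3fs-debug.log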

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 26 Aug 2013 at 7:36

GoogleCodeExporter commented 8 years ago
The version that was showing this error was 1.61.  Once I updated to 
version 1.73, the error was fixed.

My earlier post saying I had updated the machines was incorrect, because version 
1.61 was installed in another path that took PATH precedence.

Original comment by jeremyvillalobos on 28 Aug 2013 at 12:53

GoogleCodeExporter commented 8 years ago
Hi,

Thanks for your report.
So I closed this issue.

Thanks a lot.

Original comment by ggta...@gmail.com on 29 Aug 2013 at 6:41