Hi,
The timeout error may be caused by the s3fs process being killed, and I suspect
the underlying problem is a memory leak in s3fs.
I reported on the memory leaks in comment #5 of Issue 314:
http://code.google.com/p/s3fs/issues/detail?id=314#c5
I would like to know whether your problem is caused by libcurl.
(If you can, please also look at Issue 343, which is possibly the same as your
problem.)
Please check which libcurl version you have and whether it is built against
libnss or libssl (OpenSSL), and let me know.
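For example, one way to check this from a shell (a sketch; the output format
varies by distribution, and the path to the s3fs binary may differ):

    # Show the libcurl version and the SSL backend it was built with
    curl --version | head -n 1

    # Show which curl/SSL/NSS libraries the s3fs binary is linked against
    ldd "$(command -v s3fs)" | grep -Ei 'curl|ssl|nss'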
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 18 Jun 2013 at 2:44
Here are the versions we are currently using:
libcurl/7.22.0
OpenSSL/1.0.1
zlib/1.2.3.4
libidn/1.23
librtmp/2.3
Original comment by jlhawn.p...@gmail.com
on 4 Jul 2013 at 12:39
Hi,
It seems your library versions are not the problem.
I updated the code today; please try the latest revision (now r454).
With the latest version, I did not get any errors when uploading files over 6GB
to S3.
I hope this revision solves your problem.
If it does not, and if you can, please check with a tool such as Valgrind.
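For example, a minimal sketch of such a run (bucket name and mount point are
placeholders; the -f flag keeps s3fs in the foreground so Valgrind can follow
the process):

    # Run s3fs under Valgrind and write the leak report to a log file
    valgrind --leak-check=full --log-file=s3fs-valgrind.log \
        s3fs mybucket /mnt/s3 -f

    # Exercise the mount, then unmount so s3fs exits cleanly and the
    # report is written
    fusermount -u /mnt/s3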
Thanks in advance for your assistance.
Original comment by ggta...@gmail.com
on 5 Jul 2013 at 6:42
We've been especially confused about this because it seemed like s3fs wasn't
even being used at all while this was happening. We had only mounted a bucket
and never actually used it.
It turns out that a daily cron job (mlocate) would crawl the entire directory
tree to build a search index. This included stat-ing hundreds of thousands of
files!
We'll continue testing to look for strange behavior, but you might want to test
for yourself how s3fs behaves when the mlocate cron job runs. (It should be
located at /etc/cron.daily on an Ubuntu install.)
Original comment by jlhawn.p...@gmail.com
on 16 Jul 2013 at 1:05
You should also advise those who install s3fs that they may want to add
fuse.s3fs to the PRUNEFS list in /etc/updatedb.conf, for example as shown
below.
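An illustrative excerpt (the existing PRUNEFS entries vary by distribution):

    # /etc/updatedb.conf -- appending fuse.s3fs tells updatedb/mlocate
    # not to index s3fs mounts
    PRUNEFS="NFS nfs nfs4 rpc_pipefs afs sysfs proc tmpfs fuse.sshfs fuse.s3fs"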
Original comment by jlhawn.p...@gmail.com
on 16 Jul 2013 at 4:50
As you suggested, I ran the s3fs command with Valgrind. I then started up the
mlocate cron job. It immediately began to traverse our entire S3 bucket. The
memory footprint of s3fs started out at around 120MB. After an hour, this grew
to over 300MB. I then stopped the process and unmounted the bucket.
Surprisingly, Valgrind reported Zero leaked memory.
We have several machines that run the same s3fs command. On some of these
machines, mlocate would take so long that the next day's mlocate cron job would
start before the previous one had even finished. On a few of these machines the
memory footprint of s3fs would grow so large that the kernel would kill the
s3fs process.
After looking at the s3fs logs (using the -f option) while running mlocate, it
seems that all it is doing is listing directories and stat-ing files. The
majority of the logs are related to the StatCache. From reading the source code
and the wiki, the stat cache should never grow to more than 1000 entries, which
is estimated at about 4MB of memory. Is there a way to explain why the memory
usage appears to grow so far beyond this?
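For scale, the numbers reported above work out roughly as follows (taking the
1000-entry / 4MB cache estimate at face value):

    stat cache bound:  1000 entries ~= 4 MB  (about 4 KB per entry)
    observed growth:   300 MB - 120 MB = 180 MB in one hour,
                       i.e. roughly 45 times the cache bound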
Original comment by jlhawn.p...@gmail.com
on 18 Jul 2013 at 6:13
Hi,
I'm sorry for replying late.
First, v1.73 has been released; it fixes a bug in request retrying.
I think this version will probably make quite a difference to this problem.
After v1.73, I also updated some code (as of r479) related to initializing curl
and OpenSSL.
Unfortunately, I could not find a definite cause or solution.
Since the memory leak in s3fs seems to depend on the environment
(libcurl + OpenSSL/libnss, OS?), I cannot say for certain what the cause is or
whether it is fixed.
If you can, please compile and test r479.
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 27 Aug 2013 at 8:34
Hi,
(We have moved s3fs from Google Code to GitHub:
https://github.com/s3fs-fuse/s3fs-fuse.)
If you can, please try the following:
1) libcurl version
If you use libcurl with the NSS library, you should check the libcurl version,
because libcurl with NSS below version 7.21.4 has a memory-leak bug.
2) multireq_max
If you use the latest version of s3fs, please try specifying the multireq_max
option. The default is 20 parallel requests, but you may want to set a smaller
number (e.g. "multireq_max=3"), as sketched below.
Thanks in advance for your help.
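A minimal sketch of such a mount (bucket name and mount point are
placeholders):

    # Limit the parallel request count to 3 instead of the default 20
    s3fs mybucket /mnt/s3 -o multireq_max=3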
Original comment by ggta...@gmail.com
on 1 Jun 2014 at 3:31
Original issue reported on code.google.com by
jlhawn.p...@gmail.com
on 17 Jun 2013 at 9:22