s3fs has a memory leak. Please see Issue 278.
Original comment by nicho...@ifactorinc.com
on 11 Jan 2013 at 7:46
Hi,
The latest s3fs (now v1.67) fixes some memory leaks.
Please try it and check.
I am closing this issue in order to tidy up the open past issues.
If you find more memory leaks, please let me know and post a new issue with
more information.
Regards,
Original comment by ggta...@gmail.com
on 15 Apr 2013 at 7:19
Hi,
This issue has been re-opened because s3fs still has a memory leak.
I reproduced the memory leak when using HTTPS (but I could not reproduce it
over plain HTTP).
Therefore, Issue 191, Issue 278, and Issue 343 are folded into this issue.
Please send me more information to help fix the memory leak.
Thank you for your help.
Original comment by ggta...@gmail.com
on 10 Jun 2013 at 4:24
I checked this issue, but I could not solve the problem fundamentally.
(I found some small malfunctions, but fixing them did not solve it.)
However, while not everyone reporting this issue is running under the same
conditions, I did find one cause.
When s3fs connects over HTTPS and libcurl is built with NSS, s3fs leaks memory
inside curl with NSS.
(This case was already reported in Issue 191 by huwtlewis.)
This probably occurs when libcurl is older than version 7.21.5.
** see) http://curl.haxx.se/changes.html
I tried running s3fs with libcurl 7.30.0 (NSS, no OpenSSL), and it seems to
work well.
I also checked libcurl built with OpenSSL (no NSS), and s3fs seems to have no
problem while running.
I will keep looking into the problem and try to fix it.
If you have this problem, please let me know your libcurl version and whether
it is built with NSS or OpenSSL, for example as shown below.
And if you can, please try running a newer libcurl, or OpenSSL instead of NSS.
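A quick way to report both at once, assuming the curl command on your machine
links the same libcurl that s3fs uses:

curl --version
# the first output line lists the libcurl version and the SSL backend it was
# built with, e.g. "... libcurl/7.19.7 NSS/3.13.1.0" or "... OpenSSL/1.0.0"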
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 13 Jun 2013 at 6:40
Hi all,
I checked the curl version together with libnss/OpenSSL.
First, if s3fs runs with a libcurl linked against libnss (not OpenSSL), you
probably need libcurl version 7.21.5 or later.
If you use a libcurl (with libnss) older than that, s3fs leaks a lot of memory.
If you use a libcurl (with libnss) at 7.21.5 or later, s3fs leaks only about 40
bytes (probably from loading the NSS module…).
Next, if you use libcurl with OpenSSL, s3fs does not leak memory (I checked
version 7.19.7).
* Summary for s3fs and libcurl (libnss, openssl):
I tested the following combinations.
curl 7.19.7 + libnss 3.13.1.0 --> memory leak (a lot)
curl 7.19.7 + openssl 1.0.0 --> no memory leak
curl 7.30.0 + libnss 3.14.0.0 --> no memory leak (only ~40 bytes for loading libs)
curl 7.30.0 + openssl 1.0.0 --> no memory leak
Accordingly, I updated the FuseOverAmazon wiki for this problem:
http://code.google.com/p/s3fs/wiki/FuseOverAmazon
Anyone who has this problem (memory leak), please check your libcurl version
and which library it links against (libnss or OpenSSL).
And please let me know your opinion on this reply.
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 15 Jun 2013 at 4:08
Hello guys,
I don't know how much my reply is worth, but I am also getting a pretty
serious memory leak.
The s3fs process grows by 4 KB after every operation.
Reading your comments, I understood that some of my libraries might be too
old, and after investigating, I must confess that is the case.
The OS I am currently using is CentOS 6.4, and I am really beginning to hate
that distro, since all of its packages are more than 3 years old even though
the distro is from ... 2013.
So of course s3fs would not compile at first because FUSE was too old; I
recompiled it.
Then I checked the curl library, which was also too old, and I recompiled it
as well.
Apparently the only security library installed on my system is OpenSSL; I did
not find any trace of libnss.
Anyway, I'm still getting the issue, and I don't really know where it comes
from. It might be libidn or zlib, which libcurl is compiled against. The
trouble is that I'm supposed to use the OS, not recompile it; otherwise I
could just use LFS.
Anyway, s3fs seems to know about the problem, since I'm getting a "library too
old" message:
fuse: warning: library too old, some operations may not not work
Or should I understand that it is not s3fs which is complaining but FUSE?
Is my assumption correct?
In any case, that makes CentOS an unreliable distro for s3fs. :-/
I'll check with FUSE and come back to you.
Original comment by olivier....@gmail.com
on 9 Jul 2013 at 9:30
Hello again,
I can't really say much more.
I just realized that there is also a kernel module for FUSE, and it is not
embedded in the library.
So I am wondering whether the "library too old" complaint comes from the
interaction with the kernel module. The kernel is 2.6.32, but I was not able
to get its revision; I can only say that it is a "recent" kernel for CentOS.
The modinfo output is as follows:
[root@ip-10-154-193-232 fuse-2.9.3]# modinfo fuse
filename: /lib/modules/2.6.32-358.2.1.el6.x86_64/kernel/fs/fuse/fuse.ko
alias: char-major-10-229
license: GPL
description: Filesystem in Userspace
author: Miklos Szeredi <miklos@szeredi.hu>
srcversion: 0957DD49586EC513678776E
depends:
vermagic: 2.6.32-358.2.1.el6.x86_64 SMP mod_unload modversions
parm: max_user_bgreq:Global limit for the maximum number of
backgrounded requests an unprivileged user can set (uint)
parm: max_user_congthresh:Global limit for the maximum congestion
threshold an unprivileged user can set (uint)
At this point I am a bit stuck, but my previous statement is, IMHO, still
valid: I'm supposed to use the OS, not recompile it from scratch, or I'll end
up who knows where.
If you would like more details, please let me know. For what it's worth, this
is also happening on a t1.micro EC2 machine from Amazon.
Best regards,
---
Olivier
Original comment by olivier....@gmail.com
on 9 Jul 2013 at 9:48
Hello, Maquaire
I'm sorry that I don't understand every aspect of this problem, but I checked
s3fs together with its libraries using valgrind.
It reported that the memory leaks are in libcurl->libnss (a sketch of the
check is below).
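A minimal sketch of that kind of check, assuming a test bucket and mountpoint
(both names are placeholders) and running s3fs in the foreground so valgrind
can track it:

valgrind --leak-check=full --show-reachable=yes \
  s3fs mybucket /mnt/s3 -f -o url=https://s3.amazonaws.com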
So I think this problem depends only on libnss (and libcurl), not on the OS or
drivers.
(My kernel module is older than yours: 2.6.32-71.29.1.el6.x86_64.)
I did not try an old FUSE, but s3fs can probably provide most of its functions
with an old FUSE… (some functions probably will not work).
For EC2, could you make a custom image with different libraries?
(I'm sorry, I do not know the details of EC2.)
Thanks for your assistance.
Original comment by ggta...@gmail.com
on 10 Jul 2013 at 7:07
With s3fs 1.71 compiled against libcurl-7.27.0, I have a nice slow, steady
memory-leak march toward OOM. Re-mounting s3fs causes a nice dump and then a
restart of the march. Graphic attached. Happy to help with any debugging.
root 21572 0.6 92.1 2382384 1563116 ? Ssl Aug02 28:39 s3fs
Original comment by matthew....@spatialkey.com
on 5 Aug 2013 at 1:03
s3fs-1.71 compiled against libcurl-7.27.0 and nss-3.14.0 leaks.
s3fs-1.71 compiled against libcurl-7.31.0 and nss-3.14.3 leaks.
These leaks only happen when using SSL to connect to S3. I attached a graph:
same load, but since switching to non-HTTPS there has been no leak of note. I
know you know it's a libcurl/NSS issue; I just thought I'd throw out some more
information (the sketch below shows how the two mounts differ).
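A sketch of the difference, assuming the endpoint scheme is controlled by the
url option (bucket and mountpoint are placeholders):

# HTTPS mount (leaks observed)
s3fs mybucket /mnt/s3 -o url=https://s3.amazonaws.com
# plain-HTTP mount (no leak of note under the same load)
s3fs mybucket /mnt/s3 -o url=http://s3.amazonaws.com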
Original comment by matthew....@spatialkey.com
on 8 Aug 2013 at 4:58
Hi,
I'm sorry for the late reply.
I checked the code for this problem, but I could not find a definite cause or
solution.
However, I changed some code around the initialization of curl and OpenSSL; it
is committed as r479.
Since the memory leak in s3fs depends on the environment (libcurl with
OpenSSL/libnss, the OS?), I cannot say that this is the definite cause or that
it is fixed.
If you can, please compile r479 and test it; a build sketch is below.
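A rough sketch of checking out and building that revision; the SVN URL follows
the usual Google Code layout and is an assumption here:

svn checkout -r 479 http://s3fs.googlecode.com/svn/trunk/ s3fs-r479
cd s3fs-r479
./autogen.sh && ./configure && make
sudo make install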
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 27 Aug 2013 at 8:23
I built the latest from SVN and have it running with the mount using SSL on a
low-priority cluster member. I'll let you know how it looks. Thanks!
Original comment by matthew....@spatialkey.com
on 27 Aug 2013 at 7:05
Hi,
We are facing a similar situation. It looks like we have a memory leak or
other strange behavior even for non-SSL usage. We have been dealing with this
for the last four days.
Steps to reproduce:
1. Mount an S3 bucket with a huge number of files (1 million) and start using
the mounted folder (only cp commands). Everything works fine all day long.
# s3fs -o allow_other,uid=500,gid=500 <S3 bucket> /mnt/s3
2. Stop using the mounted folder for 16 hours. (Go home and sleep.)
3. Start using the mounted folder again. (Return to the office.)
4. s3fs hangs and starts to occupy all free memory and CPU (see attached
picture). There are no messages in syslog or in /var/log/messages; s3fs just
hangs.
System information:
# cat /etc/*release*
CentOS release 6.3 (Final)
CentOS release 6.3 (Final)
CentOS release 6.3 (Final)
cpe:/o:centos:linux:6:GA
# s3fs --version
1.72 - revision r469
#curl --version
curl 7.32.0 (x86_64-redhat-linux-gnu) libcurl/7.32.0 OpenSSL/1.0.0 zlib/1.2.3
c-ares/1.10.0 libidn/1.18 libssh2/1.4.3
Version of fuse being used: 2.8.4
Version of nss being used: 3.14.3
We've installed the latest version of s3fs (r481) and will observe the behavior.
Original comment by Yury_bal...@pubget.com
on 30 Aug 2013 at 12:59
So the SSL leak still exists, although it is *much* slower than it used to be
(compare to the previous graphs). The attached graph shows the same mount
under the same load with SSL (a slow jog to death) and then remounted without
SSL (no leak).
Original comment by matthew....@spatialkey.com
on 3 Sep 2013 at 12:45
Hi, Yury_baltrushevich
I'm sorry for the very late reply.
I needed a long time to check and change the leaking in s3fs.
I fixed some code related to the memory leak.
There should no longer be a memory leak without SSL,
but the leak is not completely fixed with SSL (NSS).
Please test r483 or later.
Please also note that this revision adds the "nosscache" option and the
"--enable-nss-init" configure option, and changes the default parallel count
for HEAD requests (500 -> 20).
Lastly, I have a question about your environment:
do you have 1 million objects in ONE directory object?
I think listing that many objects is hard, and s3fs needs a huge amount of
memory for it.
Original comment by ggta...@gmail.com
on 14 Sep 2013 at 10:03
Hi, matthew
(I'm sorry for the very late reply.
I needed a long time to check and change the leaking in s3fs.)
I fixed some code related to the memory leak.
There should no longer be a memory leak without SSL,
but the leak is not completely fixed with SSL (NSS).
Please test r483 or later.
Please also note that this revision adds the "nosscache" option and the
"--enable-nss-init" configure option, and changes the default parallel count
for HEAD requests (500 -> 20).
Lastly, I have a question about your environment:
do you have 1 million objects in ONE directory object?
I think listing that many objects is hard, and s3fs needs a huge amount of
memory for it.
Original comment by ggta...@gmail.com
on 14 Sep 2013 at 10:05
Hi, matthew
Sorry for my mistaken comment.
Revision 483 (482) calls initialization functions for libxml2 and NSS.
You can run autogen.sh and configure with "--enable-nss-init"; these functions
are then called in the main function.
If your machine does not have the nss-devel package, please install it (a
build sketch is below).
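A rough build sketch under those assumptions; the package name is for
CentOS/RHEL-style systems:

# install the NSS headers if they are missing (CentOS/RHEL)
sudo yum install nss-devel
# rebuild s3fs with the NSS initialization hook enabled
./autogen.sh
./configure --enable-nss-init
make && sudo make install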
This revision does not fix the memory leak completely, but some leaks are
fixed.
Please try it and check.
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 15 Sep 2013 at 1:09
Hi, matthew
I probably cannot fix this issue unless libnss and libcurl are upgraded.
I rechecked the s3fs code and the memory leaks.
I think this problem is caused by memory leaking inside curl (with NSS).
I found the same issue on the curl mailing list:
http://curl.haxx.se/mail/lib-2013-08/0175.html
The cause discussed on the mailing list seems to be the same as this issue.
In addition, I tested the mallopt environment variables.
I set "MALLOC_MMAP_MAX_=0" so that the allocator cannot use mmap'd areas, and
I also set the other variables (MALLOC_TRIM_THRESHOLD_=0, MALLOC_TOP_PAD_=0).
In this way, s3fs's VIRT size (as shown by the top command) could be kept low.
After that, I ran s3fs with "max_stat_cache_size=0", which means s3fs uses no
stat cache (the sketch below shows the combination I used).
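For reference, a sketch of that combination; the bucket name and mountpoint
are placeholders:

# glibc malloc tuning described above (MALLOC_MMAP_MAX_=0 disables mmap-based
# allocations)
export MALLOC_MMAP_MAX_=0
export MALLOC_TRIM_THRESHOLD_=0
export MALLOC_TOP_PAD_=0
# mount with the stat cache disabled
s3fs mybucket /mnt/s3 -o max_stat_cache_size=0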
But after many HEAD requests were sent (e.g. an ls command listing many
files), the RES memory (and VIRT) still increased.
So I think this means memory is leaking in libnss+libcurl (in the other case,
libssl+libcurl, memory does not increase).
r482 fixed some code for the memory leaks and added code for initializing NSS,
but it is not a sufficient solution for this problem; the cause lies deep in
libcurl and libnss.
I think this issue is hard to fix and will take a lot of time.
What do you think?
If you have another idea or explanation, please let me know.
Original comment by ggta...@gmail.com
on 19 Sep 2013 at 7:24
Hi, matthew
I'm sorry for the very slow reply on this issue.
I learned something about the s3fs memory leak that may explain this case.
In my case, the libcurl version was older than 7.21.4, and those versions have
a memory-leak bug.
(For example, CentOS's yum repository does not carry the latest curl/libcurl
version.)
The bug only occurs when libcurl is built with the NSS library.
You can see "nss: avoid memory leaks and failure of NSS shutdown" in the
7.21.4 release notes.
I updated to the latest libcurl with NSS and tested it with s3fs.
After that, it worked well for me; the memory usage no longer seems to
increase.
*** NOTES
We moved the s3fs repository from Google Code to GitHub
(https://github.com/s3fs-fuse/s3fs-fuse).
I also updated the master branch today, which supports two more SSL libraries
for s3fs.
If you use libcurl with NSS, you should build s3fs against the NSS library.
Please try building s3fs with NSS; a sketch is below.
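A rough sketch of building from the GitHub master with the NSS backend; the
--with-nss configure switch is my assumption based on the new SSL-library
support mentioned above:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure --with-nss
make && sudo make install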
If you can, please upgrade libcurl and s3fs (from GitHub), and check whether
the memory leak persists.
Thanks in advance for your help.
Original comment by ggta...@gmail.com
on 1 Jun 2014 at 3:15
Hi,
I have one box configured as below:
* S3FS master (1.77+)
* fuse 2.9.3
* libcurl 7.36.0
* nss 3.16.0
I will let it run for a couple days (barring problems) and update you. Thanks
so much for continuing to work on this problem!!
Original comment by matthew....@spatialkey.com
on 2 Jun 2014 at 1:26
Original issue reported on code.google.com by
xtru...@gmail.com
on 11 Jan 2013 at 4:02