Although we normally will not post a big 500M file, I think 100 raw images
of 5M each is a common case.
I also tried the option -debug=false; the result is the same.
Original comment by darkdark...@gmail.com
on 5 Aug 2013 at 4:42
Or maybe it is the cache?
It seems that Haystack caches the file just written.
I am not sure.
Original comment by darkdark...@gmail.com
on 5 Aug 2013 at 4:53
When reading and writing, the file content is read into memory. So if you are
reading just one or two big files, memory usage will go up. If you read or write
100 5M files, memory will not go wild. Please confirm whether this matches
your use case.
Original comment by chris...@gmail.com
on 5 Aug 2013 at 5:08
Thanks very much.
Re-tested:
1. With just 100 5M files, yes, the memory does not go crazy.
2. If I upload a 500M file 4 times, the memory goes crazy, but it drops back to
normal after about 10 minutes. (Are there some tricks?)
Since I can control the size of the files being uploaded, weed-fs fits my environment.
Thanks.
Original comment by darkdark...@gmail.com
on 6 Aug 2013 at 3:29
Thanks for the confirmation! WeedFS was not created for super large files, but for
many small files.
Memory garbage collection is managed by Go itself.
If serving large files becomes a target and the Go GC cannot keep up, we can
manage the memory in code. But it seems this use case does not need that yet.
Original comment by chris...@gmail.com
on 6 Aug 2013 at 7:23
Original issue reported on code.google.com by
darkdark...@gmail.com
on 5 Aug 2013 at 4:33