I just tested and it also happens even when the partition is 5 GB. I originally
thought the bug only occurred when the file got stuck in the middle of the copy
(for example, a 1.7 GB file copying with a 1 GB /tmp partition), but the copy
went through perfectly and the /tmp space was still not released.
3 questions:
1) Is there a way to manually release the files until the bug is fixed?
2) Is there a way to use a different /tmp folder so the stability of the server
is not affected?
3) If not, is there a way to limit s3fs to 80% of the /tmp directory so the
server doesn't crash?
-Sammy Capuano
Original comment by danyba...@gmail.com
on 21 Sep 2008 at 6:14
1) umount and then mount
2) s3fs uses tmpfile()... see "man tmpfile"
3) disk quota?
Original comment by rri...@gmail.com
on 21 Sep 2008 at 6:45
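To illustrate answer #2, here is a minimal sketch of how tmpfile() behaves,
assuming glibc (this is not s3fs's actual code). glibc creates the file under
P_tmpdir (normally /tmp) and removes it automatically, so the disk space comes
back only when the stream is closed or the process exits; that is also why
unmounting releases the space.

    #include <stdio.h>

    int main(void) {
        /* tmpfile() creates an unnamed temporary file; on glibc it lives
         * under P_tmpdir (normally /tmp) and is deleted automatically, so
         * the space is reclaimed only when the FILE* is closed or the
         * process exits. */
        FILE *tmp = tmpfile();
        if (tmp == NULL) {
            perror("tmpfile");
            return 1;
        }

        /* ... write cached object data here ... */

        fclose(tmp);   /* disk space is released at this point */
        return 0;
    }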
Sorry, I just noticed that it did release it when the copy succeeded.
Original comment by danyba...@gmail.com
on 21 Sep 2008 at 8:39
I posted related comments in issue 41
Original comment by danyba...@gmail.com
on 21 Sep 2008 at 9:59
Thanks for the answer on #2.
But where is the /tmp directory defined in your code?
We want to change it to another location.
Original comment by sa...@siliconinnovations.com
on 5 Oct 2008 at 10:45
Hi Sammy-
s3fs uses tmpfile() to create temporary files. I had a quick look at whether the
location can be configured (e.g., via the TMPDIR environment variable) but had
no luck; either that would have to work, or a configurable tmpdir would need to
be added as a feature (a rough sketch of what that could look like is below).
Original comment by rri...@gmail.com
on 5 Oct 2008 at 3:55
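A rough sketch of what a configurable temp directory might look like, using
mkstemp() instead of tmpfile(); the helper name tmpfile_in and the use of TMPDIR
here are illustrative assumptions only, not part of s3fs.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical helper: behaves like tmpfile() but honors a
     * caller-chosen directory. Not part of s3fs. */
    static FILE *tmpfile_in(const char *dir) {
        char path[4096];
        snprintf(path, sizeof(path), "%s/s3fs-XXXXXX", dir);

        int fd = mkstemp(path);   /* create a uniquely named file in dir */
        if (fd == -1)
            return NULL;

        unlink(path);             /* keep tmpfile() semantics: file vanishes on close */
        return fdopen(fd, "w+");
    }

    int main(void) {
        const char *dir = getenv("TMPDIR");        /* choose the directory to use */
        FILE *tmp = tmpfile_in(dir ? dir : "/tmp");
        if (tmp == NULL) {
            perror("tmpfile_in");
            return 1;
        }
        fclose(tmp);
        return 0;
    }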
A review of the code shows that all file descriptors that are associated with
tmpfile() are closed in one way or another. Documentation for tmpfile() states:
"The file is deleted automatically when it is closed or when the program
terminates."
Closing this old issue.
Original comment by dmoore4...@gmail.com
on 12 Feb 2011 at 5:36
Original issue reported on code.google.com by
rri...@gmail.com
on 19 Sep 2008 at 2:48