chigga102 / s3fuse

Automatically exported from code.google.com/p/s3fuse

Slow upload speeds compared to gsutil #9

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1.#gsutil cp largefile gs://bucket
2.Monitor upload throughput on network
3.Stop transfer
4.#s3fuse /mnt/bucket
5.#cp largefile /mnt/bucket/largefile
6.Monitor upload throughput on network

What is the expected output? What do you see instead?
Expected output is similar transfer speeds between s3fuse and gsutil.
Instead, gsutil uploads at ~12Mbps while s3fuse uploads at ~1.5Mbps.

What version of the product are you using? On what operating system?
latest rpm

Please provide any additional information below.
Are there any options to increase the speed of s3fuse? I know it's emulating 
the Google Cloud Storage commands, but I would hope such a difference in speed 
isn't a result of that.
P.S. Great program, I'm pretty happy so far with your work.
Attached is config

Original issue reported on code.google.com by ACiDG...@gmail.com on 14 Jul 2013 at 9:19

GoogleCodeExporter commented 8 years ago
I'll have to investigate this a little further.  In the interim, can you try 
setting "upload_chunk_size" in s3fuse.conf to 8192?  That's the chunk size 
gsutil uses and I'm curious to see if it makes a difference with s3fuse.
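For reference, the suggested change would be a single line in s3fuse.conf; this fragment is illustrative and omits the rest of the file:

```
# s3fuse.conf -- fragment only; other keys (bucket, auth, etc.) omitted
upload_chunk_size=8192
```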

Also -- are you running in a Google Compute Engine instance?

Original comment by tar...@bedeir.com on 14 Jul 2013 at 9:59

GoogleCodeExporter commented 8 years ago
No, no GCE instance on my account. Just using this for backup.

I figured out how to use duplicity with S3 compatibility on Google Cloud 
Storage, which also gives high upload speeds, as gsutil did.

I'll report back on the upload_chunk_size change once duplicity finishes, so I 
can have the full uplink available to verify. I still may need s3fuse, as I prefer OAuth over HMAC.

Original comment by ACiDG...@gmail.com on 14 Jul 2013 at 11:46

GoogleCodeExporter commented 8 years ago
When I use upload_chunk_size=8192 I get the following error:

file::test_transfer_chunk_size: upload chunk size must be a multiple of 131072.
main: caught exception while initializing: invalid upload chunk size

Setting it to 131072 doesn't improve the situation, but at least it doesn't 
report the exception.
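The check behind that error presumably looks something like the following. This is a Python sketch of the behavior described by the error message, not the actual s3fuse source; the function and constant names are assumptions:

```python
# Minimum granularity implied by the error message: 131072 bytes (128 KiB).
MIN_CHUNK = 131072

def validate_upload_chunk_size(size: int) -> int:
    """Reject sizes that aren't a positive multiple of 131072,
    mirroring s3fuse's "upload chunk size must be a multiple of 131072"."""
    if size <= 0 or size % MIN_CHUNK != 0:
        raise ValueError("upload chunk size must be a multiple of 131072")
    return size

# 8192 would be rejected here, which matches the exception reported above;
# 131072 (1 x 128 KiB) and 2097152 (16 x 128 KiB) would be accepted.
```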

Original comment by ACiDG...@gmail.com on 15 Jul 2013 at 6:39

GoogleCodeExporter commented 8 years ago
Fair enough. I need to investigate a little.

Original comment by tar...@bedeir.com on 16 Jul 2013 at 12:30

GoogleCodeExporter commented 8 years ago
Thanks, if there's any further testing I can do, please let me know.

I also realized my bug report is full of grammatical errors and omissions, 
please don't judge my technical ability on this fact :)

Original comment by ACiDG...@gmail.com on 16 Jul 2013 at 6:45

GoogleCodeExporter commented 8 years ago
I finally got around to running some tests and yes, there is something of a gap 
in throughput.  In my tests, increasing "upload_chunk_size" to 2097152 (2 MB) 
made uploads about as fast as with gsutil.  The tradeoff with upping the 
transfer chunk size though is that if any one chunk fails, you're resending 
more data, but if your connection is reliable enough this shouldn't be an 
issue.  Please give it a shot and let me know.
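Concretely, the setting would look like this (fragment illustrative; 2097152 = 16 × 131072, so it passes the multiple-of-131072 check):

```
# s3fuse.conf -- fragment only
upload_chunk_size=2097152
```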

Original comment by tar...@bedeir.com on 4 Aug 2013 at 2:10

GoogleCodeExporter commented 8 years ago

Original comment by tar...@bedeir.com on 25 Nov 2013 at 3:16