keroscarel / s3backer

Automatically exported from code.google.com/p/s3backer

Wish: throttling option #15

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
I'm using s3backer on one of my home servers to rsync my photo collection
to S3, but this heavily disturbs my SSH sessions on other machines (a lot
of typing latency). I've tried throttling rsync (--bwlimit), but since I
run s3backer with a cache, the rsync throttling is not very effective. So,
I would love to see a (good) throttling/bandwidth-limiting mechanism
implemented in s3backer, please ;)

Original issue reported on code.google.com by mrva...@gmail.com on 19 Oct 2009 at 12:46

GoogleCodeExporter commented 8 years ago
If you are using Linux you might want to take a look at traffic shaping:

  http://www.google.com/search?q=linux+traffic+shaping

This is a more general and precise solution to the bandwidth prioritization
problem.
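
If you want to experiment, here is a minimal sketch using the Linux tc tool
with a token bucket filter (the interface name and rate are assumptions;
adjust them to your uplink):

  # cap all outbound traffic on eth0 to roughly 1 Mbit/s
  tc qdisc add dev eth0 root tbf rate 1mbit burst 5kb latency 70ms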

Also, you might consider reducing the number of block cache threads, which
directly limits the number of concurrent write operations.
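
For example (a sketch only; the bucket name and mount point below are
placeholders):

  s3backer --blockCacheThreads=2 mybucket /mnt/s3b

Fewer threads means fewer simultaneous uploads competing for your upstream
link.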

Original comment by archie.c...@gmail.com on 19 Oct 2009 at 2:27

GoogleCodeExporter commented 8 years ago
Thanks for the tip. I am aware of the traffic shaping options in Linux, but
firstly I'm not using a Linux router to connect to my ADSL line, and
secondly I was hoping to avoid the traffic-shaping learning curve. I will
try reducing the block cache threads, though!

Original comment by mrva...@gmail.com on 19 Oct 2009 at 3:15

GoogleCodeExporter commented 8 years ago
Could this cURL option be reused via a command-line option? CURLOPT_MAX_SEND_SPEED_LARGE
(http://curl.haxx.se/libcurl/c/curl_easy_setopt.html)
I'd do it myself, but I lack the C experience :(
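
For reference, a minimal standalone sketch of what setting that option
looks like against the libcurl API (illustrative only, not s3backer's
actual code):

  #include <curl/curl.h>

  int main(void)
  {
      CURL *curl = curl_easy_init();
      if (curl == NULL)
          return 1;

      /* CURLOPT_MAX_SEND_SPEED_LARGE takes a curl_off_t counting bytes
         per second; libcurl pauses the transfer whenever the upload rate
         would exceed this cap. */
      curl_off_t max_send_bytes_per_sec = (curl_off_t)100 * 1024;  /* ~100 KiB/s */
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, max_send_bytes_per_sec);

      /* ... set CURLOPT_URL and the upload data, then curl_easy_perform() ... */

      curl_easy_cleanup(curl);
      return 0;
  }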

Original comment by mrva...@gmail.com on 21 Oct 2009 at 7:32

GoogleCodeExporter commented 8 years ago
My C skills were less rusty than I thought, and the code is quite readable,
so I made a stab at it. Please let me know if it's good enough to be
included (it's a straightforward copy of existing configuration code).
I tested it and it works, but only in combination with
--blockCacheThreads=1, so you might want to force it?

My SSH sessions are (more or less) responsive again! :)

Original comment by mrva...@gmail.com on 23 Oct 2009 at 8:10

Attachments:

GoogleCodeExporter commented 8 years ago
I just noticed that the change in s3b_config.h is not needed and should be skipped.

Original comment by mrva...@gmail.com on 23 Oct 2009 at 8:16

GoogleCodeExporter commented 8 years ago
Thanks for the patch. I can't get to this immediately, but I will work on
it as soon as time permits.

Original comment by archie.c...@gmail.com on 26 Oct 2009 at 2:29

GoogleCodeExporter commented 8 years ago
Fixed in r420.
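
Usage would look something like this (assuming the r420 options are named
--maxUploadSpeed/--maxDownloadSpeed and take bits per second, per the
discussion below; the bucket name and mount point are placeholders):

  s3backer --maxUploadSpeed=500000 --maxDownloadSpeed=2000000 mybucket /mnt/s3b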

Original comment by archie.c...@gmail.com on 26 Oct 2009 at 6:48

GoogleCodeExporter commented 8 years ago
Looks good, but with all due respect I don't think it's a good idea to
define the upload and download speeds in bits/s. Bits/s are commonly used
to describe raw transfer speeds, but cURL can only measure real bytes sent
and will /never/ be aware of the overhead imposed by the transmission
protocol, like TCP/IP. Even cURL defines the parameter in bytes/s, for a
good reason.

Original comment by mrva...@gmail.com on 26 Oct 2009 at 7:04

GoogleCodeExporter commented 8 years ago
It doesn't really matter what the units are... bits/sec = bytes/sec * 8
(s3backer divides the quantity by eight before configuring cURL with it).
So the choice should be whatever is most natural for users. This is simply
a measure of bandwidth, which is a rate (quantity/time), not an absolute
quantity.
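
To make that conversion concrete, here is a sketch of the divide-by-eight
step (the function and variable names are illustrative, not s3backer's
actual identifiers):

  #include <curl/curl.h>

  /* Convert a user-facing bits-per-second limit into the bytes-per-second
     value libcurl expects, e.g. 1500000 bits/s -> 187500 bytes/s. */
  static void set_upload_limit(CURL *curl, unsigned long bits_per_sec)
  {
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE,
          (curl_off_t)(bits_per_sec / 8));
  }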

I chose bits/sec because when people talk about bandwidth that's usually
the way they refer to it (e.g., people say "my DSL line gets 1.5Mbps
downstream and 384kbps upstream").

The TCP, etc. overhead is not counted or implied in either case, so that's
not relevant. In other words, I'm not aware of any convention that
"bytes-per-second" implicitly means "not counting overhead" whereas
"bits-per-second" means "counting overhead". You seem to be implying that
there is... I'm curious where you got that notion. The way I think of it,
bandwidth is the same no matter how you logically group the individual bits
together.

Original comment by archie.c...@gmail.com on 26 Oct 2009 at 8:18

GoogleCodeExporter commented 8 years ago
It's exactly that distinction that I mean: when speaking of DSL bandwidth,
people refer to [M|k]bits/s, but you can hardly convert that back to real
[k]bytes/s because of the IP (and ATM/PPPoE) overhead. So telling s3backer
to limit its upload rate to 100 kbit/s, as you suggest, will probably use
up to 110 kbit/s of the line. I agree I'm nitpicking here and the choice is
trivial, but what I'm trying to say is that it may be misleading to suggest
that s3backer's upload limit translates directly to the upload limit of the
DSL line; it's off by at least ~10%. Bits/s is probably better understood
(because of the DSL-line discussion); bytes/s would just be 'more correct'
because of the way cURL calculates it.
In either case, I'm glad the option is implemented upstream ;)

Original comment by mrva...@gmail.com on 26 Oct 2009 at 8:33

GoogleCodeExporter commented 8 years ago
I think it's at least worth mentioning in the man page that the limits do
not count overhead. I'll add something to that effect.

Original comment by archie.c...@gmail.com on 26 Oct 2009 at 9:38
