Try testing between the two machines with the '-w 256k' switch added to both the server and client command lines.
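For reference, the two command lines would look roughly like this (x.x.x.x standing in for the server's address; any other options omitted):
iperf3 -s -w 256k
iperf3 -c x.x.x.x -w 256k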
ADDITIONALLY, I have noticed 'discrepancies' in the way iperf reports throughput when comparing the reporting interval, the total MBytes of data transferred, and the reported Mbps figure, together with data from other tools that show how many packets were transferred (and of what sizes) in the same time interval.
I SUSPECT this is due to iperf reporting being based on the TCP (or UDP) payload, whilst other tools include the IP, TCP/UDP and L2 headers in the calculation.
The net result is that iperf always seems 'pessimistic' compared to the way other tools / theory report throughput.
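As a rough check on how large that effect can be (assuming standard 1500-byte Ethernet frames and no TCP options): a full-sized segment carries 1460 payload bytes while the wire carries about 1460 + 20 (TCP) + 20 (IP) + 38 (Ethernet header, FCS, preamble and inter-frame gap) = 1538 bytes, so payload-only accounting understates wire throughput by only about 5%.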
Original comment by CharlesA...@gmail.com
on 16 Nov 2013 at 10:13
Yes, iperf is meant to measure "application level throughput", and not "bits on the wire" throughput.
I think this behavior is expected.
Original comment by bltier...@es.net
on 16 Dec 2013 at 4:25
Yeah, although not counting packet headers should only be a tiny factor, since we're sending large packets. The initial report was a 25% deficit.
Original comment by jef.posk...@gmail.com
on 16 Dec 2013 at 4:57
Sorry for the delay, I didn't have the equipment to test the suggestions for a
while.
> CharlesA,
The '-w 256k' switch does help! So that would suggest that running iperf3
without the '-w' switch results in throttling due to buffering? I'll test and
report more details in a day or two.
> bltier,
I too think this is not an overhead issue. Why would there be more overhead in running iperf3 than in copying a file using 'copy'? If it were the other way around, then...
Original comment by dse....@gmail.com
on 16 Dec 2013 at 5:19
It's certainly possible that the copy command also raises its window size.
Original comment by jef.posk...@gmail.com
on 16 Dec 2013 at 5:49
Here's some data on the effect of the -w switch for a 3-second test over a Gigabit connection:
'iperf3 -c x.x.x.x -t 3 -w buff'
Buffer Size (k)   Throughput (Mbps)
      2                  80
      4                 220
      8                 260
     16                 520
     32                 680
     64                 750
    128                 730
    256                 920
    512                 940
   1024                 940
   2048                 940
   4096                 450
I guess setting -w to 512k is the way to go if you want to test the maximum
link throughput.
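For anyone repeating the sweep above, it can be scripted; a minimal sketch for a Unix-like shell (the tests here were run on Windows, so this is only illustrative, and x.x.x.x is a placeholder for the server's address):
for w in 2 4 8 16 32 64 128 256 512 1024 2048 4096; do
    iperf3 -c x.x.x.x -t 3 -w ${w}k
done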
Original comment by dse....@gmail.com
on 16 Dec 2013 at 5:53
What is your RTT? And what are your values for all of these (assuming you are using Linux):
net.core.rmem_max
net.core.wmem_max
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
See: http://fasterdata.es.net/host-tuning/linux/
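(For example, they can be read and, if needed, raised with sysctl, run as root; the values below are only illustrative, see the link above for recommended settings:)
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"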
Original comment by bltier...@es.net
on 16 Dec 2013 at 6:03
Sorry, I'm using Windows 7 64-bit.
If by RTT you mean ping round-trip time, it's <1 ms... I know, not useful.
Original comment by dse....@gmail.com
on 16 Dec 2013 at 6:05
Since we don't currently support Windows, and since we think this is due to other system issues, marking this one 'won't fix'.
Original comment by bltier...@es.net
on 18 Dec 2013 at 10:22
Original issue reported on code.google.com by
dse....@gmail.com
on 13 Nov 2013 at 3:52