MCxiaofang opened 8 months ago
Add the following code:

1. ns3-quic-channel.h

```cpp
void config_channel_params(int num_pkts, int interval_ms);
```

2. ns3-quic-channel.cc

```cpp
void config_channel_params(int num_pkts, int interval_ms) {
  // Override the channel's write batching: num_pkts packets per write,
  // one write every interval_ms milliseconds.
  kWritePackets = num_pkts;
  kWriteDelta = QuicTime::Delta::FromMilliseconds(interval_ms);
}
```
3. ns3-quic-util.h

```cpp
void set_quic_stream_buffer_threshold(uint64_t threshold, int num_pkt, int interval_ms);
```

4. ns3-quic-util.cc (of course, also add the needed #include)

```cpp
void set_quic_stream_buffer_threshold(uint64_t threshold, int num_pkt, int interval_ms) {
  // Raise QUIC's buffered-data threshold and reconfigure the channel's
  // write batching in one call.
  SetQuicFlag(FLAGS_quic_buffered_data_threshold, threshold);
  quic::config_channel_params(num_pkt, interval_ms);
  // std::cout << GetQuicFlag(FLAGS_quic_buffered_data_threshold) << std::endl;
}
```
5. quic-main.cc (https://github.com/SoonyangZhang/quic-on-ns3/blob/main/scratch/quic-main.cc#L542)

```cpp
// 1000*1024-byte buffered-data threshold, 100 packets per write, every 5 ms.
set_quic_stream_buffer_threshold(1000 * 1024, 100, 5);
```
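As a sanity check on what the two batching knobs imply for offered load, here is a standalone sketch (the ~1500-byte packet size is my assumption, not something the repository fixes):

```cpp
// Standalone sketch: application offered load implied by
// config_channel_params(num_pkts, interval_ms) for a fixed packet size.
// Not the repository's code.
#include <iostream>

double offered_load_mbps(int num_pkts, int interval_ms, int packet_bytes) {
  // num_pkts packets of packet_bytes bytes each, written every interval_ms.
  return num_pkts * packet_bytes * 8.0 / (interval_ms / 1000.0) / 1e6;
}

int main() {
  // The values from step 5, assuming ~1500-byte packets:
  std::cout << offered_load_mbps(100, 5, 1500) << " Mbps\n";  // prints 240
}
```

Roughly 240 Mbps of offered load comfortably exceeds a 100 Mbps bottleneck, but is well below 1000 Mbps, which may be related to why the gigabit case below still looks strange.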
For 100 Mbps, these parameters work. It works!
I wanted to use the program to test 100 Mbps and 1000 Mbps (40 ms RTT) bottleneck link bandwidths, but the results were abnormal at such high bandwidths, as summarized below.

- Goodput is only 1500 Kbps (with either a 100 Mbps or a 1000 Mbps bottleneck link).
- The send rate, however, increases with the bottleneck link bandwidth (a 57622 Kbps send rate while goodput is only 1500 Kbps).
- The final statistics show a very low packet loss rate.
- The NS3 simulation runs extremely fast compared to a program that runs normally with TCP at a 100 Mbps bottleneck, and even faster than a QUIC test at a 10 Mbps bottleneck, which shows that the statistics program is not at fault.
The send rate stops being recorded after 5.3 s. Tracing the program, we found that

`if (m_rate != bps)`

causes sendrate.txt to stop updating, because the bandwidth computed by the congestion control algorithm has stayed constant since 5.3 s. (The program computes the send rate by dividing the congestion window size by srtt, and printing with cout shows that srtt and cwnd no longer change after 5.3 s.) Note that the results are partially normal at 100 Mbps after making the changes above (1000 Mbps is always strange), which can be used to further speculate on the root cause of the abnormal performance.
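For context, here is a minimal sketch of that change-triggered trace logic (the class and method names are illustrative, not the repository's actual API; only the `if (m_rate != bps)` guard and the cwnd/srtt division come from the behavior described above):

```cpp
// Illustrative sketch of a change-triggered rate trace. A sample is
// written only when the computed rate changes, so a frozen cwnd/srtt
// makes sendrate.txt stop growing even though the sender is still active.
#include <cstdint>
#include <fstream>
#include <string>

class SendRateTrace {
public:
  explicit SendRateTrace(const std::string& path) : m_out(path) {}

  // The issue reports the rate is computed as cwnd / srtt.
  void OnSample(double now_s, uint64_t cwnd_bytes, double srtt_s) {
    uint64_t bps = static_cast<uint64_t>(cwnd_bytes * 8 / srtt_s);
    if (m_rate != bps) {                   // the guard from the trace code
      m_rate = bps;
      m_out << now_s << "\t" << bps / 1000.0 << "\n";  // time, kbps
    }
  }

private:
  std::ofstream m_out;
  uint64_t m_rate = 0;                     // last logged rate
};

int main() {
  SendRateTrace trace("sendrate.txt");
  trace.OnSample(5.3, 125000, 0.04);  // logged: rate changed
  trace.OnSample(5.4, 125000, 0.04);  // skipped: same rate, file stalls
}
```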
Here is the source code for my test file. Run it with the following command:

```sh
./waf --run "scratch/dumbbell-quic --cc1=bbrv2 --cc2=cubic --lo=0 --nLeaf=4 --gap=2 --it=2"
```
- nLeaf: number of user nodes on each side of the dumbbell topology
- cc1: congestion control algorithm used by half of the nodes in the dumbbell topology
- cc2: congestion control algorithm used by the nodes in the other half
- gap: interval between application launches (the first app starts at 5.0 s)
- lo: packet loss rate; 10 equals 0.01%, 100 equals 0.1%

To demonstrate that the other NS3 configuration does not affect the results, the following code simply replaces the application in the above test file with the native NS3 application used for TCP testing. It can be observed that, apart from the application section, the rest of the code is completely unchanged.

Strange point: even so, previous test results showed that the TrafficControl policy still affects the results (the algorithm works fine at 100 Mbps after switching from DynamicQueueLimits to a DropTailQueue with a low MaxSize); a rough sketch of that queue change follows.
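This is roughly the queue change I mean (a sketch assuming the bottleneck is built with PointToPointHelper; the actual test file may configure this differently, and the Uninstall step only applies once the internet stack has installed a queue disc):

```cpp
// Sketch: replace the default queueing on the bottleneck with a plain
// DropTailQueue that has a small MaxSize, and remove the TrafficControl
// queue disc so the device queue alone bounds the buffering.
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/traffic-control-module.h"

using namespace ns3;

NetDeviceContainer InstallBottleneck(NodeContainer routers) {
  PointToPointHelper bottleneck;
  bottleneck.SetDeviceAttribute("DataRate", StringValue("100Mbps"));
  bottleneck.SetChannelAttribute("Delay", StringValue("20ms"));
  // Small device queue instead of the default:
  bottleneck.SetQueue("ns3::DropTailQueue<Packet>",
                      "MaxSize", StringValue("100p"));
  NetDeviceContainer devs = bottleneck.Install(routers);
  // Remove whatever root queue disc the traffic-control layer installed:
  TrafficControlHelper tch;
  tch.Uninstall(devs);
  return devs;
}
```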