cedric-1 opened this issue 8 years ago (status: Open)
Did you find your answer?
No, I used another mechanism adapted to my use case. I didn't have enough time to investigate further.
This error is related to the OpenFlow version that you are using. To work with queues, you must use OpenFlow 1.3 or later.
version negotiation failed: we support versions 0x04 to 0x04 inclusive but peer supports no later than version 0x01
This command changes the version on the OVS switch:
> ovs-vsctl set bridge foo protocols=OpenFlow13
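To confirm the setting took effect (standard ovs-vsctl usage, with foo being the bridge name from the command above):
> ovs-vsctl get bridge foo protocols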
@cedric-1 I think the answer is that htb's ceil rate is 1 Gbit/s by default.
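If so, the HTB classes actually installed on the switch interface can be inspected with tc; the interface name below is just an example:
> tc class show dev s1-eth2
The rate and ceil values printed there should show whether a 1 Gbit/s ceiling is being applied.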
Hi,
I'm working with the CPqD switch on Mininet, and I would like to add some QoS to my network, but I have some problems with the queuing mechanism: my queues divide the available bandwidth equally, without taking into account the [BW] argument I pass them. I guess I missed something basic in the implementation, since I have seen examples of CPqD queues working on the internet, but I can't figure out what I did wrong. Here are the details:
My settings:
Ubuntu version: 14.04
Mininet version: 2.2.1, then 2.3.0d1 (both tested)
dpctl version: 1.3.0
I installed the CPqD switch directly:
> mininet/util/install.sh -3f
(This command didn't raise any error, but to be sure I also installed it with a second method on another VM: https://wiki.onosproject.org/display/ONOS/CPqD+1.3+switch+on+recent+Ubuntu+versions)
I removed the "--no-slicing" option in ~/mininet/mininet/node.py and ran:
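(For reference, this flag comes from UserSwitch's dpopts argument, which defaults to '--no-slicing'; if that's right, the same effect is possible from the Python API without editing node.py. A minimal sketch:)

from functools import partial
from mininet.net import Mininet
from mininet.node import UserSwitch

# Assumption: dpopts defaults to '--no-slicing'; an empty string drops the
# flag so ofdatapath starts with slicing (and thus queues) enabled
net = Mininet( switch=partial( UserSwitch, dpopts='' ) )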
This is the topology I use:
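(The file itself isn't reproduced here; a minimal sketch consistent with the description, assuming h1, h2 and h3 hang off s1 and h4 off s2 with a single s1-s2 link:)

from mininet.topo import Topo

class MyTopo( Topo ):
    "Assumed layout: h1, h2, h3 -- s1 -- s2 -- h4"
    def build( self ):
        s1 = self.addSwitch( 's1' )
        s2 = self.addSwitch( 's2' )
        for name in ( 'h1', 'h2', 'h3' ):
            self.addLink( self.addHost( name ), s1 )
        self.addLink( s1, s2 )
        self.addLink( self.addHost( 'h4' ), s2 )

topos = { 'mytopo': ( lambda: MyTopo() ) }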
I launch it with the --link tc option:
> sudo mn --custom ~/mininet/custom/topo.py --topo mytopo --switch user,protocols=OpenFlow13, --link tc
I also put some basic rules into s1 and s2 so that h1, h2 and h3 can ping h4:
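(The exact rules aren't reproduced here; they were plain forwarding entries of this general shape, using dpctl's flow-mod against the unix socket Mininet creates for each user switch; the port numbers are illustrative:)
> dpctl unix:/tmp/s1 flow-mod table=0,cmd=add in_port=1 apply:output=4
> dpctl unix:/tmp/s1 flow-mod table=0,cmd=add in_port=4 apply:output=1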
Here ping works fine.
I tried some iperf tests between h1 and h4 and between h2 and h4 to check the system behaviour:
h1> iperf -c 10.0.0.4 -u -M 1000 -t 30
Result: h1 -> h4: 996317 bps
NB: iperf doesn't fully work on my system: UDP and TCP checksums are not valid, which probably causes the packets to be dropped by the server. So I don't launch any iperf server on h4, and I measure the rate of the UDP flows using Wireshark > Statistics > Conversations > UDP. This works for UDP, which doesn't require any ACK, but not for TCP.
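(If the bad checksums come from TX checksum offloading on the virtual interfaces, a common workaround is to turn it off so checksums are computed in software; interface names are examples:)
h1> ethtool -K h1-eth0 tx off
h4> ethtool -K h4-eth0 tx off
For UDP tests, iperf's -b flag sets the target sending rate, e.g. -b 1M.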
Result: the bandwidth is split between the flows at random.
Then I added two queues and the corresponding rules:
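(For reference, the queues were created with dpctl's queue-mod and the flows steered into them with a queue action; the argument order and action syntax below are my best recollection of the CPqD tooling, so treat them as assumptions:)
> dpctl unix:/tmp/s1 queue-mod 4 1 200
> dpctl unix:/tmp/s1 queue-mod 4 2 800
> dpctl unix:/tmp/s1 flow-mod table=0,cmd=add in_port=1 apply:queue=1,output=4
> dpctl unix:/tmp/s1 flow-mod table=0,cmd=add in_port=2 apply:queue=2,output=4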
I made some checks:
This error is raised if more than one queue is set; otherwise we have (for example):
If I ping h4 from h1 and h2, packets pass through the queues.
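(Per-queue packet counters can also be read back with dpctl's queue statistics, assuming the build supports the request:)
> dpctl unix:/tmp/s1 stats-queue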
Now we launch the same iperf tests:
Considering that queue bandwidths are expressed in tenths of a percent of the link rate, I expected the following split of the roughly 1 Mbps link: h1 -> h4: 200/1000 of the link, i.e. 200000 bps; h2 -> h4: 800/1000, i.e. 800000 bps.
But the result is: h1 -> h4: 508954 bps, h2 -> h4: 508812 bps.
After trying many times I concluded that enabling queuing divides the bandwidth equally between the two flows, no matter what rates are specified: 50-50 each.
I carried out some extra tests, just in case the queue bandwidth definition was not the one I expected:
Result: bandwidth divided 50-50.
Impossible to establish these queues, probably because the parameter is over 1000.
Impossible to establish these queues. We set the link bandwidth to 10 Mbps and implement these queues:
Bandwidth divided equally again, around 5 Mbps for each flow.
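(For reference, with --link tc the 10 Mbps limit can be set per link in the topology file; standard TCLink usage, shown here on the assumed s1-s2 trunk:)

# inside MyTopo.build(), with --link tc (TCLink)
self.addLink( s1, s2, bw=10 )  # cap the inter-switch link at 10 Mbps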
If I put only one queue (and the corresponding flow rule), the result is the same: the bandwidth is divided equally.
Logs:
These logs appear whether slicing is enabled or not.
Do you have any idea about this?
Many thanks,
Cedric