CPqD / ofsoftswitch13

OpenFlow 1.3 switch.
http://cpqd.github.com/ofsoftswitch13

Troubles using queues with CPqD switch #224

Open cedric-1 opened 8 years ago

cedric-1 commented 8 years ago

Hi,

I'm working with the CPqD switch on Mininet and I would like to add some QoS to my network, but I'm having problems with the queueing mechanism: my queues divide the available bandwidth equally, without taking into account the [BW] argument I pass to them. I guess I missed something basic in the setup, since I have seen examples of CPqD queues working on the internet, but I can't figure out what I did wrong. Here are the details:

My settings:

Ubuntu version: 14.04
Mininet version: 2.2.1, then 2.3.0d1 (both tested)
dpctl version: 1.3.0

I installed the CPqD switch directly with:

> mininet/util/install.sh -3f

(This command didn't raise any error, but to be sure, I also installed it using a second method on another VM: https://wiki.onosproject.org/display/ONOS/CPqD+1.3+switch+on+recent+Ubuntu+versions)

I removed the "--no-slicing" option in ~/mininet/mininet/node.py and ran:

> cd ~/mininet
> sudo make install
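As an aside, editing node.py may not be strictly necessary: Mininet's UserSwitch takes a dpopts keyword whose default value is '--no-slicing', so the flag can probably be overridden per run instead. A minimal sketch, assuming that keyword is still present in your Mininet version:

#!/usr/bin/python
# Sketch: keep the CPqD user switch's slicing (queue) support enabled
# without patching mininet/node.py, by overriding the default dpopts.
# Assumes UserSwitch still accepts the 'dpopts' keyword argument.

from functools import partial

from mininet.net import Mininet
from mininet.node import UserSwitch
from mininet.topo import SingleSwitchTopo

# An empty dpopts string drops the default '--no-slicing' flag
SlicingUserSwitch = partial( UserSwitch, dpopts='' )

net = Mininet( topo=SingleSwitchTopo( 2 ), switch=SlicingUserSwitch )
net.start()
net.pingAll()
net.stop()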

This is the topology I use:

""" 
Topo launched by mn cmd

    h1 ---            
           \   10M   
    h2 ---- s1 --- s2 ---- h4
           /         
    h3 ---
"""
#!/usr/bin/python

from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import Controller, RemoteController
from mininet.cli import CLI
from mininet.log import setLogLevel, info
from mininet.util import dumpNodeConnections

class MyTopo( Topo ):
    "Simple topology example."

    def __init__( self ):
        "Create custom topo."

        # Initialize topology
        Topo.__init__( self )

        # Add hosts and switches
        h1 = self.addHost( 'h1', ip='10.0.0.1', mac='00:00:00:00:00:01' )
        h2 = self.addHost( 'h2', ip='10.0.0.2', mac='00:00:00:00:00:02' )
        h3 = self.addHost( 'h3', ip='10.0.0.3', mac='00:00:00:00:00:03' )
        h4 = self.addHost( 'h4', ip='10.0.0.4', mac='00:00:00:00:00:04' )
        s1 = self.addSwitch( 's1' )
        s2 = self.addSwitch( 's2' )

        # Add links
        self.addLink( h1, s1, use_htb=True, max_queue_size = 1 )
        self.addLink( h2, s1, use_htb=True, max_queue_size = 1 )
        self.addLink( h3, s1, use_htb=True, max_queue_size = 1 )
        self.addLink( s1, s2, bw = 1, use_htb=True, max_queue_size = None)
        self.addLink( s2, h4, use_htb=True, max_queue_size = 1 )

topos = { 'mytopo': ( lambda: MyTopo() ) }

I launch it with the --link tc option:

> sudo mn --custom ~/mininet/custom/topo.py --topo mytopo --switch user,protocols=OpenFlow13, --link tc
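For reference, the same setup can also be started from the Mininet API instead of the mn command line; a minimal sketch, assuming the topology file above is saved as topo.py in the current directory and that the default controller is used:

#!/usr/bin/python
# Sketch: programmatic equivalent of the mn command line above.
# Assumes the custom topology above is importable as topo.MyTopo.

from mininet.net import Mininet
from mininet.node import UserSwitch
from mininet.link import TCLink
from mininet.cli import CLI
from mininet.log import setLogLevel

from topo import MyTopo

if __name__ == '__main__':
    setLogLevel( 'info' )
    net = Mininet( topo=MyTopo(),
                   switch=UserSwitch,   # CPqD user-space switch
                   link=TCLink )        # honour bw / max_queue_size
    net.start()
    CLI( net )
    net.stop()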

I also installed some basic rules on s1 and s2 so that h1, h2 and h3 can ping h4:

> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=1 in_port=1 apply:output=4;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=1 in_port=2 apply:output=4;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=1 in_port=3 apply:output=4;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=1 in_port=4 apply:output=1,output=2,output=3;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=2 in_port=2,eth_type=0x800,ip_dst=10.0.0.1 apply:output=1;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=2 in_port=2,eth_type=0x800,ip_dst=10.0.0.2 apply:output=2;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=2 in_port=2,eth_type=0x800,ip_dst=10.0.0.3 apply:output=3;
> dpctl unix:/tmp/s2 flow-mod cmd=add,table=0,prio=1 in_port=1 apply:output=2;
> dpctl unix:/tmp/s2 flow-mod cmd=add,table=0,prio=1 in_port=2 apply:output=1

Here ping works fine.
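To avoid retyping these rules after every restart, they can also be pushed from a small script; a sketch using Python's subprocess, assuming dpctl is on the PATH and the switches expose the unix:/tmp/s1 and unix:/tmp/s2 sockets as above:

#!/usr/bin/python
# Sketch: install the forwarding rules above with dpctl from a script.
# Assumes dpctl is on the PATH and the datapaths listen on the
# unix:/tmp/s1 and unix:/tmp/s2 management sockets.

import subprocess

FLOWS = [
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=1 in_port=1 apply:output=4' ),
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=1 in_port=2 apply:output=4' ),
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=1 in_port=3 apply:output=4' ),
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=1 in_port=4 apply:output=1,output=2,output=3' ),
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=2 in_port=2,eth_type=0x800,ip_dst=10.0.0.1 apply:output=1' ),
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=2 in_port=2,eth_type=0x800,ip_dst=10.0.0.2 apply:output=2' ),
    ( 'unix:/tmp/s1', 'cmd=add,table=0,prio=2 in_port=2,eth_type=0x800,ip_dst=10.0.0.3 apply:output=3' ),
    ( 'unix:/tmp/s2', 'cmd=add,table=0,prio=1 in_port=1 apply:output=2' ),
    ( 'unix:/tmp/s2', 'cmd=add,table=0,prio=1 in_port=2 apply:output=1' ),
]

for socket, rule in FLOWS:
    # dpctl takes the match fields and the action list as separate arguments
    subprocess.check_call( [ 'dpctl', socket, 'flow-mod' ] + rule.split() )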

I ran some iperf tests between h1 and h4, and between h2 and h4, to check the system behaviour:

h1> iperf -c 10.0.0.4 -u -M 1000 -t 30

Result: h1 -> h4: 996317 bps

NB: iperf doesn't fully work on my system: UDP and TCP checksums are not valid, which probably causes the packets to be dropped by the server. So I don't launch any iperf server on h4, and I measure the rate of the UDP flows using Wireshark > Statistics > Conversations > UDP. This works for UDP, which doesn't require any ACK, but not for TCP.
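If a scriptable alternative to the Wireshark measurement is useful, the receive rate can also be estimated from the interface byte counters on h4; a minimal sketch, assuming the receiving interface is named h4-eth0 and the script is run inside h4 (for example with 'h4 python measure.py' from the Mininet CLI):

#!/usr/bin/python
# Sketch: estimate the received rate on h4 by sampling the kernel's
# interface byte counter, as an alternative to Wireshark when the iperf
# server side is unusable. Assumes the interface is named 'h4-eth0'.

import time

IFACE = 'h4-eth0'      # assumed interface name inside h4
INTERVAL = 30.0        # seconds, matching the iperf -t 30 runs

def rx_bytes( iface ):
    "Read the receive byte counter for iface."
    with open( '/sys/class/net/%s/statistics/rx_bytes' % iface ) as f:
        return int( f.read() )

start = rx_bytes( IFACE )
time.sleep( INTERVAL )
end = rx_bytes( IFACE )

print( '%s: %.0f bps received' % ( IFACE, ( end - start ) * 8 / INTERVAL ) )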

h1> iperf -c 10.0.0.4 -u -M 1000 -t 30
h2> iperf -c 10.0.0.4 -u -M 1000 -t 30

Result: the bandwidth is split randomly between the two flows.

Then I added two queues and the corresponding rules:

> dpctl unix:/tmp/s1 queue-mod 4 1 200;
> dpctl unix:/tmp/s1 queue-mod 4 2 800;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=3 in_port=1 apply:queue=1,output=4;
> dpctl unix:/tmp/s1 flow-mod cmd=add,table=0,prio=3 in_port=2 apply:queue=2,output=4

I ran some checks:

> dpctl unix:/tmp/s1 queue-get-config 4

If I run this command while more than one queue is configured, I get an error:

Jun 16 14:15:42|00001|ofl_str|WARN|Received queue prop has invalid length.
dpctl: Error unpacking reply.

The error only appears when more than one queue is set; otherwise I get, for example:

> dpctl unix:/tmp/s1 queue-get-config 4
q_cnf_repl{port="4" queues=[{q="1", props=[minrate{min rate="200"}]}]}
> dpctl unix:/tmp/s1 stats-queue 4 1
stat_repl{type="queue", flags="0x0", stats=[{port="4", q="1", tx_bytes="0", tx_pkt="0", tx_err="0"}]}
> dpctl unix:/tmp/s1 stats-queue 4 2
stat_repl{type="queue", flags="0x0", stats=[{port="4", q="2", tx_bytes="0", tx_pkt="0", tx_err="0"}]}

If I ping h4 from h1 and h2, packets pass through the queues:

stat_repl{type="queue", flags="0x0", stats=[{port="4", q="1", tx_bytes="12927", tx_pkt="73", tx_err="0"}]}
stat_repl{type="queue", flags="0x0", stats=[{port="4", q="2", tx_bytes="17949", tx_pkt="91", tx_err="0"}]}
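The per-queue counters can also be used to cross-check the Wireshark numbers; a sketch that samples tx_bytes twice and derives each queue's rate, assuming dpctl is on the PATH and the reply format shown above:

#!/usr/bin/python
# Sketch: derive per-queue throughput on s1 port 4 from the tx_bytes
# counters reported by 'dpctl stats-queue'. Assumes dpctl is on the PATH
# and that the reply contains a tx_bytes="..." field as shown above.

import re
import subprocess
import time

SOCKET = 'unix:/tmp/s1'
PORT = '4'
QUEUES = ( '1', '2' )
INTERVAL = 10.0        # seconds between the two samples

def tx_bytes( queue ):
    "Return the tx_bytes counter of one queue on PORT."
    out = subprocess.check_output( [ 'dpctl', SOCKET, 'stats-queue', PORT, queue ] )
    return int( re.search( r'tx_bytes="(\d+)"', out.decode() ).group( 1 ) )

before = dict( ( q, tx_bytes( q ) ) for q in QUEUES )
time.sleep( INTERVAL )
after = dict( ( q, tx_bytes( q ) ) for q in QUEUES )

for q in QUEUES:
    rate = ( after[ q ] - before[ q ] ) * 8 / INTERVAL
    print( 'queue %s: %.0f bps' % ( q, rate ) )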

Now I run the same iperf tests:

h1> iperf -c 10.0.0.4 -u -M 1000 -t 30
h2> iperf -c 10.0.0.4 -u -M 1000 -t 30

Since queue bandwidths are expressed in tenths of a percent, I expected the following result: h1 -> h4: 200000 bps, h2 -> h4: 800000 bps

But the actual result is: h1 -> h4: 508954 bps, h2 -> h4: 508812 bps
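For reference, the arithmetic behind the expected figures (min-rate is given in tenths of a percent, and the s1-s2 link is configured at 1 Mbit/s via bw=1):

# Worked example: expected per-queue rates for the 1 Mbit/s s1-s2 link
# with min-rates of 200 and 800 (tenths of a percent of the link rate).

LINK_BPS = 1000000

for queue, min_rate in ( ( 1, 200 ), ( 2, 800 ) ):
    expected = LINK_BPS * min_rate / 1000.0
    print( 'queue %d: %d/1000 of the link -> %.0f bps' % ( queue, min_rate, expected ) )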

After trying many times, I concluded that enabling queueing divides the bandwidth equally between the two flows, 50-50, no matter what rates are specified.

I carried out some extra tests, in case the queue bandwidth is not defined the way I expected:

> dpctl unix:/tmp/s1 queue-mod 4 1 20
> dpctl unix:/tmp/s1 queue-mod 4 2 80

Result: bandwidth divided 50-50

> dpctl unix:/tmp/s1 queue-mod 4 1 200000
> dpctl unix:/tmp/s1 queue-mod 4 2 800000

Impossible to establish these queues, probably because the parameter is over 1000.

> dpctl unix:/tmp/s1 queue-mod 4 1 0.2
> dpctl unix:/tmp/s1 queue-mod 4 2 0.8

Impossible to establish these queues.

I then set the link bandwidth to 10 Mbit/s and created these queues:

> dpctl unix:/tmp/s1 queue-mod 4 1 2
> dpctl unix:/tmp/s1 queue-mod 4 2 8

Bandwidth divided equally again, around 5 Mbit/s for each flow.

If I set up only one queue (and the corresponding flow rule), the result is the same: the bandwidth is divided equally.

Logs:

> tail -f /tmp/s1-ofp.log
Jun 16 16:03:20|00299|vconn|WARN|tcp:127.0.0.1:6653: version negotiation failed: we support versions 0x04 to 0x04 inclusive but peer supports no later than version 0x01
Jun 16 16:03:20|00300|rconn|INFO|tcp:127.0.0.1:6653: connection failed (Protocol error)
Jun 16 16:03:20|00301|rconn|WARN|tcp:127.0.0.1:6653: connection dropped (Protocol error)
Jun 16 16:03:20|00302|rconn|INFO|tcp:127.0.0.1:6653: waiting 4 seconds before reconnect

These logs appear whether slicing is enabled or not.

Do you have any idea about this?

Many thanks,

Cedric

mrezaeek commented 7 years ago

Did you find your answer?

cedric-1 commented 7 years ago

No, I used another mechanism adapted to my use case. I didn't have enough time to investigate further.

Lizeth2989 commented 5 years ago

This error is related to the OpenFlow version you are using. To work with queues, you must use OpenFlow 1.3 or later.

version negotiation failed: we support versions 0x04 to 0x04 inclusive but peer supports no later than version 0x01

This command changes the version on an OVS switch:

> ovs-vsctl set bridge foo protocols=OpenFlow13
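For completeness, the same setting can be applied from inside a Mininet script when Open vSwitch is used; a minimal sketch, assuming OVSSwitch (note that in this thread the switch is the CPqD user switch, which already speaks 1.3, and the log shows the peer controller only offering version 0x01):

#!/usr/bin/python
# Sketch: equivalent of the ovs-vsctl command above from a Mininet script.
# Assumes Open vSwitch switches; OVSSwitch forwards 'protocols' to
# 'ovs-vsctl set bridge ... protocols=...'.

from functools import partial

from mininet.net import Mininet
from mininet.node import OVSSwitch
from mininet.topo import SingleSwitchTopo

OF13Switch = partial( OVSSwitch, protocols='OpenFlow13' )

net = Mininet( topo=SingleSwitchTopo( 2 ), switch=OF13Switch )
net.start()
# The bridges now negotiate OpenFlow 1.3 with the controller.
net.stop()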

Boyangjun commented 3 years ago

@cedric-1 I think the answer is that htb's ceil rate is 1 Gbit/s by default.