Closed reitblatt closed 10 years ago
Let's just report numbers for reactive Pyretic (or interpreter if it comes to that)? I'm guessing @joshreich won't have cycles to look into this before SIGCOMM as he's probably busy with his own submissions.
I've assigned this to @ngsrinivas for now as he's gained some expertise with the compilation system; @nkatta might also be able to help debug. As Nate guessed, I am pretty short on cycles at the moment (I'm actually at Georgia Tech working on the submission right now), but I might have time on Sunday to look into this further.
FWIW, I can't reproduce this; I see a different bug instead. When I try running Mark's example, Pyretic installs a single send-to-controller rule on each switch.
EDIT: tested on pyretic-dev/master, commit b9c71f834fc233142e0b36cde42a31628a98028a.
Here's my setup:
Mark's code goes in: pyretic-dev/pyretic/modules/bug.py
Start Pyretic: $ pyretic.py -m p0 pyretic.modules.bug
Start mininet: $ mininet.sh --topo=linear,2
In mininet:
pingall
*** Ping: testing ping reachability
h1 -> h2
h2 -> h1
*** Results: 0% dropped (2/2 received)
s1 ovs-ofctl dump-flows tcp:127.0.0.1:6634
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=70.676s, table=0, n_packets=14, n_bytes=979, idle_age=13, actions=CONTROLLER:65535
@cschlesi, @reitblatt could you report which respective repo (public/dev), branch, and commit number you ran this test on?
Public, master, a3581208f8ec3f19e51ff3ecf5f1ee4bd69e2425. Ran it on this topo:
from mininet.topo import Topo
class Triangle( Topo ):
def __init__( self ):
Topo.__init__( self )
h1 = self.addHost( 'h1' )
h2 = self.addHost( 'h2' )
s1 = self.addSwitch( 's1' )
s2 = self.addSwitch( 's2' )
s3 = self.addSwitch( 's3' )
self.addLink( h1, s1 )
self.addLink( h2, s3 )
self.addLink( s1, s3 )
self.addLink( s1, s2 )
self.addLink( s2, s3 )
topos = { 'tri': ( lambda: Triangle() ) }
@SiGe and I have been able to replicate Cole's error on the latest pyretic-dev master (commit b9c71f). @SiGe has also replicated Mark's original bug output on pyretic (public master). A couple of observations:
Mark's policy as written has misplaced parentheses. Instead of doing a parallel composition for each direction of traffic across the switches, the policy does a sequential composition followed by a parallel composition. What could work instead is this policy (after fixing the parentheses):
return ((match(srcmac=h1,dstmac=h2) >> ((match(switch=1) >> fwd(2))
+ (match(switch=2) >> fwd(2))))
+ (match(srcmac=h2,dstmac=h1) >> ((match(switch=1) >> fwd(1))
+ (match(switch=2) >> fwd(1)))))
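The parenthesization matters because of Python operator precedence: `+` binds more tightly than `>>`, so without explicit parentheses a policy shaped like `a >> b + c >> d` parses as `(a >> (b + c)) >> d`, not `(a >> b) + (c >> d)`. A minimal sketch with toy stand-in classes (not Pyretic's actual implementation) to illustrate:

```python
# Toy stand-ins for Pyretic policies, only to show how Python parses
# `>>` (sequential composition) and `+` (parallel composition).
class P:
    def __init__(self, name):
        self.name = name
    def __rshift__(self, other):          # a >> b
        return P("(%s >> %s)" % (self.name, other.name))
    def __add__(self, other):             # a + b
        return P("(%s + %s)" % (self.name, other.name))
    def __repr__(self):
        return self.name

a, b, c, d = P("a"), P("b"), P("c"), P("d")

# `+` has higher precedence than `>>`, so this is NOT (a >> b) + (c >> d):
print(a >> b + c >> d)        # ((a >> (b + c)) >> d)
# Explicit parentheses recover the intended parallel composition:
print((a >> b) + (c >> d))    # ((a >> b) + (c >> d))
```

This is why the unparenthesized policy compiled to something quite different from two parallel forwarding branches.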
This fixes the rules on the switches, but oddly we still can't get pings to work. @SiGe and I are looking into it -- the packets seem to be hitting rules that drop them:
On s1:
stats_reply (xid=0x9c381275): flags=none type=1(flow)
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59999, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,in_port=2,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=IN_PORT
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59996, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,in_port=1,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=IN_PORT
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59993, n_packets=5, n_bytes=314, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,actions=
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59998, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=output:2
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59997, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59995, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=output:1
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=59994, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=60000, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_type=0x88cc,actions=CONTROLLER:65535
cookie=0, duration_sec=21s, duration_nsec=320000000s, table_id=0, priority=0, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,actions=CONTROLLER:65535
----
On s2:
stats_reply (xid=0xe6cf0664): flags=none type=1(flow)
cookie=0, duration_sec=40s, duration_nsec=407000000s, table_id=0, priority=59999, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,in_port=2,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=IN_PORT
cookie=0, duration_sec=40s, duration_nsec=407000000s, table_id=0, priority=59996, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,in_port=1,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=IN_PORT
cookie=0, duration_sec=40s, duration_nsec=364000000s, table_id=0, priority=59993, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,actions=
cookie=0, duration_sec=40s, duration_nsec=407000000s, table_id=0, priority=59998, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=output:2
cookie=0, duration_sec=40s, duration_nsec=407000000s, table_id=0, priority=59997, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,actions=
cookie=0, duration_sec=40s, duration_nsec=404000000s, table_id=0, priority=59995, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=output:1
cookie=0, duration_sec=40s, duration_nsec=364000000s, table_id=0, priority=59994, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_src=00:00:00:00:00:02,dl_dst=00:00:00:00:00:01,actions=
cookie=0, duration_sec=40s, duration_nsec=407000000s, table_id=0, priority=60000, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,dl_vlan=0xffff,dl_vlan_pcp=0x00,dl_type=0x88cc,actions=CONTROLLER:65535
cookie=0, duration_sec=40s, duration_nsec=407000000s, table_id=0, priority=0, n_packets=0, n_bytes=0, idle_timeout=0,hard_timeout=0,actions=CONTROLLER:65535
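A quick way to spot the drop rules in dumps like these is to look for entries whose `actions=` field is empty. A small hypothetical helper (plain string matching, not a real OpenFlow parser):

```python
# Hypothetical helper: flag flow-table entries whose actions field is
# empty (i.e., drop rules) in a stats_reply-style dump.
def drop_rules(dump):
    drops = []
    for line in dump.splitlines():
        line = line.strip()
        if not line.startswith("cookie="):
            continue
        # actions= is the last field in these dumps; empty value means drop.
        actions = line.split("actions=", 1)[1]
        if actions.strip() == "":
            drops.append(line)
    return drops

dump = """\
cookie=0, priority=59998, dl_src=00:00:00:00:00:01, actions=output:2
cookie=0, priority=59997, dl_src=00:00:00:00:00:01, actions=
cookie=0, priority=0, actions=CONTROLLER:65535
"""
for rule in drop_rules(dump):
    print(rule)    # prints only the priority=59997 (drop) entry
```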
So, @SiGe and I have a fix. I don't think pyretic master has any bugs, but there were two issues with the way the policy was specified:
return ((match(srcip=h1,dstip=h2) >> ((match(switch=1) >> fwd(2))
+ (match(switch=3) >> fwd(1))))
+ (match(srcip=h2,dstip=h1) >> ((match(switch=3) >> fwd(2))
+ (match(switch=1) >> fwd(1)))))
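As a sanity check of the corrected policy, here is a small toy model (plain Python with hypothetical helper names, not Pyretic's real classes) of `match`, `fwd`, `>>`, and `+` as functions from a packet to a list of output packets; on this model the corrected policy sends h1 -> h2 traffic out port 2 on switch 1:

```python
# Toy model: a policy is a function from a packet (dict) to a list of
# result packets. Hypothetical names; real Pyretic semantics differ.
def match(**fields):
    def pol(pkt):
        return [pkt] if all(pkt.get(k) == v for k, v in fields.items()) else []
    return pol

def fwd(port):
    def pol(pkt):
        return [dict(pkt, outport=port)]
    return pol

def seq(p, q):          # models `p >> q` (sequential composition)
    def pol(pkt):
        return [out for mid in p(pkt) for out in q(mid)]
    return pol

def par(p, q):          # models `p + q` (parallel composition)
    def pol(pkt):
        return p(pkt) + q(pkt)
    return pol

h1, h2 = "10.0.0.1", "10.0.0.2"

# The corrected policy above, written with explicit seq/par.
policy = par(
    seq(match(srcip=h1, dstip=h2),
        par(seq(match(switch=1), fwd(2)),
            seq(match(switch=3), fwd(1)))),
    seq(match(srcip=h2, dstip=h1),
        par(seq(match(switch=3), fwd(2)),
            seq(match(switch=1), fwd(1)))))

pkt = {"srcip": h1, "dstip": h2, "switch": 1}
print([p["outport"] for p in policy(pkt)])   # [2]
```

On this toy model the same policy also forwards h1 -> h2 out port 1 on switch 3 and h2 -> h1 out port 2 on switch 3, matching the intended triangle routing.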
Now pings between h1 and h2 work. This is on pyretic public (master).
Awesome job guys
It looks like pyretic-dev master does have some issues; we'll get back on this shortly (going to SDN class presentations now).
Please invoice @princedpw 's grant for our platinum support plan.
bravo!
Thanks guys. The discrepancy between the policy and the topo was a reporting error. I simplified the policy for the report, but forgot to give you the simplified topo. My bug was the parentheses.
Most universities in the Northeast are on my grant. No need for a special charge.
Technically, I'm located northeast of Princeton. May I start abusing your grant too? ;-)
Of course! I'm supporting several Inuit near the arctic circle.
Pyretic is not installing the expected rules (and is installing unexpected rules) for this policy. In particular, it's not installing the "h2 -> h1" rules:
Rules installed for the 1.0 user switch:
S1:
S2: