vtolstov closed this issue 10 years ago.
I'm not using traditional LLDP messages. Because I'm using my own format, I should use a different EtherType. Using traditional LLDP can cause problems on certain OpenFlow switches.
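Concretely, the distinction is made on the EtherType field of the frame. Here is a minimal Go sketch of that check; the constant names and helper are illustrative only, not ogo's actual API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// EtherTypes relevant to discovery. 0x88cc is standard LLDP; 0xa0f1 is a
// private value used so discovery frames are not interpreted as real LLDP
// by switches that treat 0x88cc specially.
const (
	EthTypeLLDP      uint16 = 0x88cc
	EthTypeDiscovery uint16 = 0xa0f1 // custom topology-discovery frames
)

// isDiscovery reports whether a raw Ethernet frame carries the custom
// discovery payload, based only on the EtherType field (bytes 12-13).
func isDiscovery(frame []byte) bool {
	if len(frame) < 14 {
		return false
	}
	return binary.BigEndian.Uint16(frame[12:14]) == EthTypeDiscovery
}

func main() {
	// A fake 14-byte Ethernet header carrying the custom EtherType.
	frame := make([]byte, 14)
	binary.BigEndian.PutUint16(frame[12:14], EthTypeDiscovery)
	fmt.Println(isDiscovery(frame)) // true
}
```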
I will move the layer 2 application into an examples directory. On Mar 27, 2014 5:57 AM, "Vasiliy Tolstov" notifications@github.com wrote:
Why does core.go contain 0xa0f1 for LLDP messages? I can't find this protocol ID; LLDP has only 0x88cc. Why not use only that value?
Also, maybe move the l2 example app to an examples dir and create some more examples? For example, I'm interested in trying to create an l2 multiswitch with topology detection...
Okay.
I read some OpenStack docs, and they now recommend moving the OpenFlow controller to the compute node (not the network node, as before), explaining that this gives more fault tolerance and better performance. But in that case they need Neutron to coordinate the OpenFlow controllers. Can ogo scale to handle 100-200 OpenFlow switches and handle all the ARP traffic from them? And one last question: do you know how to create equal-cost multipath routing (multipath) in OpenFlow? As I understand it, I need to create flows with a small hard timeout and balance the traffic, but is it possible to do this without recreating flows?
Can ogo scale to handle 100-200 OpenFlow switches and handle all the ARP traffic from them?
Maybe... Last time I did performance testing with cbench, I was getting around 30,000 flows/sec. That was on a VM with 4 cores. I need to do some verification though; I think the numbers were higher.
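For the ARP part, the usual approach is to keep a learned IP-to-MAC table in the controller and answer ARP requests directly instead of flooding every one. A minimal sketch of such a table (hypothetical names, not ogo code):

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// arpCache maps learned IPv4 addresses to MAC addresses so the controller
// can answer ARP requests itself instead of flooding them to every port.
type arpCache struct {
	mu      sync.RWMutex
	entries map[string]net.HardwareAddr
}

func newARPCache() *arpCache {
	return &arpCache{entries: make(map[string]net.HardwareAddr)}
}

// Learn records the source IP/MAC pair seen in a packet-in.
func (c *arpCache) Learn(ip net.IP, mac net.HardwareAddr) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[ip.String()] = mac
}

// Lookup returns the MAC for an IP if it has been learned.
func (c *arpCache) Lookup(ip net.IP) (net.HardwareAddr, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	mac, ok := c.entries[ip.String()]
	return mac, ok
}

func main() {
	cache := newARPCache()
	mac, _ := net.ParseMAC("00:11:22:33:44:55")
	cache.Learn(net.ParseIP("10.0.0.1"), mac)
	if m, ok := cache.Lookup(net.ParseIP("10.0.0.1")); ok {
		fmt.Println("reply with", m) // a real handler would build an ARP reply here
	}
}
```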
And one last question: do you know how to create equal-cost multipath routing (multipath) in OpenFlow?
Not exactly, no, but that sounds roughly like the right idea. You should be careful not to break any TCP sessions. I've done something similar in the past, but it ended up being a mess of code. You might browse through FlowScale (https://github.com/InCNTRE/FlowScale). It does port-based load balancing, and you may be able to use its ideas to create a more fine-grained approach.
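Roughly, the splitting idea might look something like this. This is just a sketch of the bucketing logic under my own assumptions, not FlowScale's code and not ogo's API: pick an output port from a stable function of nw_dst, then install a flow matching that destination so later packets never hit the controller.

```go
package main

import (
	"fmt"
	"net"
)

// pickPort chooses an output port for a destination IP by splitting the
// address space with a mask-like rule: here, the low bits of the last
// octet select one of the candidate ports. The choice is stable, so a
// given nw_dst always maps to the same port and TCP sessions stay intact.
func pickPort(dst net.IP, ports []uint16) uint16 {
	v4 := dst.To4()
	if v4 == nil || len(ports) == 0 {
		return 0
	}
	return ports[int(v4[3])%len(ports)]
}

func main() {
	uplinks := []uint16{1, 2} // two uplink ports to balance across
	for _, d := range []string{"10.0.0.4", "10.0.0.5", "10.0.0.6"} {
		port := pickPort(net.ParseIP(d), uplinks)
		// In a controller you would now install a flow matching nw_dst=d
		// with an output action to `port`, so the split happens in hardware.
		fmt.Printf("nw_dst=%s -> output port %d\n", d, port)
	}
}
```

If the mapping is stable like this, you don't need short hard timeouts just to rebalance; flows only have to be recreated when you actually change the split.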
Thank you, but I don't know Java =(. Can you tell me the main idea of the OpenFlow rules I would need to create? As I understand it, FlowScale breaks nw_dst up by some mask and distributes the load over two or more ports?
Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru
Unfortunately, Java isn't my best language either. I've never actually looked at the FlowScale project in detail; I only have a rough idea of what it does.
I ran cbench on the packet-parse branch (again on a quad-core VM). The results were better than I expected.
ogo@Ubuntu-4-18-2013:~$ sudo cbench -c localhost -p 6633 -m 1000 -l 3 -s 16 -M 1000 -t
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 16 switches offset 1 :: 3 tests each; 1000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 1 "warmup" and last 0 "cooldown" loops
connection delay of 0ms per 1 switch(es)
debugging info is off
09:55:22.370 16 switches: flows/sec: 4844 5471 3952 5273 5906 5268 6399 5404 6744 5003 5385 4169 9669 5053 4156 3627 total = 86.003497 per ms
09:55:23.475 16 switches: flows/sec: 7509 8591 8082 6156 7472 9004 5549 5920 10274 5403 6140 5731 6694 6991 5137 6099 total = 110.751889 per ms
09:55:24.580 16 switches: flows/sec: 5845 5118 5185 3019 5571 11080 5364 6515 8542 7421 7119 6894 9849 5845 5395 5710 total = 104.471164 per ms
RESULT: 16 switches 2 tests min/max/avg/stdev = 104471.16/110751.89/107611.53/3140.36 responses/s
This is way below what other controllers can do, but it should be able to handle a small network. There do seem to be some memory issues; I'll be looking into them over the next few weeks.
Thanks!
Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru