noironetworks / apic-ml2-driver


About APIC with VLAN only devices #148

Closed peter-wangxu closed 6 years ago

peter-wangxu commented 6 years ago

Just curious how the APIC driver forwards traffic between a VM (on an OpenStack compute node) and VLAN-enabled devices.

I am a newbie to OpFlex; is it possible to try OpFlex without Cisco switch hardware?

amittbose commented 6 years ago

The APIC driver programs the ACI network fabric to "stitch" traffic between VMs (which may use VLAN or VXLAN encapsulation) and the "VLAN-enabled" devices (which use a configured VLAN).

To try out OpFlex, you can use a "mock server". See https://wiki.opendaylight.org/view/OpFlex:Building_and_Running#Running for more details.

peter-wangxu commented 6 years ago

Thanks for the suggestion. It's definitely worth a try.

Is there a way to get the configured VLAN for a Neutron network? I want to figure out a way to fetch it via the Neutron API/CLI, so that I can plug in my VLAN devices (not managed by APIC, but physically connected to the ACI fabric) accordingly.
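For example, something along these lines is what I'd like to end up with (a rough sketch using openstacksdk; the cloud name and network name are placeholders):

```python
# Rough sketch (placeholder names): read the provider attributes of a
# Neutron network with openstacksdk.
import openstack

# 'overcloud' must match a clouds.yaml entry with admin credentials,
# since the provider:* attributes are admin-only.
conn = openstack.connect(cloud='overcloud')

net = conn.network.find_network('manila-share-net')  # placeholder name
print(net.provider_network_type)     # e.g. 'flat', 'vlan' or 'opflex'
print(net.provider_segmentation_id)  # the VLAN ID for a vlan-type network
```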

peter-wangxu commented 6 years ago

More background regarding my question: We are trying to add Manila support for the OpFlex network type.

Currently we only support flat and VLAN network types. We pull the network information via the Manila network plugin (https://github.com/openstack/manila/blob/master/manila/network/neutron/neutron_network_plugin.py) and configure the storage backend based on that information.

Can you please provide more guidance here?

Thanks, Peter

amittbose commented 6 years ago

Sorry for the delayed response; I took some time off over the last few weeks.

To use your VLAN-enabled storage backend device with an OpFlex network, you'll need to use hierarchical port binding (HPB) in Neutron. When you create a port that represents your storage device on a Neutron network of type opflex, the APIC driver creates a new VLAN network segment (i.e. it allocates a new VLAN segmentation ID). It then configures APIC to associate traffic arriving from that backend device on that VLAN with the Neutron network on which the port was created.
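Schematically, that binding step follows the generic ML2 hierarchical-binding flow sketched below (a simplified illustration only, not the APIC driver's actual code; the physical network name is a placeholder):

```python
# Simplified illustration of hierarchical port binding in a generic ML2
# mechanism driver -- not the APIC driver's actual implementation.
from neutron_lib.plugins.ml2 import api


class HpbIllustrationDriver(api.MechanismDriver):

    def initialize(self):
        pass

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            if segment[api.NETWORK_TYPE] == 'opflex':
                # Allocate a dynamic VLAN segment below the opflex segment;
                # Neutron picks a free segmentation ID from the VLAN pool.
                dynamic = context.allocate_dynamic_segment(
                    {api.NETWORK_TYPE: 'vlan',
                     api.PHYSICAL_NETWORK: 'physnet-hpb'})  # placeholder
                # Hand the VLAN segment to the next (lower) binding level.
                context.continue_binding(segment[api.ID], [dynamic])
                return
```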

The OpFlex network in Neutron doesn't have a segmentation ID, so you cannot use the Neutron network's information to decide which VLAN to configure on the backend. You'll need to get the VLAN from the port-binding information (the bottom bound segment). I'm not sure whether there is a Neutron API or RPC method to retrieve these binding details, but they are definitely available to Neutron ML2 plugins.
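Inside an ML2 mechanism driver (or any code that has access to the PortContext), reading that VLAN would look roughly like this (a sketch, not code from this repository):

```python
# Sketch: read the dynamically allocated VLAN from the bottom bound
# segment of a port binding (ML2 PortContext).
from neutron_lib.plugins.ml2 import api


def bound_vlan(port_context):
    segment = port_context.bottom_bound_segment
    if segment and segment[api.NETWORK_TYPE] == 'vlan':
        return segment[api.SEGMENTATION_ID]
    return None  # port not bound yet, or bottom segment is not a VLAN
```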

More information on using hierarchical binding with OpFlex networks is available here: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/openstack/b_ACI_with_OpenStack_OpFlex_Deployment_Guide_for_Red_Hat/b_ACI_with_OpenStack_OpFlex_Deployment_Guide_for_Red_Hat_chapter_01.html#id_46534

peter-wangxu commented 6 years ago

Thanks for the detailed explanation.

Very helpful. Additionally, Manila has a network plugin that supports hierarchical port binding.
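If I'm reading the Manila code right, enabling it should mostly be a matter of pointing network_api_class at the bind-capable plugin, along the lines of this (unverified) snippet:

```ini
# manila.conf fragment (unverified sketch; check the option name and the
# plugin class against your Manila release)
[DEFAULT]
network_api_class = manila.network.neutron.neutron_network_plugin.NeutronBindNetworkPlugin
```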

The whole picture of the OpFlex integration is much clearer now.

peter-wangxu commented 6 years ago

@amittbose Do you know whether it is possible to enable HPB within GBP (or unified mode)? I'd appreciate any documentation on how to do this.

ghost commented 6 years ago

I'd be interested in understanding the implications of using unified mode, especially if the only traffic on ML2 is the NAS traffic for Manila. Would there be additional traffic overhead or configuration and management complexity? Even if unified mode worked as a means of implementing HPB on GBP, if the level of effort to implement and manage it is too high, our client won't want to proceed down the unified path.

amittbose commented 6 years ago

@peter-wangxu Yes, HPB is supported with unified mode (and unified mode is the only way to make HPB work with GBP). Setting it up in unified mode is very similar to the old ML2 plugin. Ifti (@irathore) should be able to help you with documentation.

irathore commented 6 years ago

Actually it is pretty simple. We create a physdom with the VLAN pool and define it in aimctl.conf (in addition to the upstream Neutron config, which uses multiple type and mechanism drivers; please make sure apic_aim is the last mechanism driver listed).

In the example below the physdom name is hpb-test, and one host plus two LBaaS agents (dpdk-comp03, lb-dmz and lb-dmz2) are configured for HPB. We need the apic_switch sections so we know which switch ports to create the VLAN bindings on.

```ini
[apic_switch:101]
dpdk-comp03 = 1/5

[apic_switch:102]
lb-dmz = 1/27
lb-dmz2 = 1/27

[apic_physdom:hpb-test]
```
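On the Neutron side, the upstream config mentioned above would look roughly like this (a hypothetical ml2_conf.ini fragment; the physical network name and VLAN range are placeholders that should match the physdom's VLAN pool):

```ini
# Hypothetical ml2_conf.ini fragment; adjust names and ranges to your setup.
[ml2]
type_drivers = opflex,vlan
tenant_network_types = opflex
# apic_aim must be the last mechanism driver listed
mechanism_drivers = openvswitch,apic_aim

[ml2_type_vlan]
# placeholder physical network and VLAN range
network_vlan_ranges = physnet-hpb:1000:1100
```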