Closed: ryanmerolle closed this issue 2 years ago
I was thinking about how to approach this in a way that does not break existing deployments and works whether someone passes a separate inventory per fabric, leverages a source of truth like NetBox, or passes a single multi-fabric Ansible inventory.
Using group_by would allow the role to avoid knowing which inventory group the fabric_name var is tied to: you could build groups dynamically from those values. To limit the number of hosts in these groups, you could restrict membership to hosts where type == "spine", since every fabric needs at least one spine.

@ryanmerolle - This is supported too; you likely just need to update the required hierarchy naming:
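A minimal sketch of that group_by idea, assuming each host defines fabric_name and type vars (the fabric_ group-name prefix is illustrative):

```yaml
# Hypothetical sketch: build one group per fabric at runtime with group_by,
# restricted to spines so each generated group stays small.
- name: Build per-fabric groups
  hosts: all
  gather_facts: false
  tasks:
    - name: Group spine switches by their fabric_name var
      ansible.builtin.group_by:
        key: "fabric_{{ fabric_name }}"
      when: type == "spine"
```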
-> fabric (all interconnected devices; there should be only one name)
----> dc_name (think of this as the physical location)
--------> pod_name
see documentation: https://avd.sh/en/latest/roles/eos_designs/doc/l3ls-evpn/fabric-topology/
See sample output of the fabric documentation for a multi-DC topology: https://github.com/aristanetworks/ansible-avd/blob/devel/ansible_collections/arista/avd/molecule/eos_designs-twodc-5stage-clos/documentation/fabric/TWODC_5STAGE_CLOS-documentation.md
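As an illustration, the hierarchy above might be expressed in a YAML inventory like this (all group and host names are hypothetical):

```yaml
# Hypothetical inventory sketch of the required hierarchy:
# fabric -> dc_name -> pod_name. Names here are illustrative only.
all:
  children:
    FABRIC1:                # the fabric: all interconnected devices, one name
      children:
        DC1:                # dc_name: the physical location
          children:
            DC1_POD1:       # pod_name
              hosts:
                dc1-pod1-spine1:
        DC2:
          children:
            DC2_POD1:
              hosts:
                dc2-pod1-spine1:
```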
All I did was fork https://github.com/arista-netdevops-community/ipspace-webinar-september15-2020 and combine the inventories, given that NetBox will feed all of this data to me right off the bat.
It sounds like the repo I forked, given the current version of AVD, could have used one fabric_name with different dc_names.
I read that documentation, and though it did not explicitly say dc_name is only for 5-stage CLOS, I for some reason understood it as such.
Here are my notes:
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.
This is likely still valid to investigate
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.
Should this be reopened?
Issue Type
Summary
Though the eos_designs module will take multiple devices/fabrics as input and emit the relevant files for each switch/device, it will only emit one fabric document, because the documentation task is run with run_once.
I suggest gathering the unique fabric names in the play and looping through those fabric names in a task. You might be able to run against one member device per fabric so you could inherit all the required group_vars and host_vars if needed. That way you could introduce a standard way of defining a fabric, for example by looping through all files in a fabric_vars directory (similar to your global_vars approach).
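A rough sketch of that suggestion, assuming each host defines a fabric_name var (the fabric_names fact and the debug task are illustrative stand-ins for the real documentation rendering):

```yaml
# Hypothetical sketch: derive the unique fabric names present in the play,
# then loop over them so one document can be rendered per fabric.
- name: Render one fabric document per fabric
  hosts: all
  gather_facts: false
  run_once: true
  tasks:
    - name: Collect the unique fabric_name values across the play
      ansible.builtin.set_fact:
        fabric_names: >-
          {{ ansible_play_hosts_all
             | map('extract', hostvars, 'fabric_name')
             | select('defined')
             | unique | list }}

    - name: Loop over fabrics (a real task would template the documentation)
      ansible.builtin.debug:
        msg: "Would render fabric documentation for {{ item }}"
      loop: "{{ fabric_names }}"
```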
Component Name
arista.avd, eos_designs
Steps to reproduce
Example play: https://gitlab.com/ryanmerolle/avd-demo/-/jobs/1274317183
Expected results
Actual results