Closed: ninnux closed this issue 12 months ago.
Hi Ninnux.
Please also share your playbook.
~I will move this to a discussion, since this is more a troubleshooting exercise than a bug report.~ We don't have discussions enabled on ansible-cvp repo :)
@ninnux from your provided outputs I suspect `"device_filter": ["LABBO11"]`
to be the issue, since LABB011 is not part of the hostnames A1, A2, A3 or A4. Try removing the device_filter
from your playbook.
@ClausHolbechArista I didn't realize that the device_filter variable in the playbook filtered on hostname; I understood device_filter to select the root container name of the inventory. I changed my devices' hostnames and it works now. Thank you a lot, you solved my problem.
device_filter is supposed to limit which devices you send to CloudVision, and if no devices match the filter, it leads to this bad situation. We have improved this behavior in AVD 4.0.
The root container name is set with `container_root: 'DC1'`
or similar.
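For reference, a minimal sketch of how these two variables might appear together in a playbook. This is illustrative only: the role and variable names follow the arista.avd eos_config_deploy_cvp role, but the play layout, container name and filter value are assumptions for this lab scenario.

```yaml
# Illustrative fragment only -- values are assumptions for this lab.
- name: Deploy configuration to CloudVision
  hosts: cv_server
  gather_facts: false
  tasks:
    - name: Configure devices on cv_server
      ansible.builtin.import_role:
        name: arista.avd.eos_config_deploy_cvp
      vars:
        container_root: 'DC1'   # root container name created in CVP
        device_filter: ['A']    # intended to match hostnames A1..A4 (assumption); omit to send all devices
```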
In any case, thank you.
Issue Summary
I have an AVD repository in a working production environment, and I'm trying to build a lab environment with four Arista 7020SR switches, CVP and AVD, but in the lab it fails. I used the same code and the same package versions as in the working production environment: CVP 2022.3.0, arista.avd 3.8.1, arista.cvp 3.5.1, arista.eos 6.0.0, Python cvprac 1.3.1.

I bootstrapped the lab switches with ZTP, and they are recognized by CVP; they sit in the "Undefined" container. When I try to deploy my network with AVD, it fails on "arista.avd.eos_config_deploy_cvp : Configure devices on cv_server". Adding verbosity to ansible-playbook, I get the error message reported in the subject of this issue. I found the code where it fails, and the problem seems to be that "devices" is null instead of a dict containing the nodes and their related containers and configlets.
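To illustrate the failure mode described above (this is not AVD's actual code): if a filter matches none of the inventory hostnames, the resulting devices structure ends up empty, and downstream code that expects a populated dict has nothing to work with. A minimal sketch, assuming a simple substring match like the one device_filter appears to perform; all names here are hypothetical:

```python
# Hypothetical sketch of a device_filter step; names and structure are
# illustrative, not AVD's real implementation.
def filter_devices(hostnames, device_filter):
    """Keep only hostnames that contain one of the filter strings."""
    return {
        host: {"parent_container": "DC1"}  # placeholder per-device data
        for host in hostnames
        if any(f in host for f in device_filter)
    }

hosts = ["A1", "A2", "A3", "A4"]

print(filter_devices(hosts, ["A"]))        # all four devices match
print(filter_devices(hosts, ["LABB011"]))  # {} -> nothing to deploy downstream
```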
Could you help me?
Which component(s) of AVD impacted
other
How do you run AVD?
Ansible CLI with AVD Runner
Input variables
Steps to reproduce
Relevant log output
Code of Conduct