Closed TaLoN1x closed 2 months ago
I suppose the problem comes from the fact that I have clusters in two sites. I defined ClusterSite with a filter, but not HostSite, as there is no specific pattern for that — doesn't it inherit the Site information from the Cluster?
Devices are indexed in the local inventory by their site and name attributes, so if the same device (same asset tag) appears with different sites, it becomes two different devices in the local inventory. That is the reason for the error above. I guess this happened when the site for the device was changed and the new device with the new site was not matched to the existing one.
The current implementation doesn't assign the same site to all devices that are part of the same cluster, because a cluster can be made of devices from different sites. This could obviously be changed in code, but I assume a cluster can span more than one site?
Right, the question is whether I can manage it manually within NetBox instead of specifying it in the ssot — then the sync starts to fail :/
The idea is that objects managed by netbox-ssot shouldn't be edited by the user, because they mirror the state on the external APIs. Otherwise, on the next run the data would be rewritten to match the external APIs.
If I understand correctly, your problem is with the site attribute of a device/host? Why not use source.hostSiteRelations in the YAML configuration to match hosts with sites?
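For reference, a minimal sketch of what such a mapping might look like. The `ovirt-test` source name matches the logs in this thread, but the surrounding keys and the `"<host-pattern> = <site>"` string format are assumptions on my part — check the netbox-ssot configuration documentation for the exact syntax:

```yaml
source:
  - name: ovirt-test            # source name, as seen in the error logs above
    type: ovirt
    hostSiteRelations:
      # assumed format: regex on the host name, mapped to a NetBox site name
      - "dc1-.* = Site-DC1"
      - "dc2-.* = Site-DC2"
```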
Ok, I managed to resolve that specific issue by creating a filter. Then I have another nut to crack:
I have two identical machines in different clusters and different sites, but with the same name :). It is de facto a synced copy. There is no way for me to define the Site or Tenant based on the VM name.
ERROR (ovirt-test): failed to sync oVirt vm: unexpected status code: 400: {"all":["Constraint “virtualization_virtualmachine_unique_name_cluster_tenant” is violated."]}
P.S. Errors could actually say which object they are about. It is a bit hard to drill down to; I will try to make a pull request later on that.
Yes, that's the problem: VMs are only indexed by their names, so at the moment there can't be two VMs with the same name. This is a bug and I will fix it (the same name can occur within different clusters).
Also agreed on more verbose log output — the name of the object which triggered the error should be provided (this should be a trivial fix)...
Another bug is the vCPU count. I have VMs with 2 virtual sockets and 1 core per socket; they are created with just 1 vCPU, but it should be 2.
Thanks for reporting that — could you open a separate issue?
Hello,
On repeated runs, the oVirt source integration somehow wants to create cluster devices again. I think it should edit the existing ones?
The Error: ERROR (ovirt-test): failed to add oVirt host my-ovird-node01.local with error: unexpected status code: 400: {"asset_tag":["device with this asset tag already exists."]}