metal-stack / mini-lab

a small, virtual setup to locally run the metal-stack

Problem running example from readme #43

Closed GrigoriyMikhalkin closed 3 years ago

GrigoriyMikhalkin commented 3 years ago

OS: Ubuntu 20.04.1 LTS
Vagrant: 2.2.9
Docker: 19.03.8
Docker-Compose: 1.27.3, build 4092ae5d

I had a problem running the example from the README. When running `make`, I get the following error, even though the script finishes successfully:

deploy-partition | fatal: [leaf01]: UNREACHABLE! => changed=false 
deploy-partition |   msg: 'Failed to connect to the host via ssh: ssh: Could not resolve hostname leaf01: Name or service not known'
deploy-partition |   unreachable: true
deploy-partition | fatal: [leaf02]: UNREACHABLE! => changed=false 
deploy-partition |   msg: 'Failed to connect to the host via ssh: ssh: Could not resolve hostname leaf02: Name or service not known'
deploy-partition |   unreachable: true
deploy-partition | 
deploy-partition | PLAY RECAP *********************************************************************
deploy-partition | leaf01                     : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   
deploy-partition | leaf02                     : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
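
Since Ansible reports the leaf hostnames as unresolvable, a first sanity check is whether the leaf VMs are known to Vagrant and to the host resolver at all. The commands below are only an illustrative starting point, not part of the mini-lab tooling:

```bash
# The name usually will not resolve via DNS; the connection details are
# expected to come from the Vagrant-based Ansible inventory instead.
getent hosts leaf01 || echo "leaf01 does not resolve on the host"

# Are the leaf VMs known to Vagrant, and which SSH parameters would it use?
vagrant status
vagrant ssh-config leaf01
```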
Full log ``` vagrant up Bringing machine 'leaf02' up with 'libvirt' provider... Bringing machine 'leaf01' up with 'libvirt' provider... ==> leaf02: Checking if box 'CumulusCommunity/cumulus-vx' version '3.7.13' is up to date... ==> leaf01: Checking if box 'CumulusCommunity/cumulus-vx' version '3.7.13' is up to date... ==> leaf02: Creating image (snapshot of base box volume). ==> leaf01: Creating image (snapshot of base box volume). ==> leaf02: Creating domain with the following settings... ==> leaf01: Creating domain with the following settings... ==> leaf02: -- Name: metalleaf02 ==> leaf02: -- Domain type: kvm ==> leaf01: -- Name: metalleaf01 ==> leaf01: -- Domain type: kvm ==> leaf02: -- Cpus: 1 ==> leaf02: -- Feature: acpi ==> leaf01: -- Cpus: 1 ==> leaf01: -- Feature: acpi ==> leaf02: -- Feature: apic ==> leaf02: -- Feature: pae ==> leaf01: -- Feature: apic ==> leaf01: -- Feature: pae ==> leaf02: -- Memory: 512M ==> leaf01: -- Memory: 512M ==> leaf02: -- Management MAC: ==> leaf01: -- Management MAC: ==> leaf01: -- Loader: ==> leaf02: -- Loader: ==> leaf01: -- Nvram: ==> leaf01: -- Base box: CumulusCommunity/cumulus-vx ==> leaf02: -- Nvram: ==> leaf02: -- Base box: CumulusCommunity/cumulus-vx ==> leaf01: -- Storage pool: default ==> leaf01: -- Image: /var/lib/libvirt/images/metalleaf01.img (6G) ==> leaf02: -- Storage pool: default ==> leaf02: -- Image: /var/lib/libvirt/images/metalleaf02.img (6G) ==> leaf01: -- Volume Cache: default ==> leaf02: -- Volume Cache: default ==> leaf01: -- Kernel: ==> leaf02: -- Kernel: ==> leaf01: -- Initrd: ==> leaf02: -- Initrd: ==> leaf01: -- Graphics Type: vnc ==> leaf01: -- Graphics Port: -1 ==> leaf02: -- Graphics Type: vnc ==> leaf02: -- Graphics Port: -1 ==> leaf01: -- Graphics IP: 127.0.0.1 ==> leaf02: -- Graphics IP: 127.0.0.1 ==> leaf01: -- Graphics Password: Not defined ==> leaf02: -- Graphics Password: Not defined ==> leaf01: -- Video Type: cirrus ==> leaf02: -- Video Type: cirrus ==> leaf01: -- Video VRAM: 9216 ==> leaf02: -- Video VRAM: 9216 ==> leaf01: -- Sound Type: ==> leaf01: -- Keymap: de ==> leaf02: -- Sound Type: ==> leaf01: -- TPM Path: ==> leaf02: -- Keymap: de ==> leaf02: -- TPM Path: ==> leaf01: -- INPUT: type=mouse, bus=ps2 ==> leaf01: -- RNG device model: random ==> leaf02: -- INPUT: type=mouse, bus=ps2 ==> leaf02: -- RNG device model: random ==> leaf01: Creating shared folders metadata... ==> leaf02: Creating shared folders metadata... ==> leaf01: Starting domain. ==> leaf02: Starting domain. ==> leaf01: Waiting for domain to get an IP address... ==> leaf02: Waiting for domain to get an IP address... ==> leaf01: Waiting for SSH to become available... ==> leaf02: Waiting for SSH to become available... leaf01: leaf01: Vagrant insecure key detected. Vagrant will automatically replace leaf01: this with a newly generated keypair for better security. leaf02: leaf02: Vagrant insecure key detected. Vagrant will automatically replace leaf02: this with a newly generated keypair for better security. leaf02: leaf02: Inserting generated public key within guest... leaf01: leaf01: Inserting generated public key within guest... leaf02: Removing insecure key from the guest if it's present... leaf01: Removing insecure key from the guest if it's present... leaf01: Key inserted! Disconnecting and reconnecting using new SSH key... leaf02: Key inserted! Disconnecting and reconnecting using new SSH key... ==> leaf01: Setting hostname... ==> leaf02: Setting hostname... ==> leaf01: Running provisioner: shell... ==> leaf02: Running provisioner: shell... 
leaf01: Running: /tmp/vagrant-shell20201024-51781-7ivrnw.sh leaf02: Running: /tmp/vagrant-shell20201024-51781-e8hvaf.sh leaf01: ################################# leaf01: Running Switch Post Config (config_switch.sh) leaf01: ################################# leaf02: ################################# leaf02: Running Switch Post Config (config_switch.sh) leaf02: ################################# leaf01: ################################# leaf01: Finished leaf01: ################################# leaf02: ################################# leaf02: Finished leaf02: ################################# ==> leaf01: Running provisioner: shell... ==> leaf02: Running provisioner: shell... leaf01: Running: /tmp/vagrant-shell20201024-51781-otw21i.sh leaf02: Running: /tmp/vagrant-shell20201024-51781-h3jegd.sh leaf01: #### UDEV Rules (/etc/udev/rules.d/70-persistent-net.rules) #### leaf01: INFO: Adding UDEV Rule: Vagrant interface = eth0 leaf01: INFO: Adding UDEV Rule: 44:38:39:00:00:1a --> swp1 leaf01: INFO: Adding UDEV Rule: 44:38:39:00:00:18 --> swp2 leaf01: ACTION=="add", SUBSYSTEM=="net", ATTR{ifindex}=="2", NAME="eth0", SUBSYSTEMS=="pci" leaf01: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:1a", NAME="swp1", SUBSYSTEMS=="pci" leaf01: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:18", NAME="swp2", SUBSYSTEMS=="pci" ==> leaf01: Running provisioner: shell... leaf02: #### UDEV Rules (/etc/udev/rules.d/70-persistent-net.rules) #### leaf02: INFO: Adding UDEV Rule: Vagrant interface = eth0 leaf02: INFO: Adding UDEV Rule: 44:38:39:00:00:04 --> swp1 leaf02: INFO: Adding UDEV Rule: 44:38:39:00:00:19 --> swp2 leaf02: ACTION=="add", SUBSYSTEM=="net", ATTR{ifindex}=="2", NAME="eth0", SUBSYSTEMS=="pci" leaf02: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:04", NAME="swp1", SUBSYSTEMS=="pci" leaf02: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:19", NAME="swp2", SUBSYSTEMS=="pci" ==> leaf02: Running provisioner: shell... leaf01: Running: /tmp/vagrant-shell20201024-51781-7bdyq2.sh leaf02: Running: /tmp/vagrant-shell20201024-51781-32eax6.sh leaf01: ### RUNNING CUMULUS EXTRA CONFIG ### leaf01: INFO: Detected a 3.x Based Release (3.7.13) leaf01: ### Disabling default remap on Cumulus VX... leaf01: INFO: Detected Cumulus Linux v3.7.13 Release leaf01: ### Fixing ONIE DHCP to avoid Vagrant Interface ### leaf01: Note: Installing from ONIE will undo these changes. leaf02: ### RUNNING CUMULUS EXTRA CONFIG ### leaf02: INFO: Detected a 3.x Based Release (3.7.13) leaf02: ### Disabling default remap on Cumulus VX... leaf02: INFO: Detected Cumulus Linux v3.7.13 Release leaf02: ### Fixing ONIE DHCP to avoid Vagrant Interface ### leaf02: Note: Installing from ONIE will undo these changes. leaf01: ### Giving Vagrant User Ability to Run NCLU Commands ### leaf02: ### Giving Vagrant User Ability to Run NCLU Commands ### leaf01: Adding user `vagrant' to group `netedit' ... leaf02: Adding user `vagrant' to group `netedit' ... leaf01: Adding user vagrant to group netedit leaf02: Adding user vagrant to group netedit leaf02: Done. leaf01: Done. leaf02: Adding user `vagrant' to group `netshow' ... leaf02: Adding user vagrant to group netshow leaf01: Adding user `vagrant' to group `netshow' ... leaf01: Adding user vagrant to group netshow leaf01: Done. leaf01: ### Disabling ZTP service... leaf02: Done. leaf02: ### Disabling ZTP service... leaf01: Removed symlink /etc/systemd/system/multi-user.target.wants/ztp.service. 
leaf02: Removed symlink /etc/systemd/system/multi-user.target.wants/ztp.service. leaf01: ### Resetting ZTP to work next boot... leaf02: ### Resetting ZTP to work next boot... leaf01: Created symlink from /etc/systemd/system/multi-user.target.wants/ztp.service to /lib/systemd/system/ztp.service. leaf02: Created symlink from /etc/systemd/system/multi-user.target.wants/ztp.service to /lib/systemd/system/ztp.service. leaf01: ### DONE ### leaf02: ### DONE ### ./env.sh docker-compose up --remove-orphans --force-recreate control-plane partition && vagrant up machine01 machine02 Recreating deploy-partition ... done Recreating deploy-control-plane ... done Attaching to deploy-partition, deploy-control-plane deploy-control-plane | deploy-control-plane | PLAY [provide requirements.yaml] *********************************************** deploy-partition | deploy-partition | PLAY [provide requirements.yaml] *********************************************** deploy-control-plane | deploy-control-plane | TASK [download release vector] ************************************************* deploy-partition | deploy-partition | TASK [download release vector] ************************************************* deploy-partition | ok: [localhost] deploy-control-plane | ok: [localhost] deploy-partition | deploy-partition | TASK [write requirements.yaml from release vector] ***************************** deploy-control-plane | deploy-control-plane | TASK [write requirements.yaml from release vector] ***************************** deploy-control-plane | ok: [localhost] deploy-partition | ok: [localhost] deploy-control-plane | deploy-control-plane | PLAY RECAP ********************************************************************* deploy-control-plane | localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 deploy-control-plane | deploy-partition | deploy-partition | PLAY RECAP ********************************************************************* deploy-partition | localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 deploy-partition | deploy-partition | - extracting ansible-common to /root/.ansible/roles/ansible-common deploy-partition | - ansible-common (v0.5.5) was installed successfully deploy-control-plane | - extracting ansible-common to /root/.ansible/roles/ansible-common deploy-control-plane | - ansible-common (v0.5.5) was installed successfully deploy-partition | - extracting metal-ansible-modules to /root/.ansible/roles/metal-ansible-modules deploy-partition | - metal-ansible-modules (v0.1.1) was installed successfully deploy-control-plane | - extracting metal-ansible-modules to /root/.ansible/roles/metal-ansible-modules deploy-control-plane | - metal-ansible-modules (v0.1.1) was installed successfully deploy-control-plane | - extracting metal-roles to /root/.ansible/roles/metal-roles deploy-control-plane | - metal-roles (v0.3.3) was installed successfully deploy-partition | - extracting metal-roles to /root/.ansible/roles/metal-roles deploy-partition | - metal-roles (v0.3.3) was installed successfully deploy-control-plane | deploy-control-plane | PLAY [deploy control plane] **************************************************** deploy-control-plane | deploy-control-plane | TASK [ingress-controller : Apply mandatory nginx-ingress definition] *********** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [ingress-controller : Deploy nginx-ingress service] *********************** deploy-partition | [WARNING]: * Failed to parse 
/root/.ansible/roles/ansible- deploy-partition | common/inventory/vagrant/vagrant.py with script plugin: Inventory script deploy-partition | (/root/.ansible/roles/ansible-common/inventory/vagrant/vagrant.py) had an deploy-partition | execution error: Traceback (most recent call last): File deploy-partition | "/root/.ansible/roles/ansible-common/inventory/vagrant/vagrant.py", line 452, deploy-partition | in main() File "/root/.ansible/roles/ansible- deploy-partition | common/inventory/vagrant/vagrant.py", line 447, in main hosts, meta_vars = deploy-partition | list_running_hosts() File "/root/.ansible/roles/ansible- deploy-partition | common/inventory/vagrant/vagrant.py", line 414, in list_running_hosts _, deploy-partition | host, key, value = line.split(',')[:4] ValueError: not enough values to unpack deploy-partition | (expected 4, got 1) deploy-partition | [WARNING]: * Failed to parse /root/.ansible/roles/ansible- deploy-partition | common/inventory/vagrant/vagrant.py with ini plugin: deploy-partition | /root/.ansible/roles/ansible-common/inventory/vagrant/vagrant.py:6: Expected deploy-partition | key=value host variable assignment, got: re deploy-partition | [WARNING]: Unable to parse /root/.ansible/roles/ansible- deploy-partition | common/inventory/vagrant/vagrant.py as an inventory source deploy-partition | [WARNING]: Unable to parse /root/.ansible/roles/ansible- deploy-partition | common/inventory/vagrant as an inventory source deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/prepare : Create namespace for metal stack] *** deploy-partition | deploy-partition | PLAY [pre-deployment checks] *************************************************** deploy-partition | deploy-partition | TASK [get vagrant version] ***************************************************** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/nsq : Gather release versions] *********** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/nsq : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/nsq : Deploy nsq] ************************ deploy-partition | changed: [localhost] deploy-partition | deploy-partition | TASK [check vagrant version] *************************************************** deploy-partition | skipping: [localhost] deploy-partition | deploy-partition | PLAY [deploy leaves and docker] ************************************************ deploy-partition | deploy-partition | TASK [Gathering Facts] ********************************************************* deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/nsq : Set services for patching ingress controller service exposal] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/nsq : Patch tcp-services in ingress controller] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/nsq : Expose tcp services in ingress controller] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal-db : Gather release versions] ****** deploy-control-plane | 
skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal-db : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [Deploy metal db] ********************************************************* deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/rethinkdb-backup-restore : Gather release versions] *** deploy-control-plane | skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/rethinkdb-backup-restore : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/rethinkdb-backup-restore : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/rethinkdb-backup-restore : Deploy rethinkdb (backup-restore)] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/ipam-db : Gather release versions] ******* deploy-control-plane | skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/ipam-db : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [Deploy ipam db] ********************************************************** deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/postgres-backup-restore : Gather release versions] *** deploy-control-plane | skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/postgres-backup-restore : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/postgres-backup-restore : Deploy postgres (backup-restore)] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/masterdata-db : Gather release versions] *** deploy-control-plane | skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/masterdata-db : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [Deploy masterdata db] **************************************************** deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/postgres-backup-restore : Gather release versions] *** deploy-control-plane | skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/postgres-backup-restore : Check mandatory variables for this role are set] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/postgres-backup-restore : Deploy postgres (backup-restore)] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Gather release versions] ********* deploy-control-plane | skipping: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Check mandatory variables for this role are set] 
*** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [Deploy metal control plane] ********************************************** deploy-control-plane | deploy-control-plane | TASK [ansible-common/roles/helm-chart : Create folder for charts and values] *** deploy-control-plane | changed: [localhost] deploy-control-plane | deploy-control-plane | TASK [ansible-common/roles/helm-chart : Copy over custom helm charts] ********** deploy-control-plane | changed: [localhost] deploy-control-plane | deploy-control-plane | TASK [ansible-common/roles/helm-chart : Template helm value file] ************** deploy-control-plane | changed: [localhost] deploy-control-plane | deploy-control-plane | TASK [ansible-common/roles/helm-chart : Calculate hash of configuration] ******* deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [ansible-common/roles/helm-chart : Deploy helm chart (metal-control-plane)] *** deploy-partition | fatal: [leaf02]: UNREACHABLE! => changed=false deploy-partition | msg: 'Failed to connect to the host via ssh: ssh: connect to host leaf02 port 22: No route to host' deploy-partition | unreachable: true deploy-partition | fatal: [leaf01]: UNREACHABLE! => changed=false deploy-partition | msg: 'Failed to connect to the host via ssh: ssh: connect to host leaf01 port 22: No route to host' deploy-partition | unreachable: true deploy-partition | deploy-partition | PLAY RECAP ********************************************************************* deploy-partition | leaf01 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0 deploy-partition | leaf02 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0 deploy-partition | localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 deploy-partition | deploy-partition exited with code 4 deploy-control-plane | changed: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Set services for patching ingress controller service exposal] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Patch tcp-services in ingress controller] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Patch udp-services in ingress controller] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Expose tcp services in ingress controller] *** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | TASK [metal-roles/control-plane/roles/metal : Wait until api is available] ***** deploy-control-plane | ok: [localhost] deploy-control-plane | deploy-control-plane | PLAY RECAP ********************************************************************* deploy-control-plane | localhost : ok=30 changed=4 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0 deploy-control-plane | deploy-control-plane exited with code 0 Bringing machine 'machine01' up with 'libvirt' provider... Bringing machine 'machine02' up with 'libvirt' provider... ==> machine01: Creating domain with the following settings... ==> machine02: Creating domain with the following settings... 
==> machine02: -- Name: metalmachine02 ==> machine01: -- Name: metalmachine01 ==> machine02: -- Forced UUID: 2294c949-88f6-5390-8154-fa53d93a3313 ==> machine02: -- Domain type: kvm ==> machine01: -- Forced UUID: e0ab02d2-27cd-5a5e-8efc-080ba80cf258 ==> machine01: -- Domain type: kvm ==> machine02: -- Cpus: 1 ==> machine02: -- Feature: acpi ==> machine01: -- Cpus: 1 ==> machine02: -- Feature: apic ==> machine02: -- Feature: pae ==> machine01: -- Feature: acpi ==> machine01: -- Feature: apic ==> machine02: -- Memory: 1536M ==> machine02: -- Management MAC: ==> machine01: -- Feature: pae ==> machine02: -- Loader: /usr/share/OVMF/OVMF_CODE.fd ==> machine02: -- Nvram: ==> machine01: -- Memory: 1536M ==> machine01: -- Management MAC: ==> machine02: -- Storage pool: default ==> machine01: -- Loader: /usr/share/OVMF/OVMF_CODE.fd ==> machine01: -- Nvram: ==> machine02: -- Image: (G) ==> machine01: -- Storage pool: default ==> machine01: -- Image: (G) ==> machine02: -- Volume Cache: default ==> machine02: -- Kernel: ==> machine01: -- Volume Cache: default ==> machine02: -- Initrd: ==> machine01: -- Kernel: ==> machine02: -- Graphics Type: vnc ==> machine02: -- Graphics Port: -1 ==> machine01: -- Initrd: ==> machine01: -- Graphics Type: vnc ==> machine02: -- Graphics IP: 127.0.0.1 ==> machine01: -- Graphics Port: -1 ==> machine02: -- Graphics Password: Not defined ==> machine01: -- Graphics IP: 127.0.0.1 ==> machine02: -- Video Type: cirrus ==> machine01: -- Graphics Password: Not defined ==> machine01: -- Video Type: cirrus ==> machine02: -- Video VRAM: 9216 ==> machine01: -- Video VRAM: 9216 ==> machine02: -- Sound Type: ==> machine01: -- Sound Type: ==> machine02: -- Keymap: de ==> machine01: -- Keymap: de ==> machine02: -- TPM Path: ==> machine01: -- TPM Path: ==> machine02: -- Boot device: network ==> machine01: -- Boot device: network ==> machine02: -- Boot device: hd ==> machine02: -- Disks: sda(qcow2,6000M) ==> machine02: -- Disk(sda): /var/lib/libvirt/images/metalmachine02-sda.qcow2 ==> machine01: -- Boot device: hd ==> machine01: -- Disks: sda(qcow2,6000M) ==> machine02: -- INPUT: type=mouse, bus=ps2 ==> machine02: -- RNG device model: random ==> machine01: -- Disk(sda): /var/lib/libvirt/images/metalmachine01-sda.qcow2 ==> machine01: -- INPUT: type=mouse, bus=ps2 ==> machine01: -- RNG device model: random ==> machine02: Starting domain. ==> machine01: Starting domain. ```
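
The interesting part of the full log is the inventory warning: the Vagrant dynamic inventory script (`/root/.ansible/roles/ansible-common/inventory/vagrant/vagrant.py`) aborts with `ValueError: not enough values to unpack` while splitting what looks like Vagrant's comma-separated machine-readable output. With no usable inventory, Ansible falls back to treating `leaf01`/`leaf02` as plain hostnames, which is exactly the `UNREACHABLE` error above. As a purely illustrative check (the exact command the script runs is an assumption), you can inspect that output yourself:

```bash
# Vagrant's global --machine-readable flag prints comma-separated records.
# If the global state the tooling sees contains no machines, there are no
# "id,name,provider,state"-style records to split and the parser fails.
vagrant global-status --machine-readable | head -n 20
```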

After waiting for some time, `vagrant global-status` returns:

id       name      provider state   directory                            
-------------------------------------------------------------------------
4da85f4  leaf01    libvirt running /home/greesha/Data/Projects/mini-lab 
45d4ab1  leaf02    libvirt running /home/greesha/Data/Projects/mini-lab 
12f0ebf  machine02 libvirt running /home/greesha/Data/Projects/mini-lab 
1d95c76  machine01 libvirt running /home/greesha/Data/Projects/mini-lab 

So the machines and switches are running, but `docker-compose run metalctl machine ls` returns an empty list of machines. I would appreciate any help with this.
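
For context on why the list stays empty: in the mini-lab, machines only show up in `metalctl machine ls` after they have PXE-booted through the leaf switches and registered themselves with the metal-api, so a failed `deploy-partition` run normally means no machines can ever appear, no matter how long you wait. A minimal way to re-check both sides (purely illustrative):

```bash
# Control-plane side: does the metal-api report any registered machines?
docker-compose run metalctl machine ls

# Host side: are the VMs actually up according to Vagrant?
vagrant global-status
```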

GrigoriyMikhalkin commented 3 years ago

The problem occurred because I had set a different default directory for Vagrant's global state via the `VAGRANT_HOME` environment variable.
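
In case someone else runs into this: the deployment tooling discovers the leaf and machine VMs through Vagrant's global state, so that state has to live where the tooling expects it. A minimal fix, assuming you want to return to Vagrant's documented default location:

```bash
# Either drop the override entirely...
unset VAGRANT_HOME

# ...or export the very same value in every shell that runs the mini-lab
# targets, so `vagrant global-status` sees the same set of machines.
export VAGRANT_HOME="$HOME/.vagrant.d"   # Vagrant's default

# then re-run the example from the README
make
```

Note that Vagrant's machine index moves along with `VAGRANT_HOME`, so after changing it the existing VMs may need to be destroyed and recreated.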