crc-org/crc

CRC is a tool to help you run containers. It manages a local OpenShift 4.x cluster, Microshift, or a Podman VM optimized for testing and development purposes.
https://crc.dev
Apache License 2.0

CRC setup always on 127.0.0.1 #2593

Closed: xgxtbku closed this issue 3 years ago

xgxtbku commented 3 years ago

Hello.

I have a rather frustrating problem. I tried to install and run crc on the latest Ubuntu, but the setup always sets 127.0.0.1 as the IP, and I cannot reach the console at console-openshift-console.apps-crc.testing. I am using a VPN to connect to my company network so that I can SSH to the server. I tried the https://github.com/code-ready/crc/wiki/VPN-support--with-an--userland-network-stack and https://github.com/code-ready/crc/issues/549 guides with no success. Can somebody help me install it correctly on a different IP address? I believe the issue is mainly that crc tries to use localhost as the IP.

gbraad commented 3 years ago

You are not following the issue template, so it is hard to understand which steps have been tried and what the configuration is.

However, in user networking mode, 127.0.0.1 is supposed to be used to overcome the route-all setup that most VPNs force. In that scenario, only ports on 127.0.0.1 are usable; for example, port 2222 on the host forwards to the VM's internal port 22 for SSH. Can this be reached with ssh core@127.0.0.1 -p2222 ?
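
For example, a quick way to test that forward from the host (the identity file path is an assumption based on the default CRC layout, which matches the debug log later in this thread):

$ ssh -i ~/.crc/machines/crc/id_ecdsa -p 2222 core@127.0.0.1 'echo ok'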

xgxtbku commented 3 years ago

So more details:

SYS info

NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"

Content of /etc/hosts

127.0.0.1 localhost api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
127.0.1.1 openadmin
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes

Content of /etc/resolv.conf

#Generated by NetworkManager
search domain.name
nameserver 127.0.0.1
nameserver 127.0.1.1
nameserver 8.8.8.8
nameserver 1.1.1.1
nameserver 1.0.0.1

Content of /etc/NetworkManager/NetworkManager.conf

[main]
plugins=ifupdown,keyfile

[ifupdown]
managed=false

[device]
wifi.scan-rand-mac-address=no

Content of /etc/netplan/00-installer-config.yaml

network:
  ethernets:
    eno1:
      addresses:
      - 172.16.150.11/24
      gateway4: 172.16.150.1
      nameservers:
        addresses:
        - 172.16.120.20
        search: []
  version: 2

I always need to run the daemon in a separate SSH session.

Content of the crc config

- consent-telemetry                     : yes
- network-mode                          : user
- skip-check-network-manager-installed  : true
- skip-check-network-manager-running    : true
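
For reference, values like these are normally applied with crc config set before running crc setup; a sketch using the keys listed above:

$ crc config set network-mode user
$ crc config set skip-check-network-manager-installed true
$ crc config set skip-check-network-manager-running true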

Output of the daemon while I run crc start

INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if AppArmor is configured
INFO Checking if vsock is correctly configured
INFO listening vsock://:1024
INFO listening /home/crcadmin/.crc/crc-http.sock
@ - - [21/Jul/2021:05:34:13 +0000] "GET /api/version HTTP/1.1" 200 89
@ - - [21/Jul/2021:05:34:13 +0000] "GET /network/services/forwarder/all HTTP/1.1" 200 3
@ - - [21/Jul/2021:05:34:13 +0000] "POST /network/services/forwarder/expose HTTP/1.1" 200 0
@ - - [21/Jul/2021:05:34:13 +0000] "POST /network/services/forwarder/expose HTTP/1.1" 200 0
@ - - [21/Jul/2021:05:34:13 +0000] "POST /network/services/forwarder/expose HTTP/1.1" 200 0
@ - - [21/Jul/2021:05:34:13 +0000] "POST /network/services/forwarder/expose HTTP/1.1" 200 0
2021/07/21 05:34:24 tcpproxy: for incoming conn 127.0.0.1:33268, error dialing "192.168.127.2:22": context deadline exceeded
INFO new connection from vm(3):3111537605 to host(2):1024
INFO assigning 192.168.127.2/24 to vm(3):3111537605
2021/07/21 05:34:36 tcpproxy: for incoming conn 127.0.0.1:33270, error dialing "192.168.127.2:22": context deadline exceeded
192.168.127.2 - - [21/Jul/2021:05:35:59 +0000] "POST /hosts/add HTTP/1.1" 200 0
192.168.127.2 - - [21/Jul/2021:05:35:59 +0000] "POST /hosts/add HTTP/1.1" 200 0
192.168.127.2 - - [21/Jul/2021:05:35:59 +0000] "POST /hosts/add HTTP/1.1" 200 0
192.168.127.2 - - [21/Jul/2021:05:35:59 +0000] "POST /hosts/add HTTP/1.1" 200 0
192.168.127.2 - - [21/Jul/2021:05:35:59 +0000] "POST /hosts/add HTTP/1.1" 200 0
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.34:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.8:8443: connect: connection timed out

Output of crc start --log-level debug

DEBU CodeReady Containers version: 1.29.1+bc5f4409
DEBU OpenShift version: 4.7.18 (embedded in executable)
DEBU Running 'crc start'
DEBU Total memory of system is 16671539200 bytes
DEBU No new version available. The latest version is 1.29.1
DEBU Found binary path at /home/crcadmin/.crc/bin/crc-driver-libvirt
DEBU Launching plugin server for driver libvirt
DEBU Plugin server listening at address 127.0.0.1:38999
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Fetching VM..."
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Closing plugin on server side"
DEBU Checking if systemd-resolved.service is running
DEBU Running 'systemctl status systemd-resolved.service'
DEBU systemd-resolved.service is already running
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
DEBU Running '/home/crcadmin/.crc/bin/crc-admin-helper-linux --version'
DEBU Found crc-admin-helper-linux version 0.0.6
DEBU crc-admin-helper executable already cached
INFO Checking for obsolete admin-helper executable
DEBU Checking if an older admin-helper executable is installed
DEBU No older admin-helper executable found
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
DEBU Total memory of system is 16671539200 bytes
INFO Checking if Virtualization is enabled
DEBU Checking if the vmx/svm flags are present in /proc/cpuinfo
DEBU CPU virtualization flags are good
INFO Checking if KVM is enabled
DEBU Checking if /dev/kvm exists
DEBU /dev/kvm was found
INFO Checking if libvirt is installed
DEBU Checking if 'virsh' is available
DEBU 'virsh' was found in /usr/bin/virsh
DEBU Checking 'virsh capabilities' for libvirtd/qemu availability
DEBU Running 'virsh --readonly --connect qemu:///system capabilities'
DEBU Found x86_64 hypervisor with 'hvm' capabilities
INFO Checking if user is part of libvirt group
DEBU Checking if current user is part of the libvirt group
DEBU Current user is already in the libvirt group
INFO Checking if active user/process is currently part of the libvirt group
DEBU libvirt group is active for the current user/process
INFO Checking if libvirt daemon is running
DEBU Checking if libvirtd service is running
DEBU Running 'systemctl status virtqemud.socket'
DEBU Command failed: exit status 4
DEBU stdout:
DEBU stderr: Unit virtqemud.socket could not be found.
DEBU virtqemud.socket is neither running nor listening
DEBU Running 'systemctl status libvirtd.socket'
DEBU libvirtd.socket is running
INFO Checking if a supported libvirt version is installed
DEBU Checking if libvirt version is >=3.4.0
DEBU Running 'virsh -v'
INFO Checking if crc-driver-libvirt is installed
DEBU Checking if crc-driver-libvirt is installed
DEBU Running '/home/crcadmin/.crc/bin/crc-driver-libvirt version'
DEBU Found crc-driver-libvirt version 0.13.1
DEBU crc-driver-libvirt is already installed
INFO Checking if AppArmor is configured
INFO Checking if vsock is correctly configured
DEBU Running 'getcap /home/crcadmin/bin/crc'
DEBU Checking file: /home/crcadmin/.crc/machines/crc/.crc-exist
DEBU Found binary path at /home/crcadmin/.crc/bin/crc-driver-libvirt
DEBU Launching plugin server for driver libvirt
DEBU Plugin server listening at address 127.0.0.1:39461
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetBundleName
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Fetching VM..."
INFO Starting CodeReady Containers VM for OpenShift 4.7.18...
DEBU Updating CRC VM configuration
DEBU (crc) Calling .GetConfigRaw
DEBU (crc) Calling .Start
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Starting VM crc"
DEBU (crc) DBG | time="2021-07-21T05:34:13Z" level=debug msg="Validating storage pool"
DEBU (crc) Calling .GetConfigRaw
DEBU Waiting for machine to be running, this may take a few minutes...
DEBU retry loop: attempt 0
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2021-07-21T05:34:14Z" level=debug msg="Getting current state..."
DEBU Machine is up and running!
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2021-07-21T05:34:14Z" level=debug msg="Getting current state..."
INFO CodeReady Containers instance is running with IP 127.0.0.1
DEBU Waiting until ssh is available
DEBU retry loop: attempt 0
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/home/crcadmin/.crc/cache/crc_libvirt_4.7.18/id_ecdsa_crc /home/crcadmin/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:33268->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:33268->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/home/crcadmin/.crc/cache/crc_libvirt_4.7.18/id_ecdsa_crc /home/crcadmin/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:33270->127.0.0.1:2222: read: connection reset by peer, output:
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:33270->127.0.0.1:2222: read: connection reset by peer\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: exit 0
DEBU Using ssh private keys: [/home/crcadmin/.crc/cache/crc_libvirt_4.7.18/id_ecdsa_crc /home/crcadmin/.crc/machines/crc/id_ecdsa]
DEBU SSH command results: err: <nil>, output:
INFO CodeReady Containers VM is running
DEBU Running SSH command: cat /home/core/.ssh/authorized_keys
DEBU SSH command results: err: <nil>, output: ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAEo7/IU0jkF5jfF91qAfW9tBCb3Q7aZbziiFlzM+I9/hvL0Mcvp6dsj2WmqSh1r+cjkXjne81nWgSy9ikLCbkfLaACAfYiVd6v2KmgDdH8SRTXLWZYjhXLsgB1fBq/4N9dgbG7owhDlIVrZ7iA/lMn9GEniQvuiEKK11HTlsaInHdxeTw==
DEBU Running SSH command: realpath /dev/disk/by-label/root
DEBU SSH command results: err: <nil>, output: /dev/vda4
DEBU Using root access: Growing /dev/vda4 partition
DEBU Running SSH command: sudo /usr/bin/growpart /dev/vda 4
DEBU SSH command results: err: Process exited with status 1, output: NOCHANGE: partition 4 is size 63961055. it cannot be grown
DEBU No free space after /dev/vda4, nothing to do
DEBU Using root access: make root Podman socket accessible
DEBU Running SSH command: sudo chmod 777 /run/podman/ /run/podman/podman.sock
DEBU SSH command results: err: <nil>, output:
DEBU Running '/home/crcadmin/.crc/bin/crc-admin-helper-linux rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing'
DEBU Running '/home/crcadmin/.crc/bin/crc-admin-helper-linux add 127.0.0.1 api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing'
DEBU Creating /etc/resolv.conf with permissions 0644 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU retry loop: attempt 0
DEBU Running SSH command: host -R 3 foo.apps-crc.testing
DEBU SSH command results: err: <nil>, output: foo.apps-crc.testing has address 192.168.127.2
INFO Check internal and public DNS query...
DEBU Running SSH command: host -R 3 quay.io
DEBU SSH command results: err: <nil>, output: quay.io has address 3.213.173.170
quay.io has address 34.224.196.162
quay.io has address 50.16.140.223
quay.io has address 3.216.152.103
quay.io has address 54.197.99.84
quay.io has address 3.233.133.41
quay.io has address 54.156.10.58
quay.io has address 44.193.101.5
INFO Check DNS query from host...
DEBU api.crc.testing resolved to [127.0.0.1]
WARN Wildcard DNS resolution for apps-crc.testing does not appear to be working
DEBU Running SSH command: test -e /var/lib/kubelet/config.json
DEBU SSH command results: err: <nil>, output:
INFO Verifying validity of the kubelet certificates...
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-01T03:53:32+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-01T03:54:44+00:00
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt -noout -enddate | cut -d= -f 2)" --iso-8601=seconds
DEBU SSH command results: err: <nil>, output: 2021-08-19T19:13:42+00:00
INFO Starting OpenShift kubelet service
DEBU Using root access: Executing systemctl daemon-reload command
DEBU Running SSH command: sudo systemctl daemon-reload
DEBU SSH command results: err: <nil>, output:
DEBU Using root access: Executing systemctl start kubelet
DEBU Running SSH command: sudo systemctl start kubelet
DEBU SSH command results: err: <nil>, output:
INFO Waiting for kube-apiserver availability... [takes around 2min]
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 1
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 2
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port?
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 3
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 4
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 5
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 1, output:
DEBU error: the server doesn't have a resource type "nodes"
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1\n - sleeping 1s
DEBU retry loop: attempt 6
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 7
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: Process exited with status 124, output:
DEBU
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124\n - sleeping 1s
DEBU retry loop: attempt 8
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME                 STATUS   ROLES           AGE   VERSION
crc-4727w-master-0   Ready    master,worker   20d   v1.20.0+87cc9a4
DEBU NAME                 STATUS   ROLES           AGE   VERSION
crc-4727w-master-0   Ready    master,worker   20d   v1.20.0+87cc9a4
DEBU Waiting for availability of resource type 'secret'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME                       TYPE                                  DATA   AGE
builder-dockercfg-65hmj    kubernetes.io/dockercfg               1      20d
builder-token-pqzmg        kubernetes.io/service-account-token   4      20d
builder-token-x4xpk        kubernetes.io/service-account-token   4      20d
default-dockercfg-jfdwm    kubernetes.io/dockercfg               1      20d
default-token-jggxq        kubernetes.io/service-account-token   4      20d
default-token-mhbsr        kubernetes.io/service-account-token   4      20d
deployer-dockercfg-q9k6z   kubernetes.io/dockercfg               1      20d
deployer-token-2zcv7       kubernetes.io/service-account-token   4      20d
deployer-token-2zxxt       kubernetes.io/service-account-token   4      20d
DEBU NAME                       TYPE                                  DATA   AGE
builder-dockercfg-65hmj    kubernetes.io/dockercfg               1      20d
builder-token-pqzmg        kubernetes.io/service-account-token   4      20d
builder-token-x4xpk        kubernetes.io/service-account-token   4      20d
default-dockercfg-jfdwm    kubernetes.io/dockercfg               1      20d
default-token-jggxq        kubernetes.io/service-account-token   4      20d
default-token-mhbsr        kubernetes.io/service-account-token   4      20d
deployer-dockercfg-q9k6z   kubernetes.io/dockercfg               1      20d
deployer-token-2zcv7       kubernetes.io/service-account-token   4      20d
deployer-token-2zxxt       kubernetes.io/service-account-token   4      20d
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Waiting for availability of resource type 'clusterversion'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get clusterversion --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.18    True        False         20d     Cluster version is 4.7.18
DEBU NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.18    True        False         20d     Cluster version is 4.7.18
DEBU Running SSH command: timeout 30s oc get clusterversion version -o jsonpath="{['spec']['clusterID']}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: 332acc9d-46da-4327-b823-1301782e720b
DEBU Creating /tmp/routes-controller.json with permissions 0444 in the CRC VM
DEBU Running SSH command: <hidden>
DEBU SSH command succeeded
DEBU Running SSH command: timeout 30s oc apply -f /tmp/routes-controller.json --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: pod/routes-controller configured
DEBU Waiting for availability of resource type 'configmaps'
DEBU retry loop: attempt 0
DEBU Running SSH command: timeout 5s oc get configmaps --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: NAME               DATA   AGE
kube-root-ca.crt   1      20d
DEBU NAME               DATA   AGE
kube-root-ca.crt   1      20d
DEBU Running SSH command: timeout 30s oc get configmaps admin-kubeconfig-client-ca -n openshift-config -o jsonpath="{.data.ca-bundle\.crt}" --context admin --cluster crc --kubeconfig /opt/kubeconfig
DEBU SSH command results: err: <nil>, output: -----BEGIN CERTIFICATE-----
CERTIFICATE GOES HERE
-----END CERTIFICATE-----
INFO Starting OpenShift cluster... [waiting for the cluster to stabilize]
DEBU authentication operator is degraded, Reason: OAuthRouteCheckEndpointAccessibleController_SyncError
DEBU authentication operator is still progressing, Reason: OAuthVersionRoute_WaitingForRoute
DEBU authentication operator not available, Reason: OAuthRouteCheckEndpointAccessibleController_EndpointUnavailable::OAuthVersionRoute_RequestFailed
DEBU console operator is degraded, Reason: RouteHealth_FailedGet
DEBU ingress operator is degraded, Reason: IngressControllersDegraded
INFO Operator authentication is progressing
DEBU authentication operator is degraded, Reason: OAuthRouteCheckEndpointAccessibleController_SyncError
DEBU authentication operator is still progressing, Reason: OAuthVersionRoute_WaitingForRoute
DEBU authentication operator not available, Reason: OAuthRouteCheckEndpointAccessibleController_EndpointUnavailable::OAuthVersionRoute_RequestFailed
DEBU console operator is degraded, Reason: RouteHealth_FailedGet
DEBU ingress operator is degraded, Reason: IngressControllersDegraded
DEBU marketplace operator is still progressing, Reason: OperatorStarting
DEBU marketplace operator not available, Reason: OperatorStarting
DEBU network operator is still progressing, Reason: Deploying
INFO 3 operators are progressing: authentication, marketplace, network
DEBU console operator is degraded, Reason: RouteHealth_FailedGet
INFO Operator console is degraded
INFO All operators are available. Ensuring stability...
INFO Operators are stable (2/3)...
INFO Operators are stable (3/3)...
DEBU Cluster took 2m30.251303791s to stabilize
INFO Adding crc-admin and crc-developer contexts to kubeconfig...
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU (crc) DBG | time="2021-07-21T05:37:58Z" level=debug msg="Closing plugin on server side"
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: PASSWORD

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443

Answer to your question:

crcadmin@openadmin:/etc/NetworkManager$ ssh core@127.0.0.1 -p2222
ssh: connect to host 127.0.0.1 port 2222: Connection refused

If you need any more information, please ask. I am not a Unix expert, and hardly anybody in our company knows Ubuntu Server well, but we really want to learn and build up an OpenShift skillset; we see a future in this opportunity. I was able to run OpenShift on my local Windows PC, but it does not have enough memory and CPU to run it constantly, so we got a dedicated server. Please help us through the learning phase. Any help is appreciated. :)

gbraad commented 3 years ago

It seems the machine is locally able to connect to the daemon, as I see:

192.168.127.2 - - [21/Jul/2021:05:35:59 +0000] "POST /hosts/add HTTP/1.1" 200 0

and the actual cluster finishes startup.

What does host api.crc.testing say?

What about oc login -u developer https://127.0.0.1:6443 ?


crcadmin@openadmin:/etc/NetworkManager$ ssh core@127.0.0.1 -p2222
ssh: connect to host 127.0.0.1 port 2222: Connection refused

Not sure why you get this, as crc is able to connect to this port during the startup phase.

gbraad commented 3 years ago

I am not a Unix expert, and hardly anybody in our company knows Ubuntu Server well, but we really want to learn and build up an OpenShift skillset; we see a future in this opportunity. I was able to run OpenShift on my local Windows PC, but it does not have enough memory and CPU to run it constantly, so we got a dedicated server. Please help us through the learning phase. Any help is appreciated. :)

Understood. No worries ...

xgxtbku commented 3 years ago

This disturbs me in the crc start output:

WARN Wildcard DNS resolution for apps-crc.testing does not appear to be working

And this in the daemon output:

2021/07/21 07:17:22 tcpproxy: for incoming conn 127.0.0.1:34624, error dialing "192.168.127.2:22": context deadline exceeded
INFO new connection from vm(3):2932124478 to host(2):1024
INFO assigning 192.168.127.2/24 to vm(3):2932124478
2021/07/21 07:17:33 tcpproxy: for incoming conn 127.0.0.1:34626, error dialing "192.168.127.2:22": context deadline exceeded

ERRO net.Dial() = dial tcp 10.217.0.44:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.41:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.41:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.41:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.44:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.44:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.44:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.41:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.44:8443: connect: connection timed out
ERRO net.Dial() = dial tcp 10.217.0.44:8443: connect: connection timed out

What does host api.crc.testing say?

Host api.crc.testing not found: 3(NXDOMAIN)

oc login -u developer https://127.0.0.1:6443

crcadmin@openadmin:~$ oc login -u developer https://127.0.0.1:6443
The server is using a certificate that does not match its hostname: x509: certificate is valid for 10.217.4.1, not 127.0.0.1
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://127.0.0.1:6443 (openshift)
Username: developer
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

nslookup api.crc.testing

Server:         8.8.8.8
Address:        8.8.8.8#53

** server can't find api.crc.testing: NXDOMAIN

nslookup https://console-openshift-console.apps-crc.testing

Server:         8.8.8.8
Address:        8.8.8.8#53

** server can't find https://console-openshift-console.apps-crc.testing: NXDOMAIN

crc status

CRC VM:          Running
OpenShift:       Running (v4.7.18)
Disk Usage:      13.36GB of 32.74GB (Inside the CRC VM)
Cache Usage:     13.34GB
Cache Directory: /home/crcadmin/.crc/cache

Also, if I close the terminal where I started the crc daemon, the daemon stops. How can I run it in the background?

gbraad commented 3 years ago

So this is clearly a DNS/hosts issue due to the use of Ubuntu. We have only confirmed this scenario to work as expected on (RH)EL and Fedora.

gbraad commented 3 years ago

What is in /etc/hosts?

$ cat /etc/hosts

xgxtbku commented 3 years ago

Content of /etc/hosts:

127.0.0.1 localhost api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing
127.0.1.1 openadmin
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Yup, I've read that Ubuntu can be buggy in some cases.

Ubuntu Server is still problematic because it uses systemd-networkd.

Our backup plan is to install CentOS Stream or Fedora tomorrow as a last resort. Which one do you recommend in case we can't solve this DNS/hosts issue?

gbraad commented 3 years ago

The hosts file does not get written; @guillaumerose any idea?

CentOS should work OOTB.

guillaumerose commented 3 years ago

Given the previous comments, I understand you run crc on a remote Ubuntu box and want to use it remotely from your laptop. On the server, the crc VM seems to be working fine.

To get it working from your laptop, you need to get the IP of the remote server and add it to the /etc/hosts file of your laptop:

<server-ip-here> api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing

Then you will be able to browse the console or do oc login.
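
With the server address from the netplan config earlier in this thread (172.16.150.11), and assuming your laptop can reach that address over the VPN, the laptop entry would look like:

172.16.150.11 api.crc.testing canary-openshift-ingress-canary.apps-crc.testing console-openshift-console.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing downloads-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing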

gbraad commented 3 years ago

Ah right, user-mode networking and remote access are not a scenario we have worked on.


Note: something we might want to consider when dealing with a remote setup.

xgxtbku commented 3 years ago

@guillaumerose's suggestion solved the hosts issue.

Also, is there any way to run the daemon in the background? Otherwise we'll migrate to CentOS tomorrow.

gbraad commented 3 years ago

The daemon can run in the background with a simple crc daemon &, but we are moving to socket activation with #2459 (a PR is available). This means the daemon is autostarted when requested.
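
If you want it to survive logging out of the SSH session, a systemd user unit is one option. A minimal sketch (not shipped with crc; the binary path assumes ~/bin/crc as in the logs above, and lingering must be enabled so the unit keeps running after logout):

# ~/.config/systemd/user/crc-daemon.service
[Unit]
Description=CRC daemon (user networking mode)

[Service]
ExecStart=%h/bin/crc daemon
Restart=on-failure

[Install]
WantedBy=default.target

$ systemctl --user daemon-reload
$ systemctl --user enable --now crc-daemon.service
$ loginctl enable-linger $USER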

gbraad commented 3 years ago

Something we want to document or resolve in the future (like a remote scenario)?

cfergeau commented 3 years ago

I think this issue can be closed?

xgxtbku commented 3 years ago

I think this issue can be closed?

Please leave it open for a while; I'll try to write a tutorial next week based on these experiences.

gbraad commented 3 years ago

any updates?

I'd rather close this now.

ringerc commented 2 months ago

For others: don't hack /etc/hosts like this.

Use resolvectl to manage DNS resolution on any systemd-based Linux machine, including Ubuntu, RHEL, etc.

You can check how a name is being resolved with:

$ resolvectl query api.crc.testing           
api.crc.testing: 192.168.130.11

I think you can tell the system to use the crc interface for DNS queries for domains under crc.testing with something like:

resolvectl domain crc crc.testing apps-crc.testing

See man resolvectl and man 5 systemd.network under Domains= for details. The list of search domains is whitespace-separated.
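
Building on that, a sketch that routes only the crc domains to the crc interface and points that link's DNS at the VM (the 192.168.130.11 address is taken from the query above; the ~ prefix marks routing-only domains, and the quotes just keep the shell from expanding the tilde):

$ resolvectl dns crc 192.168.130.11
$ resolvectl domain crc '~crc.testing' '~apps-crc.testing'
$ resolvectl query console-openshift-console.apps-crc.testing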