crc-org / crc

CRC is a tool to help you run containers. It manages a local OpenShift 4.x cluster, a MicroShift cluster, or a Podman VM optimized for testing and development purposes.
https://crc.dev
Apache License 2.0

[BUG] CRC not starting when increasing the disk-size #4218

Open rmestrum opened 5 months ago

rmestrum commented 5 months ago

General information

CRC version

CRC version: 2.37.1+36d451
OpenShift version: 4.15.14

CRC status

DEBU CRC version: 2.37.1+36d451                   
DEBU OpenShift version: 4.15.14                   
DEBU Running 'crc status'                         
CRC VM:          Running
OpenShift:       Unreachable (v4.15.14)
Disk Usage:      0B of 0B (Inside the CRC VM)
Cache Usage:     38.32GB
Cache Directory: /Users/rmestrum/.crc/cache

CRC config

- consent-telemetry                     : no
- cpus                                  : 6

Host Operating System

ProductName:        macOS
ProductVersion:     14.5
BuildVersion:       23F79

Steps to reproduce

  1. crc setup
  2. crc config set cpus 6
  3. crc start --disk-size 100

Expected

INFO Using bundle path /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64.crcbundle 
INFO Checking if running macOS version >= 13.x    
INFO Checking if running as non-root              
INFO Checking if crc-admin-helper executable is cached 
INFO Checking if running on a supported CPU architecture 
INFO Checking if crc executable symlink exists    
INFO Checking minimum RAM requirements            
INFO Check if Podman binary exists in: /Users/rmestrum/.crc/bin/oc 
INFO Checking if running emulated on Apple silicon 
INFO Checking if vfkit is installed               
INFO Checking if old launchd config for tray and/or daemon exists 
INFO Checking if crc daemon plist file is present and loaded 
INFO Checking SSH port availability               
INFO Loading bundle: crc_vfkit_4.15.14_arm64...   
INFO Creating CRC VM for OpenShift 4.15.14...     
INFO Generating new SSH key pair...               
INFO Generating new password for the kubeadmin user 
INFO Starting CRC VM for openshift 4.15.14...     
INFO CRC instance is running with IP 127.0.0.1    
INFO CRC VM is running                            
INFO Updating authorized keys...                  
INFO Configuring shared directories               
INFO Check internal and public DNS query...       
INFO Check DNS query from host...                 
INFO Verifying validity of the kubelet certificates... 
INFO Starting kubelet service                     
INFO Waiting for kube-apiserver availability... [takes around 2min] 
INFO Adding user's pull secret to the cluster...  
INFO Updating SSH key to machine config resource... 
INFO Waiting until the user's pull secret is written to the instance disk... 
INFO Changing the password for the kubeadmin user 
INFO Updating cluster ID...                  
...

Actual

INFO Using bundle path /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64.crcbundle
INFO Checking if running macOS version >= 13.x
INFO Checking if running as non-root
INFO Checking if crc-admin-helper executable is cached
INFO Checking if running on a supported CPU architecture
INFO Checking if crc executable symlink exists
INFO Checking minimum RAM requirements
INFO Check if Podman binary exists in: /Users/rmestrum/.crc/bin/oc
INFO Checking if running emulated on Apple silicon
INFO Checking if vfkit is installed
INFO Checking if old launchd config for tray and/or daemon exists
INFO Checking if crc daemon plist file is present and loaded
INFO Checking SSH port availability
INFO Loading bundle: crc_vfkit_4.15.14_arm64...
INFO Creating CRC VM for OpenShift 4.15.14...
INFO Generating new SSH key pair...
INFO Generating new password for the kubeadmin user
INFO Starting CRC VM for openshift 4.15.14...
INFO CRC instance is running with IP 127.0.0.1
INFO CRC VM is running
INFO Updating authorized keys...
INFO Resizing /dev/vda4 filesystem
INFO Configuring shared directories
INFO Check internal and public DNS query...
INFO Check DNS query from host...
INFO Verifying validity of the kubelet certificates...
INFO Starting kubelet service
INFO Waiting for kube-apiserver availability... [takes around 2min]
INFO Adding user's pull secret to the cluster...
INFO Updating SSH key to machine config resource...
INFO Waiting until the user's pull secret is written to the instance disk...
Failed to update pull secret on the disk: Temporary error: pull secret not updated to disk (x31) failed to read /var/lib/kubelet/config.json file: ssh command error:
command : sudo cat /var/lib/kubelet/config.json
err     : Process exited with status 132 from signal ILL :
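As an aside, the `status 132 from signal ILL` in the failure above can be decoded with the usual shell convention (a sketch; the `sig_from_status` helper name is ours, not part of crc):

```shell
# Shell convention: an exit status above 128 means the process was killed
# by signal (status - 128). Status 132 maps to signal 4, SIGILL (illegal
# instruction), i.e. the `sudo cat` inside the VM crashed rather than
# returning an error code of its own.
sig_from_status() {
  echo $(( $1 - 128 ))
}
sig_from_status 132   # → 4 (SIGILL)
```

A SIGILL from a plain `cat` may hint at something broken inside the VM itself rather than a bad pull secret.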

Logs

Before gathering the logs, try the following to see if it fixes your issue:

$ crc delete -f
$ crc cleanup
$ crc setup
$ crc start --log-level debug

Please consider posting the output of `crc start --log-level debug` on http://gist.github.com/ and linking it in the issue.
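One simple way to capture that output for a gist (plain shell, nothing crc-specific) is to pipe it through `tee`:

```shell
# Capture the debug output to a file while still seeing it in the terminal;
# the resulting crc-start-debug.log can then be pasted into a gist.
# (The log file name is an arbitrary choice.)
crc start --log-level debug 2>&1 | tee crc-start-debug.log
```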

adrianriobo commented 5 months ago

Hi @rmestrum, how did you set your pull secret?

I just tested, and the cluster started as expected:

podmanqe@dev-platform-mac5 misc % crc start --disk-size 100 -p ps 
INFO Using bundle path /Users/podmanqe/.crc/cache/crc_vfkit_4.15.14_arm64.crcbundle 
INFO Checking if running macOS version >= 13.x    
INFO Checking if running as non-root              
INFO Checking if crc-admin-helper executable is cached 
INFO Checking if running on a supported CPU architecture 
INFO Checking if crc executable symlink exists    
INFO Checking minimum RAM requirements            
INFO Check if Podman binary exists in: /Users/podmanqe/.crc/bin/oc 
INFO Checking if running emulated on Apple silicon 
INFO Checking if vfkit is installed               
INFO Checking if old launchd config for tray and/or daemon exists 
INFO Checking if crc daemon plist file is present and loaded 
INFO Checking SSH port availability               
INFO Loading bundle: crc_vfkit_4.15.14_arm64...   
INFO Creating CRC VM for OpenShift 4.15.14...     
INFO Generating new SSH key pair...               
INFO Generating new password for the kubeadmin user 
INFO Starting CRC VM for openshift 4.15.14...     
INFO CRC instance is running with IP 127.0.0.1    
INFO CRC VM is running                            
INFO Updating authorized keys...                  
INFO Resizing /dev/vda4 filesystem                
INFO Configuring shared directories               
INFO Check internal and public DNS query...       
INFO Check DNS query from host...                 
INFO Verifying validity of the kubelet certificates... 
INFO Starting kubelet service                     
INFO Waiting for kube-apiserver availability... [takes around 2min] 
INFO Adding user's pull secret to the cluster...  
INFO Updating SSH key to machine config resource... 
INFO Waiting until the user's pull secret is written to the instance disk... 
INFO Changing the password for the kubeadmin user 
INFO Updating cluster ID...                       
INFO Updating root CA cert to admin-kubeconfig-client-ca configmap... 
INFO Starting openshift instance... [waiting for the cluster to stabilize] 
INFO 6 operators are progressing: authentication, console, dns, image-registry, ingress, ... 
INFO 3 operators are progressing: console, image-registry, network 
INFO Operator console is progressing              
INFO All operators are available. Ensuring stability... 
INFO Operators are stable (2/3)...                
INFO Operators are stable (3/3)...                
INFO Adding crc-admin and crc-developer contexts to kubeconfig... 
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: eiVhd-Liq3d-cGb2z-AcMzY

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443
podmanqe@dev-platform-mac5 misc % sw_vers 
ProductName:        macOS
ProductVersion:     14.5
BuildVersion:       23F79
rmestrum commented 5 months ago

I always set the pull secret on the command line using the `-p` option of `crc start`. When I create a new cluster with the required disk size, I always get the error. When I create a new cluster with the default sizes, then stop it and start it again with the disk size set, the local cluster runs with the configured size.

So the command is crc start -p ~/CRC/pull-secret.txt --disk-size 100
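Since the failure is crc being unable to read `/var/lib/kubelet/config.json` inside the VM, one cheap thing to rule out is a malformed pull secret file on the host. A minimal sketch (the `check_pull_secret` helper is hypothetical, not part of crc):

```shell
# Return 0 if the given pull secret file parses as JSON, non-zero otherwise.
# Uses python3 only because it ships with macOS developer tools; any JSON
# validator would do.
check_pull_secret() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

# Usage, with the reporter's path from this issue:
# check_pull_secret ~/CRC/pull-secret.txt && echo "pull secret is valid JSON"
```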

adrianriobo commented 5 months ago

That is exactly how I ran it, as you can see from the log, and it started as expected:

podmanqe@dev-platform-mac5 misc % crc start --disk-size 100 -p ps 
rmestrum commented 5 months ago

Yes, I saw your command, so that is really strange. I'll clean up my crc environment again, start it with only the disk size as you did, and run it in debug mode.

rmestrum commented 5 months ago

Problem still exists.

This is the log:

crc start -p ~/CRC/pull-secret.txt --disk-size 100 --log-level debug
DEBU CRC version: 2.37.1+36d451                   
DEBU OpenShift version: 4.15.14                   
DEBU Running 'crc start'                          
DEBU Total memory of system is 19327352832 bytes  
DEBU Unable to find out if a new version is available: Error unmarshaling JSON metadata: invalid character '}' looking for beginning of object key string 
DEBU Checking file: /Users/rmestrum/.crc/machines/crc/.crc-exist 
INFO Using bundle path /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64.crcbundle 
INFO Checking if running macOS version >= 13.x    
DEBU Running 'sw_vers -productVersion'            
DEBU sw_vers -productVersion is: 14.5             
INFO Checking if running as non-root              
INFO Checking if crc-admin-helper executable is cached 
INFO Checking if running on a supported CPU architecture 
DEBU GOARCH is arm64 GOOS is darwin               
INFO Checking if crc executable symlink exists    
INFO Checking minimum RAM requirements            
DEBU Total memory of system is 19327352832 bytes  
INFO Check if Podman binary exists in: /Users/rmestrum/.crc/bin/oc 
INFO Checking if running emulated on Apple silicon 
INFO Checking if vfkit is installed               
INFO Checking if old launchd config for tray and/or daemon exists 
INFO Checking if crc daemon plist file is present and loaded 
DEBU Running 'bash -c launchctl list | grep com.redhat.crc.daemon | awk '{print $1}'' 
INFO Checking SSH port availability               
DEBU Checking file: /Users/rmestrum/.crc/machines/crc/.crc-exist 
INFO Loading bundle: crc_vfkit_4.15.14_arm64...   
DEBU Using secret from configuration              
INFO Creating CRC VM for OpenShift 4.15.14...     
DEBU Running pre-create checks...                 
DEBU Creating machine...                          
INFO Generating new SSH key pair...               
INFO Generating new password for the kubeadmin user 
DEBU Created /Users/rmestrum/.crc/machines/crc/.crc-exist 
DEBU Machine successfully created                 
DEBU Checking file: /Users/rmestrum/.crc/machines/crc/.crc-exist 
INFO Starting CRC VM for openshift 4.15.14...     
DEBU Updating CRC VM configuration                
DEBU Running /usr/local/crc/vfkit --cpus 4 --memory 10752 --kernel /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/vmlinuz-5.14.0-284.66.1.el9_2.aarch64 --initrd /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/initramfs-5.14.0-284.66.1.el9_2.aarch64.img --kernel-cmdline console=hvc0 BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-8fbe3374e8c77bfc31d4fdcb11131af464b5f3413b9c344347cd6d92b65f7027/vmlinuz-5.14.0-284.66.1.el9_2.aarch64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/8fbe3374e8c77bfc31d4fdcb11131af464b5f3413b9c344347cd6d92b65f7027/0 root=UUID=2fc971cf-3b2f-4535-b21d-ebce9796b40d rw rootflags=prjquota boot=UUID=17b72149-5288-4c24-af62-cf9989a95bec systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all psi=1 --device virtio-serial,logFilePath=/Users/rmestrum/.crc/machines/crc/vfkit.log --device virtio-fs,sharedDir=/Users/rmestrum,mountTag=dir0 --device virtio-rng --device virtio-blk,path=/Users/rmestrum/.crc/machines/crc/crc.img --device virtio-vsock,port=1024,socketURL=/Users/rmestrum/.crc/tap.sock,listen --timesync vsockPort=1234 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="&{4 10752 /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/vmlinuz-5.14.0-284.66.1.el9_2.aarch64 console=hvc0 BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-8fbe3374e8c77bfc31d4fdcb11131af464b5f3413b9c344347cd6d92b65f7027/vmlinuz-5.14.0-284.66.1.el9_2.aarch64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/8fbe3374e8c77bfc31d4fdcb11131af464b5f3413b9c344347cd6d92b65f7027/0 root=UUID=2fc971cf-3b2f-4535-b21d-ebce9796b40d rw rootflags=prjquota boot=UUID=17b72149-5288-4c24-af62-cf9989a95bec systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all psi=1 /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/initramfs-5.14.0-284.66.1.el9_2.aarch64.img {[] false} vsockPort=1234 [virtio-serial,logFilePath=/Users/rmestrum/.crc/machines/crc/vfkit.log virtio-fs,sharedDir=/Users/rmestrum,mountTag=dir0 virtio-rng virtio-blk,path=/Users/rmestrum/.crc/machines/crc/crc.img virtio-vsock,port=1024,socketURL=/Users/rmestrum/.crc/tap.sock,listen] none://  false}" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="boot parameters: &{VmlinuzPath:/Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/vmlinuz-5.14.0-284.66.1.el9_2.aarch64 KernelCmdLine:console=hvc0 BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-8fbe3374e8c77bfc31d4fdcb11131af464b5f3413b9c344347cd6d92b65f7027/vmlinuz-5.14.0-284.66.1.el9_2.aarch64 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/8fbe3374e8c77bfc31d4fdcb11131af464b5f3413b9c344347cd6d92b65f7027/0 root=UUID=2fc971cf-3b2f-4535-b21d-ebce9796b40d rw rootflags=prjquota boot=UUID=17b72149-5288-4c24-af62-cf9989a95bec systemd.unified_cgroup_hierarchy=1 cgroup_no_v1=all psi=1 InitrdPath:/Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/initramfs-5.14.0-284.66.1.el9_2.aarch64.img}" 
DEBU time="2024-06-11T08:46:48+02:00" level=info  
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="virtual machine parameters:" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="\tvCPUs: 4" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="\tmemory: 10752 MiB" 
DEBU time="2024-06-11T08:46:48+02:00" level=info  
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Adding virtio-serial device (logFile: /Users/rmestrum/.crc/machines/crc/vfkit.log)" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Adding virtio-fs device" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Adding virtio-rng device" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Adding virtio-blk device (imagePath: /Users/rmestrum/.crc/machines/crc/crc.img)" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Adding virtio-vsock device" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="virtual machine is running" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Exposing vsock port 1024 on /Users/rmestrum/.crc/tap.sock (listening)" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="Setting up host/guest time synchronization" 
DEBU time="2024-06-11T08:46:48+02:00" level=info msg="waiting for VM to stop" 
DEBU Waiting for machine to be running, this may take a few minutes... 
DEBU retry loop: attempt 0                        
DEBU Machine is up and running!                   
INFO CRC instance is running with IP 127.0.0.1    
DEBU Waiting until ssh is available               
DEBU retry loop: attempt 0                        
DEBU Running SSH command: exit 0                  
DEBU Using ssh private keys: [/Users/rmestrum/.crc/machines/crc/id_ecdsa /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/id_ecdsa_crc] 
DEBU SSH command results: err: ssh: handshake failed: read tcp 127.0.0.1:61282->127.0.0.1:2222: read: connection reset by peer, output:  
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : ssh: handshake failed: read tcp 127.0.0.1:61282->127.0.0.1:2222: read: connection reset by peer
 - sleeping 1s 
DEBU retry loop: attempt 1                        
DEBU Running SSH command: exit 0                  
DEBU Using ssh private keys: [/Users/rmestrum/.crc/machines/crc/id_ecdsa /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/id_ecdsa_crc] 
DEBU SSH command results: err: <nil>, output:     
INFO CRC VM is running                            
DEBU Using root access: disable core user password 
DEBU Running SSH command: sudo passwd --lock core 
DEBU SSH command results: err: <nil>, output: Locking password for user core.
passwd: Success 
DEBU Running SSH command: cat /home/core/.ssh/authorized_keys 
DEBU SSH command results: err: Process exited with status 1, output:  
INFO Updating authorized keys...                  
DEBU Creating /home/core/.ssh/authorized_keys with permissions 0644 in the CRC VM 
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU Running SSH command: rm /home/core/.ssh/authorized_keys.d/ignition 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Get device id             
DEBU Running SSH command: sudo /usr/sbin/blkid -t TYPE=xfs -o device 
DEBU SSH command results: err: <nil>, output: /dev/vda4 
DEBU Using root access: Growing /dev/vda4 partition 
DEBU Running SSH command: sudo /usr/bin/growpart /dev/vda 4 
DEBU SSH command results: err: <nil>, output: CHANGED: partition=4 start=1050624 old: size=63961055 end=65011678 new: size=208664543 end=209715166 
INFO Resizing /dev/vda4 filesystem                
DEBU Using root access: Remounting /sysroot read/write 
DEBU Running SSH command: sudo mount -o remount,rw /sysroot 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Growing /sysroot filesystem 
DEBU Running SSH command: sudo xfs_growfs /sysroot 
DEBU SSH command results: err: <nil>, output: meta-data=/dev/vda4              isize=512    agcount=42, agsize=193919 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=7995131, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 7995131 to 26083067 
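For what it's worth, the `growpart` numbers in the log above line up with the requested size: partition sizes are printed in 512-byte sectors, so the resize did take effect before the later failure. A quick sanity check (the `sectors_to_gib` helper name is ours):

```shell
# Convert a 512-byte-sector count (as printed by growpart) to whole GiB.
sectors_to_gib() {
  echo $(( $1 * 512 / 1024 / 1024 / 1024 ))
}
sectors_to_gib 63961055    # old partition size from the log → 30 GiB (default disk)
sectors_to_gib 208664543   # new partition size from the log → 99 GiB (~ --disk-size 100)
```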
DEBU Configuring shared directories               
INFO Configuring shared directories               
DEBU Using root access: Making / mutable          
DEBU Running SSH command: sudo chattr -i /        
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Creating /Users/rmestrum  
DEBU Running SSH command: sudo mkdir -p /Users/rmestrum 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Making / immutable again  
DEBU Running SSH command: sudo chattr +i /        
DEBU SSH command results: err: <nil>, output:     
DEBU Mounting tag dir0 at /Users/rmestrum         
DEBU Using root access: Mounting /Users/rmestrum  
DEBU Running SSH command: sudo mount -o context="system_u:object_r:container_file_t:s0" -t virtiofs dir0 /Users/rmestrum 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: make root Podman socket accessible 
DEBU Running SSH command: sudo chmod 777 /run/podman/ /run/podman/podman.sock 
DEBU SSH command results: err: <nil>, output:     
DEBU Running '/usr/local/crc/crc-admin-helper-darwin rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing' 
DEBU Running '/usr/local/crc/crc-admin-helper-darwin add 127.0.0.1 api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing' 
DEBU Running SSH command: systemctl status ovs-configuration.service 
DEBU SSH command results: err: Process exited with status 3, output: ○ ovs-configuration.service - Configures OVS with proper host networking configuration
     Loaded: loaded (/etc/systemd/system/ovs-configuration.service; enabled; preset: disabled)
     Active: inactive (dead) 
DEBU Using root access: Executing systemctl daemon-reload command 
DEBU Running SSH command: sudo systemctl daemon-reload 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Executing systemctl start ovs-configuration.service 
DEBU Running SSH command: sudo systemctl start ovs-configuration.service 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Update resolv.conf file   
DEBU Running SSH command: sudo nmcli con modify ovs-if-br-ex ipv4.dns 192.168.127.1 ipv4.dns-search crc.testing 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Executing systemctl daemon-reload command 
DEBU Running SSH command: sudo systemctl daemon-reload 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Executing systemctl restart NetworkManager.service 
DEBU Running SSH command: sudo systemctl restart NetworkManager.service 
DEBU SSH command results: err: <nil>, output:     
DEBU retry loop: attempt 0                        
DEBU Running SSH command: host -R 3 foo.apps-crc.testing 
DEBU SSH command results: err: <nil>, output: foo.apps-crc.testing has address 192.168.127.2 
INFO Check internal and public DNS query...       
DEBU Running SSH command: curl --head quay.io     
DEBU SSH command results: err: <nil>, output: HTTP/1.1 301 Moved Permanently
Server: awselb/2.0
Date: Tue, 11 Jun 2024 06:46:56 GMT
Content-Type: text/html
Content-Length: 134
Connection: keep-alive
Location: https://quay.io:443/

INFO Check DNS query from host...                 
DEBU api.crc.testing resolved to [127.0.0.1]      
INFO Verifying validity of the kubelet certificates... 
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds 
DEBU SSH command results: err: <nil>, output: 2025-05-28T12:24:15+00:00 
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -enddate | cut -d= -f 2)" --iso-8601=seconds 
DEBU SSH command results: err: <nil>, output: 2025-05-28T12:25:20+00:00 
DEBU Running SSH command: date --date="$(sudo openssl x509 -in /etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt -noout -enddate | cut -d= -f 2)" --iso-8601=seconds 
DEBU SSH command results: err: <nil>, output: 2025-05-28T12:27:38+00:00 
INFO Starting kubelet service                     
DEBU Using root access: Executing systemctl daemon-reload command 
DEBU Running SSH command: sudo systemctl daemon-reload 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Executing systemctl start kubelet 
DEBU Running SSH command: sudo systemctl start kubelet 
DEBU SSH command results: err: <nil>, output:     
INFO Waiting for kube-apiserver availability... [takes around 2min] 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: Process exited with status 1, output:  
DEBU E0611 06:46:58.674014    3755 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:58.674538    3755 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:58.676704    3755 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:58.677282    3755 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:58.678810    3755 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port? 
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1
 - sleeping 1s 
DEBU retry loop: attempt 1                        
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: Process exited with status 1, output:  
DEBU E0611 06:46:59.750359    3966 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:59.750927    3966 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:59.752728    3966 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:59.753226    3966 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
E0611 06:46:59.754880    3966 memcache.go:265] couldn't get current server API group list: Get "https://api.crc.testing:6443/api?timeout=32s": dial tcp 192.168.127.2:6443: connect: connection refused
The connection to the server api.crc.testing:6443 was refused - did you specify the right host or port? 
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 1
 - sleeping 1s 
DEBU retry loop: attempt 2                        
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: Process exited with status 124, output:  
DEBU                                              
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124
 - sleeping 1s 
DEBU retry loop: attempt 3                        
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: Process exited with status 124, output:  
DEBU                                              
DEBU error: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err     : Process exited with status 124
 - sleeping 1s 
DEBU retry loop: attempt 4                        
DEBU Running SSH command: timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: NAME   STATUS     ROLES                         AGE   VERSION
crc    NotReady   control-plane,master,worker   14d   v1.28.9+416ecaf 
DEBU NAME   STATUS     ROLES                         AGE   VERSION
crc    NotReady   control-plane,master,worker   14d   v1.28.9+416ecaf 
DEBU Waiting for availability of resource type 'configmap' 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: timeout 5s oc get configmap --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: NAME                       DATA   AGE
kube-root-ca.crt           1      14d
openshift-service-ca.crt   1      14d 
DEBU NAME                       DATA   AGE
kube-root-ca.crt           1      14d
openshift-service-ca.crt   1      14d 
DEBU Running SSH command: timeout 30s oc delete -n openshift-machine-config-operator configmap machine-config-controller --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: Process exited with status 1, output:  
DEBU Waiting for availability of resource type 'lease' 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: timeout 5s oc get lease --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output:     
DEBU                                              
DEBU Running SSH command: timeout 30s oc delete -A lease --all --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: lease.coordination.k8s.io "crc" deleted
lease.coordination.k8s.io "apiserver-yz2phfmetulqy4pemhbirhdnii" deleted
lease.coordination.k8s.io "cluster-policy-controller-lock" deleted
lease.coordination.k8s.io "kube-controller-manager" deleted
lease.coordination.k8s.io "kube-scheduler" deleted
lease.coordination.k8s.io "openshift-apiserver-operator-lock" deleted
lease.coordination.k8s.io "cluster-authentication-operator-lock" deleted
lease.coordination.k8s.io "cluster-machine-approver-leader" deleted
lease.coordination.k8s.io "version" deleted
lease.coordination.k8s.io "config-operator-lock" deleted
lease.coordination.k8s.io "console-operator-lock" deleted
lease.coordination.k8s.io "openshift-controller-manager-operator-lock" deleted
lease.coordination.k8s.io "openshift-master-controllers" deleted
lease.coordination.k8s.io "openshift-cluster-etcd-operator-lock" deleted
lease.coordination.k8s.io "openshift-master-controllers" deleted
lease.coordination.k8s.io "kube-apiserver-operator-lock" deleted
lease.coordination.k8s.io "cert-regeneration-controller-lock" deleted
lease.coordination.k8s.io "kube-controller-manager-operator-lock" deleted
lease.coordination.k8s.io "cert-recovery-controller-lock" deleted
lease.coordination.k8s.io "cluster-policy-controller-lock" deleted
lease.coordination.k8s.io "openshift-cluster-kube-scheduler-operator-lock" deleted
lease.coordination.k8s.io "cert-recovery-controller-lock" deleted
lease.coordination.k8s.io "kube-scheduler" deleted
lease.coordination.k8s.io "openshift-kube-storage-version-migrator-operator-lock" deleted
lease.coordination.k8s.io "control-plane-machine-set-leader" deleted
lease.coordination.k8s.io "machine-api-operator" deleted
lease.coordination.k8s.io "machine-config" deleted
lease.coordination.k8s.io "machine-config-controller" deleted
lease.coordination.k8s.io "marketplace-operator-lock" deleted
lease.coordination.k8s.io "ovnkube-identity" deleted
lease.coordination.k8s.io "network-operator-lock" deleted
lease.coordination.k8s.io "packageserver-controller-lock" deleted
lease.coordination.k8s.io "ovn-kubernetes-master" deleted
lease.coordination.k8s.io "openshift-route-controllers" deleted
lease.coordination.k8s.io "service-ca-operator-lock" deleted
lease.coordination.k8s.io "service-ca-controller-lock" deleted 
DEBU Waiting for availability of resource type 'secret' 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: timeout 5s oc get secret --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: NAME                       TYPE                                  DATA   AGE
builder-dockercfg-gclg9    kubernetes.io/dockercfg               1      13d
builder-dockercfg-zg6qx    kubernetes.io/dockercfg               1      14d
builder-token-95hfd        kubernetes.io/service-account-token   4      14d
builder-token-9vrkh        kubernetes.io/service-account-token   4      13d
default-dockercfg-j9vjn    kubernetes.io/dockercfg               1      13d
default-token-6ffpx        kubernetes.io/service-account-token   4      13d
deployer-dockercfg-6s6rq   kubernetes.io/dockercfg               1      13d
deployer-token-s48ld       kubernetes.io/service-account-token   4      13d 
DEBU NAME                       TYPE                                  DATA   AGE
builder-dockercfg-gclg9    kubernetes.io/dockercfg               1      13d
builder-dockercfg-zg6qx    kubernetes.io/dockercfg               1      14d
builder-token-95hfd        kubernetes.io/service-account-token   4      14d
builder-token-9vrkh        kubernetes.io/service-account-token   4      13d
default-dockercfg-j9vjn    kubernetes.io/dockercfg               1      13d
default-token-6ffpx        kubernetes.io/service-account-token   4      13d
deployer-dockercfg-6s6rq   kubernetes.io/dockercfg               1      13d
deployer-token-s48ld       kubernetes.io/service-account-token   4      13d 
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
INFO Adding user's pull secret to the cluster...  
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU Waiting for availability of resource type 'machineconfigs' 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: timeout 5s oc get machineconfigs --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
00-worker                                          10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-master-container-runtime                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-master-kubelet                                  10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-worker-container-runtime                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-worker-kubelet                                  10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
97-master-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
97-worker-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
98-master-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
98-worker-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
99-master-generated-registries                     10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
99-master-ssh                                                                                 3.2.0             14d
99-node-sizing-for-crc                                                                        3.2.0             14d
99-openshift-machineconfig-master-dummy-networks                                              3.2.0             14d
99-worker-generated-registries                     10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
99-worker-ssh                                                                                 3.2.0             14d
rendered-master-c37acf237383db3a8d1b19e574d9917f   10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             13d
rendered-worker-9de95df0876dc027f4d977da7704c468   10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             13d 
DEBU NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
00-worker                                          10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-master-container-runtime                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-master-kubelet                                  10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-worker-container-runtime                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
01-worker-kubelet                                  10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
97-master-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
97-worker-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
98-master-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
98-worker-generated-kubelet                        10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
99-master-generated-registries                     10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
99-master-ssh                                                                                 3.2.0             14d
99-node-sizing-for-crc                                                                        3.2.0             14d
99-openshift-machineconfig-master-dummy-networks                                              3.2.0             14d
99-worker-generated-registries                     10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             14d
99-worker-ssh                                                                                 3.2.0             14d
rendered-master-c37acf237383db3a8d1b19e574d9917f   10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             13d
rendered-worker-9de95df0876dc027f4d977da7704c468   10694c7cc870f32bc7bf1888d1b8199f13119ff4   3.4.0             13d 
DEBU Running SSH command: timeout 30s oc get machineconfigs 99-master-ssh -o jsonpath='{.spec.config.passwd.users[0].sshAuthorizedKeys[0]}' --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAFJj8ezHXTbqZDJkARaBDLnyfX178Kr/G6/jSLRVWunGd+Ziboevn2l5V0CyHJ9B7bwk0hSrDeYBV6Q1aIskw19UgAYfD2Wpi8NaOZCegDmx9LjnEFZM61BrGMK5rpILfRfJ/Ky/EdIrrafwopJHhk469wKPjAzCXv6FSzX4vK3rKO/Fw== core 
INFO Updating SSH key to machine config resource... 
DEBU Running SSH command: timeout 30s oc patch machineconfig 99-master-ssh -p '{"spec": {"config": {"passwd": {"users": [{"name": "core", "sshAuthorizedKeys": ["ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAEaMgVKr3QYfbEnDB1ynbtY+vq9gKRJkauTauu7Fyf1bdfJBkK0sSIhuG1b6MzzIIxyZaexZRLihqqHuHh/YWUicgBWsPPg3EHap7Lz9uNVjgrQoadhI5xQFF5Pi8u7zzH5ERbNLyCL+++XGoilkDIUmoxFUNFDbsAKkCvtscVsVVDS1Q=="]}]}}}}' --type merge --context admin --cluster crc --kubeconfig /opt/kubeconfig 
DEBU SSH command results: err: <nil>, output: machineconfig.machineconfiguration.openshift.io/99-master-ssh patched 
INFO Waiting until the user's pull secret is written to the instance disk... 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 1                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 2                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 3                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 4                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 5                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 6                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 7                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 8                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 9                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 10                       
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 11                       
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 12                       
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 13                       
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 14                       
DEBU Running SSH command: <hidden>                
DEBU Failed to create new ssh session: ssh: unexpected packet in response to channel open: <nil> 
DEBU SSH command failed                           
DEBU non-retriable error: failed to read /var/lib/kubelet/config.json file: ssh command error:
command : sudo cat /var/lib/kubelet/config.json
err     : ssh: unexpected packet in response to channel open: <nil>
:  
Failed to update pull secret on the disk: Temporary error: pull secret not updated to disk (x14)
failed to read /var/lib/kubelet/config.json file: ssh command error:
command : sudo cat /var/lib/kubelet/config.json
err     : ssh: unexpected packet in response to channel open: <nil>
: 
cfergeau commented 5 months ago
DEBU Failed to create new ssh session: ssh: unexpected packet in response to channel open: <nil> 
DEBU SSH command failed    

This is very unexpected. Is it always failing like this? Have you tried `crc cleanup` to restart from an env as clean as possible?

rmestrum commented 5 months ago

I've tried `crc cleanup` several times.

I just did a `crc cleanup` followed by a `crc setup` and then `crc start`. The error is the same:

INFO Waiting until the user's pull secret is written to the instance disk... 
DEBU retry loop: attempt 0                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 1                        
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU error: Temporary error: pull secret not updated to disk - sleeping 2s 
DEBU retry loop: attempt 2                        
DEBU Running SSH command: <hidden>                
DEBU Failed to create new ssh session: ssh: unexpected packet in response to channel open: <nil> 
DEBU SSH command failed                           
DEBU non-retriable error: failed to read /var/lib/kubelet/config.json file: ssh command error:
command : sudo cat /var/lib/kubelet/config.json
err     : ssh: unexpected packet in response to channel open: <nil>
:  
Failed to update pull secret on the disk: Temporary error: pull secret not updated to disk (x2)
failed to read /var/lib/kubelet/config.json file: ssh command error:
command : sudo cat /var/lib/kubelet/config.json
err     : ssh: unexpected packet in response to channel open: <nil>
: 
cfergeau commented 5 months ago

After this failure, is vfkit still running? Is the crc daemon still running? You can check with `ps aux | grep vfkit` and `ps aux | grep crc`. Is it possible to ssh into the VM after this failure? https://github.com/crc-org/crc/wiki/Debugging-guide#entering-the-vm
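For reference, those two process checks can be combined into a quick shell snippet (a sketch only; it assumes the process names are literally `vfkit` and `crc` as mentioned above):

```shell
#!/bin/sh
# Report whether the vfkit and crc helper processes are still alive.
# Assumes the process names match "vfkit" and "crc" exactly.
for proc in vfkit crc; do
  if pgrep -x "$proc" >/dev/null 2>&1; then
    echo "$proc: running (pids: $(pgrep -x "$proc" | tr '\n' ' '))"
  else
    echo "$proc: not running"
  fi
done
```

If both report running but ssh into the VM still fails, the debugging guide linked above walks through entering the VM directly.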

rmestrum commented 5 months ago

Yes, these processes are still running:

(screenshot: `ps` output showing the vfkit and crc daemon processes)

I can't access the VM using ssh:

❯ ssh -i ~/.crc/machines/crc/id_ecdsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 2222 core@127.0.0.1 -v
OpenSSH_9.6p1, LibreSSL 3.3.6
debug1: Reading configuration data /Users/rmestrum/.ssh/config
debug1: /Users/rmestrum/.ssh/config line 12: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files
debug1: /etc/ssh/ssh_config line 54: Applying options for *
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: auto-mux: Trying existing master at '/tmp/23c160d8ba11e14a11e1c2d6b737ef337a07fe31'
debug1: Control socket "/tmp/23c160d8ba11e14a11e1c2d6b737ef337a07fe31" does not exist
debug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.
debug1: Connection established.
debug1: identity file /Users/rmestrum/.crc/machines/crc/id_ecdsa type 2
debug1: identity file /Users/rmestrum/.crc/machines/crc/id_ecdsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_9.6
debug1: Remote protocol version 2.0, remote software version OpenSSH_8.7
debug1: compat_banner: match: OpenSSH_8.7 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 127.0.0.1:2222 as 'core'
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 127.0.0.1 port 2222
cfergeau commented 5 months ago

Do you get the same failure if you use the key from ~/.crc/cache/crc_vfkit_4.15.14.crcbundle/? (not 100% sure about the path :)

rmestrum commented 5 months ago

Same problem:

❯ ssh -i /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/id_ecdsa_crc -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 2222 core@127.0.0.1 -v
OpenSSH_9.6p1, LibreSSL 3.3.6
debug1: Reading configuration data /Users/rmestrum/.ssh/config
debug1: /Users/rmestrum/.ssh/config line 12: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files
debug1: /etc/ssh/ssh_config line 54: Applying options for *
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: auto-mux: Trying existing master at '/tmp/23c160d8ba11e14a11e1c2d6b737ef337a07fe31'
debug1: Control socket "/tmp/23c160d8ba11e14a11e1c2d6b737ef337a07fe31" does not exist
debug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.
debug1: Connection established.
debug1: identity file /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/id_ecdsa_crc type 2
debug1: identity file /Users/rmestrum/.crc/cache/crc_vfkit_4.15.14_arm64/id_ecdsa_crc-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_9.6
debug1: Remote protocol version 2.0, remote software version OpenSSH_8.7
debug1: compat_banner: match: OpenSSH_8.7 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 127.0.0.1:2222 as 'core'
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 127.0.0.1 port 2222
gbraad commented 1 day ago

@vyasgun can you try to repro this? I have seen an internal message that mentioned the same thing.