crc-org / crc

CRC is a tool to help you run containers. It manages a local OpenShift 4.x cluster, Microshift, or a Podman VM optimized for testing and development purposes.
https://crc.dev
Apache License 2.0

[BUG] crc VM does not start because of encrypted home folder and SELinux #3915

Open VGerris opened 10 months ago

VGerris commented 10 months ago

General information

CRC status

$ crc status --log-level debug
DEBU CRC version: 2.29.0+da5f55                   
DEBU OpenShift version: 4.14.1                    
DEBU Podman version: 4.4.4                        
DEBU Running 'crc status'                         
CRC VM:          Running
OpenShift:       Unreachable (v4.14.0-0.okd-2023-10-28-073550)
RAM Usage:       262MB of 16.36GB
Disk Usage:      19.32GB of 32.68GB (Inside the CRC VM)
Cache Usage:     25.01GB
Cache Directory: /home/vgerris/.crc/cache

CRC config

- consent-telemetry                     : no
- memory                                : 16000
- preset                                : okd

Host Operating System

NAME="Fedora Linux"
VERSION="38 (Workstation Edition)"
ID=fedora
VERSION_ID=38
VERSION_CODENAME=""
PLATFORM_ID="platform:f38"
PRETTY_NAME="Fedora Linux 38 (Workstation Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:38"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=38
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=38
SUPPORT_END=2024-05-14
VARIANT="Workstation Edition"
VARIANT_ID=workstation

Steps to reproduce

  1. Clean everything, download the latest crc, run setup, and start it:

$ crc delete -f
$ crc cleanup
$ rm -r ~/.crc
$ crc setup
$ crc start --log-level debug

A snippet of the log is included further on; it basically ends with:

    Error running post start: exit status 1

Expected

Successful start

Actual

Exit status 1; the machine is up, but the post-start setup does not finish.

Logs

INFO Checking if systemd-networkd is running      
DEBU Checking if systemd-networkd.service is running 
DEBU Running 'systemctl status systemd-networkd.service' 
DEBU Command failed: exit status 3                
DEBU stdout: * systemd-networkd.service - Network Configuration
     Loaded: loaded (/usr/lib/systemd/system/systemd-networkd.service; disabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/service.d
             `-10-timeout-abort.conf
     Active: inactive (dead)
TriggeredBy: * systemd-networkd.socket
       Docs: man:systemd-networkd.service(8)
             man:org.freedesktop.network1(5) 
DEBU stderr:                                      
DEBU systemd-networkd.service is not running      
INFO Checking if NetworkManager is installed      
DEBU Checking if 'nmcli' is available             
DEBU 'nmcli' was found in /usr/bin/nmcli          
INFO Checking if NetworkManager service is running 
DEBU Checking if NetworkManager.service is running 
DEBU Running 'systemctl status NetworkManager.service' 
DEBU NetworkManager.service is already running    
INFO Checking if dnsmasq configurations file exist for NetworkManager 
INFO Checking if the systemd-resolved service is running 
DEBU Checking if systemd-resolved.service is running 
DEBU Running 'systemctl status systemd-resolved.service' 
DEBU systemd-resolved.service is already running  
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists 
DEBU Checking NetworkManager dispatcher file for crc network 
DEBU Dispatcher file has the expected content     
INFO Checking if libvirt 'crc' network is available 
DEBU Checking if libvirt 'crc' network exists     
DEBU Running 'virsh --connect qemu:///system net-info crc' 
DEBU Checking if libvirt 'crc' definition is up to date 
DEBU Running 'virsh --connect qemu:///system net-dumpxml --inactive crc' 
DEBU libvirt 'crc' network has the expected value 
INFO Checking if libvirt 'crc' network is active  
DEBU Checking if libvirt 'crc' network is active  
DEBU Running 'virsh --connect qemu:///system net-info crc' 
DEBU libvirt 'crc' network is already active      
DEBU Checking file: /home/vgerris/.crc/machines/crc/.crc-exist 
INFO Loading bundle: crc_okd_libvirt_4.14.0-0.okd-2023-10-28-073550_amd64... 
DEBU Checking file: /home/vgerris/.crc/machines/crc/.crc-exist 
DEBU Found binary path at /home/vgerris/.crc/bin/crc-driver-libvirt 
DEBU Launching plugin server for driver libvirt   
DEBU Plugin server listening at address 127.0.0.1:42335 
DEBU () Calling .GetVersion                       
DEBU Using API Version 1                          
DEBU () Calling .SetConfigRaw                     
DEBU () Calling .GetMachineName                   
DEBU (crc) Calling .GetBundleName                 
DEBU (crc) Calling .GetState                      
DEBU (crc) DBG | time="2023-11-13T15:17:18+01:00" level=debug msg="Getting current state..." 
DEBU (crc) DBG | time="2023-11-13T15:17:18+01:00" level=debug msg="Fetching VM..." 
INFO Starting CRC VM for okd 4.14.0-0.okd-2023-10-28-073550... 
DEBU Updating CRC VM configuration                
DEBU (crc) Calling .GetConfigRaw                  
DEBU (crc) Calling .Start                         
DEBU (crc) DBG | time="2023-11-13T15:17:18+01:00" level=debug msg="Starting VM crc" 
DEBU (crc) DBG | time="2023-11-13T15:17:18+01:00" level=debug msg="Validating network" 
DEBU (crc) DBG | time="2023-11-13T15:17:18+01:00" level=debug msg="Validating storage pool" 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="GetIP called for crc" 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="Getting current state..." 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="IP address: 192.168.130.11" 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=info msg="Found IP for machine: 192.168.130.11" 
DEBU (crc) Calling .GetConfigRaw                  
DEBU Waiting for machine to be running, this may take a few minutes... 
DEBU retry loop: attempt 0                        
DEBU (crc) Calling .GetState                      
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="Getting current state..." 
DEBU Machine is up and running!                   
DEBU (crc) Calling .GetState                      
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="Getting current state..." 
DEBU (crc) Calling .GetIP                         
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="GetIP called for crc" 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="Getting current state..." 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="IP address: 192.168.130.11" 
INFO CRC instance is running with IP 192.168.130.11 
DEBU (crc) Calling .GetIP                         
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="GetIP called for crc" 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="Getting current state..." 
DEBU (crc) DBG | time="2023-11-13T15:17:24+01:00" level=debug msg="IP address: 192.168.130.11" 
DEBU Waiting until ssh is available               
DEBU retry loop: attempt 0                        
DEBU Running SSH command: exit 0                  
DEBU Using ssh private keys: [/home/vgerris/.crc/machines/crc/id_ecdsa /home/vgerris/.crc/cache/crc_okd_libvirt_4.14.0-0.okd-2023-10-28-073550_amd64/id_ecdsa_crc] 
DEBU SSH command results: err: dial tcp 192.168.130.11:22: connect: no route to host, output:  
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : dial tcp 192.168.130.11:22: connect: no route to host
 - sleeping 1s 
DEBU retry loop: attempt 1                        
DEBU Running SSH command: exit 0                  
DEBU Using ssh private keys: [/home/vgerris/.crc/machines/crc/id_ecdsa /home/vgerris/.crc/cache/crc_okd_libvirt_4.14.0-0.okd-2023-10-28-073550_amd64/id_ecdsa_crc] 
DEBU SSH command results: err: dial tcp 192.168.130.11:22: connect: no route to host, output:  
DEBU error: Temporary error: ssh command error:
command : exit 0
err     : dial tcp 192.168.130.11:22: connect: no route to host
 - sleeping 1s 
DEBU retry loop: attempt 2                        
DEBU Running SSH command: exit 0                  
DEBU Using ssh private keys: [/home/vgerris/.crc/machines/crc/id_ecdsa /home/vgerris/.crc/cache/crc_okd_libvirt_4.14.0-0.okd-2023-10-28-073550_amd64/id_ecdsa_crc] 
DEBU SSH command results: err: <nil>, output:     
INFO CRC VM is running                            
DEBU Using root access: disable core user password 
DEBU Running SSH command: sudo passwd --lock core 
DEBU SSH command results: err: <nil>, output: Locking password for user core.
passwd: Success 
DEBU Running SSH command: cat /home/core/.ssh/authorized_keys 
DEBU SSH command results: err: <nil>, output: ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBAGLYmzr0/3qP8LRklbS/Q8WJIKp9NAg5wZGDQAFwS4v2RyKcJGa4mK2HqMaZPAEDnRh0KLVT8YZ2++0EWzmP9yTsAEiuk6JGO/VIrn6wycpCWfJuzefXiQ1yM1ANqogf/cJhTc6LPTVrxAtydmep3kiUWGkUM2FSVh++3FQxa5CdtTBoA== 
DEBU Using root access: Get device id             
DEBU Running SSH command: sudo /usr/sbin/blkid -t TYPE=xfs -o device 
DEBU SSH command results: err: <nil>, output: /dev/vda4 
DEBU Using root access: Growing /dev/vda4 partition 
DEBU Running SSH command: sudo /usr/bin/growpart /dev/vda 4 
DEBU SSH command results: err: Process exited with status 1, output: NOCHANGE: partition 4 is size 63961055. it cannot be grown 
DEBU No free space after /dev/vda4, nothing to do 
DEBU Configuring shared directories               
DEBU (crc) Calling .GetSharedDirs                 
DEBU (crc) DBG | time="2023-11-13T15:17:36+01:00" level=debug msg="Found x86_64 hypervisor with 'hvm' capabilities" 
DEBU (crc) DBG | time="2023-11-13T15:17:36+01:00" level=debug msg="Found q35 machine type" 
INFO Configuring shared directories               
DEBU Using root access: Creating /home/vgerris    
DEBU Running SSH command: sudo mkdir -p /home/vgerris 
DEBU SSH command results: err: <nil>, output:     
DEBU Mounting tag dir0 at /home/vgerris           
DEBU Using root access: Mounting /home/vgerris    
DEBU Running SSH command: sudo mount -o context="system_u:object_r:container_file_t:s0" -t virtiofs dir0 /home/vgerris 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: remove /etc/containers/containers.conf to disable machine_enabled 
DEBU Running SSH command: sudo rm -fr /etc/containers/containers.conf 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: remove /etc/containers/podman-machine marker file to disable machine_enabled 
DEBU Running SSH command: sudo rm -fr /etc/containers/podman-machine 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: make root Podman socket accessible 
DEBU Running SSH command: sudo chmod 777 /run/podman/ /run/podman/podman.sock 
DEBU SSH command results: err: <nil>, output:     
DEBU Creating /var/srv/dnsmasq.conf with permissions 0644 in the CRC VM 
DEBU Running SSH command: <hidden>                
DEBU SSH command succeeded                        
DEBU Running SSH command: systemctl status crc-dnsmasq.service 
DEBU SSH command results: err: <nil>, output: ● crc-dnsmasq.service - Podman container-04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173.service
     Loaded: loaded (/etc/systemd/system/crc-dnsmasq.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: active (running) since Mon 2023-11-13 14:17:36 UTC; 234ms ago
       Docs: man:podman-generate-systemd(1)
    Process: 1580 ExecStart=/usr/bin/podman start 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173 (code=exited, status=0/SUCCESS)
   Main PID: 1884 (conmon)
      Tasks: 1 (limit: 18611)
     Memory: 38.5M
        CPU: 201ms
     CGroup: /system.slice/crc-dnsmasq.service
             └─1884 /usr/bin/conmon --api-version 1 -c 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173 -u 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173/userdata -p /var/run/containers/storage/overlay-containers/04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173/userdata/pidfile -n crc-dnsmasq --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level warning --syslog --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/var/run/containers/storage/overlay-containers/04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173/userdata/oci-log --conmon-pidfile /var/run/containers/storage/overlay-containers/04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg "" --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.skip_mount_home=true --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173

Nov 13 14:17:35 crc-d5q9b-master-0 systemd[1]: Starting crc-dnsmasq.service - Podman container-04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173.service...
Nov 13 14:17:35 crc-d5q9b-master-0 podman[1580]: 2023-11-13 14:17:35.99841121 +0000 UTC m=+0.340824438 system refresh
Nov 13 14:17:36 crc-d5q9b-master-0 podman[1580]: 2023-11-13 14:17:36.303146273 +0000 UTC m=+0.645559462 container init 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173 (image=quay.io/crcont/dnsmasq:latest, name=crc-dnsmasq, vendor=Red Hat, Inc., io.openshift.tags=crc, io.k8s.description=This image runs in the crc VM and is used to provide DNS resolution for *.crc.testing/*.apps-crc.testing, maintainer=Red Hat, Inc., io.openshift.build.commit.id=f08cee3389569256aa288c33bad701a2704ce9db, name=crc/dnsmasq, distribution-scope=public, io.openshift.build.source-location=https://github.com/openshift/images, version=1, architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, com.redhat.build-host=cpt-1001.osbs.prod.upshift.rdu2.redhat.com, release=31, description=This image runs in the crc VM and is used to provide DNS resolution for *.crc.testing/*.apps-crc.testing, url=https://access.redhat.com/containers/#/registry.access.redhat.com/crc/dnsmasq/images/1-31, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.maintainer.component=Release, io.openshift.maintainer.project=OCPBUGS, io.openshift.build.commit.url=https://github.com/openshift/images/commit/f08cee3389569256aa288c33bad701a2704ce9db, io.openshift.expose-services=, io.k8s.display-name=crc' dnsmasq image, build-date=2023-10-16T10:13:09, License=GPLv2+, com.redhat.component=crc-dnsmasq-container, summary=crc dnsmasq image, vcs-ref=eee145f0410026deebd077785f1667303082cab3)
Nov 13 14:17:36 crc-d5q9b-master-0 podman[1580]: 2023-11-13 14:17:36.306066479 +0000 UTC m=+0.648479668 container start 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173 (image=quay.io/crcont/dnsmasq:latest, name=crc-dnsmasq, io.k8s.display-name=crc' dnsmasq image, License=GPLv2+, com.redhat.component=crc-dnsmasq-container, io.k8s.description=This image runs in the crc VM and is used to provide DNS resolution for *.crc.testing/*.apps-crc.testing, maintainer=Red Hat, Inc., io.openshift.maintainer.project=OCPBUGS, name=crc/dnsmasq, io.openshift.build.commit.id=f08cee3389569256aa288c33bad701a2704ce9db, io.openshift.tags=crc, io.openshift.build.source-location=https://github.com/openshift/images, io.buildah.version=1.29.0, vcs-type=git, io.openshift.build.commit.url=https://github.com/openshift/images/commit/f08cee3389569256aa288c33bad701a2704ce9db, version=1, architecture=x86_64, io.openshift.maintainer.component=Release, vcs-ref=eee145f0410026deebd077785f1667303082cab3, release=31, io.openshift.expose-services=, description=This image runs in the crc VM and is used to provide DNS resolution for *.crc.testing/*.apps-crc.testing, build-date=2023-10-16T10:13:09, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.build-host=cpt-1001.osbs.prod.upshift.rdu2.redhat.com, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/crc/dnsmasq/images/1-31, summary=crc dnsmasq image)
Nov 13 14:17:36 crc-d5q9b-master-0 podman[1580]: 04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173
Nov 13 14:17:36 crc-d5q9b-master-0 systemd[1]: Started crc-dnsmasq.service - Podman container-04f32c294cd02e2dfd0f9a78e9d592a7094486783bf99b6ac0debc6271199173.service. 
DEBU Using root access: Executing systemctl daemon-reload command 
DEBU Running SSH command: sudo systemctl daemon-reload 
DEBU SSH command results: err: <nil>, output:     
DEBU Using root access: Executing systemctl start crc-dnsmasq.service 
DEBU Running SSH command: sudo systemctl start crc-dnsmasq.service 
DEBU SSH command results: err: <nil>, output:     
DEBU Running '/home/vgerris/.crc/bin/crc-admin-helper-linux rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing' 
DEBU Command failed: exit status 1                
DEBU stdout:                                      
DEBU stderr: Error: host file not writable, try running with elevated privileges
host file not writable, try running with elevated privileges 
DEBU Making call to close driver server           
DEBU (crc) Calling .Close                         
DEBU Successfully made call to close driver server 
DEBU Making call to close connection to plugin binary 
DEBU (crc) DBG | time="2023-11-13T15:17:36+01:00" level=debug msg="Closing plugin on server side" 
Error running post start: exit status 1

Before gathering the logs, try the following to see if it fixes your issue:

$ crc delete -f
$ crc cleanup
$ crc setup
$ crc start --log-level debug

Please consider posting the output of crc start --log-level debug on http://gist.github.com/ and adding the link to the issue.

VGerris commented 10 months ago

Note: my home folder is encrypted using ecryptfs. I had to apply a policy to overcome another issue that I reported here: https://bugzilla.redhat.com/show_bug.cgi?id=2249481
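For reference, a quick way to confirm how the home directory is mounted and which options (such as nosuid) are in effect; a minimal sketch assuming util-linux's findmnt is available, with my home directory as the path:

# Show the filesystem type and mount options of the mount backing the home directory
$ findmnt -T /home/vgerris -o TARGET,FSTYPE,OPTIONS
# List any ecryptfs mounts directly
$ mount | grep ecryptfs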

gbraad commented 10 months ago

DEBU stderr: Error: host file not writable, try running with elevated privileges

Can the user add entries to the hosts file, or is sudo needed?

VGerris commented 10 months ago

Hi, what do you mean? That log output applies to actions within the VM, correct? Shouldn't that 'just work'? Can you write down the precise commands I could run to test anything? Thank you.

praveenkumar commented 10 months ago

Hi, what do you mean? That log output applies to actions within the VM, correct? Shouldn't that 'just work'? Can you write down the precise commands I could run to test anything? Thank you.

Try opening this file, adding some content, and saving it. Are you able to save it, or do you get a permission issue?

sudo vi /etc/hosts
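Alternatively, a non-interactive variant of the same check (a sketch; the marker line is a hypothetical name, and lsattr from e2fsprogs only matters if an immutable flag is suspected):

# Append a throwaway marker line as root, then remove it again
$ echo '# crc-write-test' | sudo tee -a /etc/hosts >/dev/null && sudo sed -i '/# crc-write-test/d' /etc/hosts && echo "hosts file is writable by root"
# Check for an immutable attribute on the file
$ lsattr /etc/hosts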
VGerris commented 10 months ago

Hi, I am perfectly capable of changing and saving my hosts file with sudo; no saving issues.

praveenkumar commented 10 months ago

@VGerris can you try the following without sudo?

$ ls -lZ  /home/vgerris/.crc/bin/crc-admin-helper-linux
$ /home/vgerris/.crc/bin/crc-admin-helper-linux rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing
VGerris commented 10 months ago

Hi, sure here we go:

$ ls -lZ  /home/vgerris/.crc/bin/crc-admin-helper-linux
-r-s--x---. 1 root vgerris system_u:object_r:ecryptfs_t:s0 5212680 Nov 13 10:31 /home/vgerris/.crc/bin/crc-admin-helper-linux

$ /home/vgerris/.crc/bin/crc-admin-helper-linux rm api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing downloads-openshift-console.apps-crc.testing canary-openshift-ingress-canary.apps-crc.testing default-route-openshift-image-registry.apps-crc.testing
Error: host file not writable, try running with elevated privileges
host file not writable, try running with elevated privileges

That seems to be an issue. I tried restorecon on it, but I get:

$ restorecon -Rv /home/vgerris/.crc/bin/crc-admin-helper-linux
restorecon: Could not set context for /home/vgerris/.crc/bin/crc-admin-helper-linux:  Operation not supported

I tried setting 755 as in https://github.com/crc-org/admin-helper/blob/master/crc-admin-helper.spec.in, but that didn't help.

praveenkumar commented 10 months ago

@VGerris is it perhaps because of the system_u:object_r:ecryptfs_t context? The permissions look as expected, but the context is different from the usual one (in my case it is unconfined_u:object_r:user_home_t), so maybe there is some restriction on your home directory. Try with sudo as well; I am surprised that, with the file owned by root and the suid bit set, it is not able to write the file. Also run restorecon with sudo.
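To see which label the policy actually expects for that path, something along these lines could help (a sketch assuming matchpathcon from libselinux-utils is installed; on ecryptfs an explicit relabel will likely fail with the same "Operation not supported" error, since the filesystem does not seem to support setting the security xattr):

# What the SELinux policy expects for this path
$ matchpathcon /home/vgerris/.crc/bin/crc-admin-helper-linux
# What the file currently has
$ ls -lZ /home/vgerris/.crc/bin/crc-admin-helper-linux
# Try forcing the user_home_t type mentioned above (expected to fail on ecryptfs)
$ sudo chcon -t user_home_t /home/vgerris/.crc/bin/crc-admin-helper-linux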

praveenkumar commented 10 months ago

Also, https://github.com/crc-org/crc/issues/2119 looks like a duplicate of this.

VGerris commented 10 months ago

Can you elaborate? Running restorecon with sudo, I tried:

$ sudo restorecon -Rv /home/vgerris/.crc/bin/crc-admin-helper-linux
[sudo] password for vgerris: 
restorecon: Could not set context for /home/vgerris/.crc/bin/crc-admin-helper-linux:  Operation not supported

Same result as root. This one does indeed seem to be a duplicate, and I also have nosuid set in the mount options.

Changing the rights to 666 does not work for me, and there seems to be a check on some of the permissions that requires running crc setup, which restores the sticky bit and other rights.

Is there any issue with using sudo at setup, so that the helper is simply run that way, or is it also used at start and on other occasions? Otherwise, placing it outside the home dir seems to be the only option.

This workaround seems to work: sudo mount -o remount,suid /home/vgerris. So moving the helper outside the home directory sounds like a good idea, as @cfergeau suggests here: https://github.com/crc-org/crc/issues/2119#issuecomment-1558795768

gbraad commented 10 months ago

This is a very specific SELinux + encrypted home folder setup. We can try to reproduce it, but this is not a scenario we see often.

The solution could be to move crc-admin-helper to /usr/local/sbin or similar.
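As a manual interim workaround along those lines, something like the sketch below might work: install a copy of the helper on a filesystem that allows suid and point the expected path at it. This is untested, mirrors the root:vgerris ownership and 4510 mode shown in the ls -lZ output above, and assumes crc follows the symlink and that crc setup does not simply replace it:

# Copy the helper to a filesystem that is not mounted nosuid, keeping the same ownership and suid mode
$ sudo install -o root -g vgerris -m 4510 /home/vgerris/.crc/bin/crc-admin-helper-linux /usr/local/sbin/crc-admin-helper-linux
# Replace the original with a symlink so crc still finds it at the expected path
$ ln -sf /usr/local/sbin/crc-admin-helper-linux /home/vgerris/.crc/bin/crc-admin-helper-linux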

VGerris commented 10 months ago

Strictly speaking, this is not because of an encrypted home folder or SELinux; it is because the partition is mounted nosuid, correct? ecryptfs seems to be a common way to encrypt file systems when not using LUKS, and it is particularly useful on a multi-user computer, where one account can be encrypted and another not.

Seems like a good idea to move that file.

cfergeau commented 10 months ago

Yes, the problem is the nosuid mount.

VGerris commented 8 months ago

I have reported this here too: https://bugzilla.redhat.com/show_bug.cgi?id=2249481
Will the file be moved in an upcoming version?