89luca89 / distrobox

Use any linux distribution inside your terminal. Enable both backward and forward compatibility with software and freedom to use whatever distribution you’re more comfortable with. Mirror available at: https://gitlab.com/89luca89/distrobox
https://distrobox.it/
GNU General Public License v3.0

Phantom container named NAME when performing operations with --all #1350

Closed: filippo-martini closed this issue 2 months ago

filippo-martini commented 2 months ago

Describe the bug While trying to perform any operation using `--all`, distrobox tries to manage a non-existent container named NAME, inevitably failing (the phantom name also appears when using tab completion). However, this "phantom" container is listed neither by `podman container list` nor by `distrobox-list`. I have also tried, without success, resetting podman with `podman system reset`, and even creating a container actually named NAME and then deleting it.
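For context, the debug log below shows how distrobox builds its container name list: it skips the first line of `distrobox-list --no-color` output (assumed to be the header), takes the second pipe-delimited column, and strips whitespace. A minimal sketch of that pipeline, using hypothetical listing output (the real column layout may differ):

```shell
#!/bin/sh
# Hypothetical stand-in for `distrobox-list --no-color` output; the exact
# header and columns are assumptions for illustration.
list_output='ID           | NAME        | STATUS     | IMAGE
abc123def456 | fedora-box  | Up 2 hours | registry.fedoraproject.org/fedora:39'

# Same extraction distrobox performs in the log below: drop the header line,
# take the second pipe-delimited column, strip spaces, join names with spaces.
names="$(printf '%s\n' "$list_output" | tail -n +2 | cut -d'|' -f2 | tr -d ' ' | tr '\n' ' ')"
printf '%s\n' "$names"   # -> "fedora-box "
```

Note that this extraction silently assumes the listing starts at line 1; anything printed before it shifts the whole parse by one line.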

To Reproduce Simply try to perform any operation with the `--all` flag.

Expected behavior There should not be a container named NAME

Logs

localuser:filippo being added to access control list
+ case "${container_manager}" in
+ command -v podman
+ container_manager=podman
+ command -v podman
+ '[' 1 -ne 0 ']'
+ container_manager='podman --log-level debug'
+ '[' 0 -ne 0 ']'
+ '[' 0 -ne 0 ']'
+ '[' 1 -ne 0 ']'
++ /usr/bin/distrobox-list --no-color
++ tail -n +2
++ cut '-d|' -f2
++ tr -d ' '
++ tr '\n' ' '
+ container_name_list='NAME '
+ '[' -z 'NAME ' ']'
+ '[' -z 'NAME ' ']'
+ '[' 0 -eq 0 ']'
+ '[' 0 -eq 0 ']'
+ printf 'Do you really want to delete containers:%s? [Y/n]: ' 'NAME '
Do you really want to delete containers:NAME ? [Y/n]: + read -r response
y
+ response=y
+ case "${response}" in
+ for container in ${container_name_list}
+ delete_container NAME
+ container_name=NAME
++ podman --log-level debug inspect --type container --format '{{.State.Status}}' NAME
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called inspect.PersistentPreRunE(podman --log-level debug inspect --type container --format {{.State.Status}} NAME) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
INFO[0000] Using sqlite as database backend             
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/filippo/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /var/home/filippo/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/filippo/.local/share/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is not being used 
DEBU[0000] Cached value indicated that native-diff is usable 
DEBU[0000] backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
INFO[0000] Setting parallel job count to 37             
Error: no such container NAME
DEBU[0000] Shutting down engines                        
++ :
+ container_status=
+ '[' -z '' ']'
+ printf 'Cannot find container %s.\n' NAME
Cannot find container NAME.
+ return

Desktop (please complete the following information): I am using podman (4.9.4) and distrobox (1.7.1.0), layered via rpm-ostree on Fedora Silverblue 39.

Additional context (screenshot attached: Screenshot from 2024-04-21 19-03-19)

89luca89 commented 2 months ago

Hi @filippo-martini

Sadly, I can't reproduce this:

(screenshots attached)

filippo-martini commented 2 months ago

My bad, the other day I managed to find the culprit and somehow completely forgot to report back. The root of the issue was adding `xhost +si:localuser:$USER` to `.distroboxrc`, as suggested in the wiki. This causes the line `localuser:filippo being added to access control list` to be printed before the output of every distrobox command, so when the script skips the first line to retrieve the box names, it ends up reading the header line instead. I quickly fixed it by appending `> /dev/null` to the command so the message doesn't reach the console.

As it was caused by a badly written config file on my end, I think the issue can be closed. However, adding a quick mention of this pitfall to the relevant wiki page (or simply replacing `xhost +si:localuser:$USER` with `xhost +si:localuser:$USER > /dev/null`) could help casual users who just want graphical applications to work avoid stumbling on the same problem.
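The fixed config line would look like this (spelling the redirect as `> /dev/null`, correcting the `>dev/null` typo in the comment above, which would instead write to a relative file named `dev/null`):

```shell
# ~/.distroboxrc -- allow local X connections for graphical apps, but
# redirect xhost's status line so it can't pollute the output that
# distrobox parses when building its container name list.
xhost +si:localuser:"$USER" > /dev/null
```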

89luca89 commented 2 months ago

Thanks, added this to the wiki then :)