Open kon-foo opened 2 months ago
@spowelljr Can I work on this?
@PyAgni Yup, comment /assign
and it will assign the issue to you
/assign
@PyAgni are you still working on this?
@spowelljr I did some R&D on this issue and the problem lies in this block: if an image is not removed, it is added to the failed array and the loop continues; it does not error out. I believe this behavior exists to handle multi-node situations. Please recommend the best way to resolve this. https://github.com/kubernetes/minikube/blob/94d78a95a06c9999601cfcd056d414b89969c85f/pkg/minikube/machine/cache_images.go#L678
What Happened?
Description
When running `minikube image rm`, the exit code is 0 even though the command fails to remove the image. A failure should result in a non-zero exit code, allowing scripts or CI processes to handle the failure properly.
Steps to reproduce
Load an image and start a pod that uses it.
Attempt to remove the image while it is in use.
Check the error output; you should see an error similar to:
stderr: Error response from daemon: conflict: unable to remove repository reference "alpine:latest" (must force) - container dc9b6c3f872e is using its referenced image 91ef0af61f39
Expected Behavior
When `minikube image rm` encounters an error (such as an image being in use), it should exit with a non-zero status code (e.g. 1), allowing proper error handling in scripts or pipelines.
Actual Behavior
The command exits with a status code of 0, even though it reports an internal failure, which leads to downstream commands executing when they shouldn't.
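A minimal shell sketch of why the exit code matters. Here `buggy` and `fixed` are stand-ins (not real minikube commands) modeling the current behavior (error printed, exit 0) and the proposed behavior (error printed, non-zero exit):

```shell
# buggy: prints an error but still exits 0, like minikube image rm today.
# fixed: prints the same error and exits non-zero, as proposed.
buggy() { echo "Error response from daemon: conflict" >&2; true; }
fixed() { echo "Error response from daemon: conflict" >&2; false; }

buggy; echo "buggy exit code: $?"   # 0 -- '&&' chains and 'set -e' keep going
fixed; echo "fixed exit code: $?"   # 1 -- scripts and CI can abort
```

With the current behavior, a pipeline step like `minikube image rm alpine:latest && ./deploy.sh` would run the deploy even though the removal failed.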
Suggested Fix
Ensure that `minikube image rm` propagates the exit status of the underlying image removal process.
Environment
Attach the log file
log.txt
Operating System
Ubuntu
Driver
Docker