Closed: xellsys closed this issue 2 years ago
I've put together a helper that wraps up a workflow for this problem: https://github.com/puppetlabs/kubectl-ran
It's reasonably complete, although I plan to add an ability to customize the pod spec a bit to add sidecars.
You can add a sidecar container that runs indefinitely (e.g. `tail -f /dev/null`) while the pod uses `restartPolicy: Never`, and both containers can share an `emptyDir` volume.
Example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: test-automation
    instance: testautomation-sample
  name: testautomation-sample
  namespace: default
spec:
  containers:
  - args:
    - make
    - component_test
    image: <image>:<tag>
    imagePullPolicy: Always
    name: component-test
    resources: {}
    securityContext:
      runAsUser: 0
    volumeMounts:
    - mountPath: /opt/app/test-automation/reports/
      name: reports
  - args:
    - /usr/bin/tail
    - -f
    - /dev/null
    image: <image>:<tag>
    imagePullPolicy: IfNotPresent
    name: wait-for-report
    volumeMounts:
    - mountPath: /opt/app/test-automation/reports/
      name: reports
      readOnly: true
  imagePullSecrets:
  - name: kubelet-pull-secret
  restartPolicy: Never
  securityContext:
    runAsUser: 0
  volumes:
  - emptyDir: {}
    name: reports
```

(The `kube-api-access-*` service account token mount that appears when you dump a live pod spec has been omitted; it is injected automatically and its volume is not part of the user-supplied manifest.)
This gives you control to run `kubectl cp ...` against the still-running sidecar, followed by `kubectl delete ...` once the results have been retrieved.
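That workflow can be sketched as follows (pod, container, and path names are taken from the example manifest above; the wait step is a simplification, since in practice you would poll until the `component-test` container has terminated):

```shell
# Wait until the pod is up (the tail sidecar keeps it in the Running phase
# even after the test container has exited).
kubectl wait --for=condition=Ready pod/testautomation-sample -n default

# Copy the reports out through the still-running sidecar container.
kubectl cp default/testautomation-sample:/opt/app/test-automation/reports/ ./reports \
  -c wait-for-report

# Tear the pod down once the results are safe.
kubectl delete pod testautomation-sample -n default
```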
Dear community,
Almost four years later this topic is still open. I am afraid that things like ergonomics and API-friendly usage of Kubernetes are not getting the attention they deserve. Of course there are workarounds for this problem, but these workarounds are hard to implement and error-prone. From an architecture point of view it is also sub-optimal to have circular references or, in some cases, to use a shared file system. Putting the main function into an init routine is an absolute no-go.
Why is it not permitted to access the container after the job finished, or after a pod failed to start? It is not only about getting the data; attaching to the container is also necessary for analysis purposes.
Please increase the priority.
Thank you very much.
This functionality (keeping a pod alive after completion) is beyond the scope of `kubectl`. Suggest closing this issue. If it is addressed, then it needs to be addressed more holistically, including other areas (e.g. the `kubelet`).
The use case is definitely understandable: wanting to run something in a pod, and after it is finished to retrieve the output of what was run.
The reason this issue has been open since 2018 is that this isn't functionality that kubectl is able to provide without changes first being made deeper in the Kubernetes stack (kubelet, etc).
There are some options for you today though:
1. Save your pod's output to a persistent volume. This is the recommended way to do it.
2. If you don't want to use a PV for some reason, there are some workarounds in the comments above involving multi-container pods.
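The persistent-volume route can be sketched like this (the PVC name `test-reports`, the pod name, and the mount path are illustrative, and a default StorageClass is assumed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-reports          # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-runner           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: component-test
    image: <image>:<tag>
    args: ["make", "component_test"]
    volumeMounts:
    - mountPath: /opt/app/test-automation/reports/
      name: reports
  volumes:
  - name: reports
    persistentVolumeClaim:
      claimName: test-reports
```

Because the PVC outlives the pod, the reports survive termination and can be mounted into any later pod (or retrieved by a CI job) without keeping the test pod alive.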
However, if you still feel strongly about this particular feature (being able to copy files from stopped pods), my suggestion is to open an issue on the kubernetes/kubernetes repo, where it can be addressed by another SIG that is able to make the necessary changes to support it.
I realize this is a popular issue, but I'm going to go ahead and close it based on what I said above. If Kubernetes is changed in the future to support this server-side, then we can make the necessary changes in kubectl to provide this functionality.
/close
@brianpursley: Closing this issue.
Hi there. I have created a new issue: https://github.com/kubernetes/kubernetes/issues/111045 Please support it and give the issue some weight. No less important, please help me find the people who are able to bring this topic to success.
any updates on this ticket?
any updates on this ticket?
Nope. I would say there is zero progress.
Status quo: The current implementation of `kubectl cp` requires the container to be running, because it encapsulates an exec command that uses the tar binary.

Requirement: Allow copying files from (possibly also to) a stopped container.

Additional info: This relates to #58512. It would also more closely align with the `docker cp` functionality, with no need for the tar binary in the container anymore.

Background: My current use case is running `helm test` with a more sophisticated test set (0 and 1 results do not suffice for analysis) and having a simple way of persisting test results at the end (Jenkins or whatever). I know there are other solutions, but I want to keep my test pod simple (not actively pushing test results to some endpoint or keeping it alive) and would like to avoid extensive configuration of persistent storage.
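For context on the tar dependency mentioned above: `kubectl cp` works by exec-ing `tar` inside the container and streaming the archive over the exec channel, which is why the container must be running and must ship a tar binary. A local sketch of that streaming mechanism (the kubectl equivalent is shown in the comment; the `/tmp` directories are purely illustrative):

```shell
# Copying from a pod, `kubectl cp pod:/src ./dst` is roughly equivalent to:
#   kubectl exec pod -- tar cf - -C /src . | tar xf - -C ./dst
# Below, the same tar-over-a-pipe idea demonstrated entirely locally:
mkdir -p /tmp/cp-demo/src /tmp/cp-demo/dst
echo "test report" > /tmp/cp-demo/src/report.txt

# Archive the source dir to stdout, pipe it, and unpack at the destination.
tar cf - -C /tmp/cp-demo/src . | tar xf - -C /tmp/cp-demo/dst

cat /tmp/cp-demo/dst/report.txt    # prints "test report"
```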