livelace closed this issue 8 years ago
just to confirm @livelace, the label referenced is in the metadata of the template within the deployment config, not in the metadata of the deployment config, correct?
I see the deletes work for labels within the deployment config's metadata, but nothing is deleted when labels from the underlying template are specified.
By the way, `oc delete -l name=test` works the same way. If the label is specified at the template level, nothing is deleted. If specified at the DC level, it is deleted.
Given that, I'm inclined to not address this in the plugin, and stay consistent with `oc`.
@bparees - agreed?
@livelace - can you define a label at the DC metadata level and confirm if the delete occurs for you as well?
@gabemontero @livelace i'm not sure i understand the scenario yet, but I think @gabemontero is asking the same question i'm about to ask:
Templates can have their own labels (like any resource), and templates also allow you to define a set of labels that will be applied to all objects the template creates when the template is instantiated. (So your DC should have the labels from that second set, but not the first set). You can also explicitly define a label on the DC within the template, of course.
Which label are you referencing when you do the delete action?
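To make the distinction concrete, here is a minimal, hypothetical template sketch showing the three places a label can appear (the names `example-template`, `example-dc`, and the label keys are illustrative, not taken from this issue):

```yaml
apiVersion: v1
kind: Template
metadata:
  name: example-template
  labels:
    template-own-label: "a"   # first set: labels on the Template object itself
labels:
  applied-to-all: "b"         # second set: applied to every object the template creates
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: example-dc
    labels:
      dc-explicit-label: "c"  # explicitly defined on the DC within the template
```

Per the description above, the instantiated DC would carry the second and third sets but not the first, so only those would be visible to a label-based delete.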
yes @bparees, I was asking the same question. Trying to confirm where the label was defined. Based on how I read his description, I think he specified it at the template level. And that would correlate with the results of my various attempts at reproducing.
Yes, guys, you are completely right. Labels in the metadata are working now. I think this should be documented more clearly (for those who, like me, don't know the product well), because none of the examples I saw include labels in the metadata.
https://docs.openshift.org/latest/dev_guide/deployments.html http://kubernetes.io/docs/user-guide/deploying-applications/
Thanks a lot!
Case1 - I can delete all deployment configurations/services/replication controllers with the name "nossl":
Case2 - I tried to delete Pods also by this configuration:
Case2 problems:
Hmm @livelace, I was able to delete multiple dcs, svcs, rcs, and pods with labels:
Starting "Delete OpenShift Resource(s) using Labels" with the project "test".
Deleted a "DeploymentConfig" with key "frontend"
Deleted a "DeploymentConfig" with key "frontend-prod"
Deleted a "Service" with key "frontend"
Deleted a "Service" with key "frontend-prod"
Deleted a "ReplicationController" with key "frontend-1"
Deleted a "ReplicationController" with key "frontend-prod-1"
Deleted a "Pod" with key "frontend-1-df0nh"
Deleted a "Pod" with key "frontend-prod-1-kmpb4"
Exiting "Delete OpenShift Resource(s) using Labels" successfully, with 2 resource(s) deleted.
Finished: SUCCESS
Could you re-run your test with verbose logging, as well as running `oc get dc -o json`, `oc get svc -o json`, `oc get rc -o json`, and `oc get pods -o json` after the job runs? If that is too much, could you at least re-run with verbose logging?
@gabemontero Hello, of course, but only on Monday.
Sounds great @livelace
for i in dc svc rc pod; do oc get $i | grep -Ev "deploy|NAME" | awk '{print $1}' | xargs oc describe $i > $i.describe; done
for i in dc svc rc pod; do oc get $i | grep -Ev "deploy|NAME" | awk '{print $1}' | xargs oc get $i -o yaml > $i.yaml; done
But I found a workaround ("WA"):
hey @livelace - I don't see the plugin's verbose output in the log.zip you provided, only the describe/yaml output for the dc/rc/pod/svc objects.
gmontero ~/PLUGIN-55 $ unzip -t log.zip
Archive: log.zip
testing: log/ OK
testing: log/rc.describe OK
testing: log/dc.describe OK
testing: log/pod.describe OK
testing: log/pod.yaml OK
testing: log/rc.yaml OK
testing: log/svc.yaml OK
testing: log/dc.yaml OK
testing: log/svc.describe OK
No errors detected in compressed data of log.zip.
gmontero ~/PLUGIN-55 $
I'll see what I uncover from those, but could you provide the verbose output from the step that coincided with the describe/yaml output?
Also, I wonder how much of the delay in Pod deletion that was discussed with issue 57 is coming into play here ... the plugin step logs may reveal more there. Have you been able to account for or correlate those two items?
The yaml did not reveal any problems. Taking the "conf" label that @livelace cited, I did a `grep "conf:" *.yaml` and did not get any hits. Similarly, a `grep "name:" *.yaml` turned up a bunch of hits as expected, including some of the labels I saw from visual inspection.
Assuming the yamls were captured after the build step ran, that would tell me there were no resources with a label key of "conf" and an associated label value of "nossl".
Thoughts or corrections on my assumption about when the yaml was gathered?
Otherwise, let's get the verbose data from the delete step, and we'll go from there.
Ok, let's start at the beginning :)
Types: DeploymentConfig,Service,ReplicationController,Pod
Keys: name,name,name,name
Values: nossl,testing-11.0-drweb-dss-nossl-peer1,testing-11.0-drweb-dss-nossl-peer2,testing-11.0-drweb-dss-nossl-peer3
I expected that the plugin would iterate over every object type with each "key=value" pair:
dc/rc/svc/pod should be deleted with "name" equal to: nossl, testing-11.0-drweb-dss-nossl-peer1, testing-11.0-drweb-dss-nossl-peer2, testing-11.0-drweb-dss-nossl-peer3
But as a result I got only:
All deployment configurations/services/replication controllers with the name "nossl" were not deleted. Only the pod with the name "testing-11.0-drweb-dss-nossl-peer3" was deleted.
Why did I use "testing-11.0-drweb-dss-nossl-peer1" as the value for pod recognition? Because it sits inside the pod's Labels:
[root@openshift-master1 ~]# oc describe pod testing-11.0-drweb-dss-nossl-peer1-1-oaav2
Name:           testing-11.0-drweb-dss-nossl-peer1-1-oaav2
Namespace:      drweb-netcheck
Node:           openshift-node1.i.drweb.ru/10.4.0.207
Start Time:     Thu, 28 Jul 2016 10:44:11 +0300
Labels:         deployment=testing-11.0-drweb-dss-nossl-peer1-1,deploymentconfig=testing-11.0-drweb-dss-nossl-peer1,name=testing-11.0-drweb-dss-nossl-peer1
Status:         Running
IP:             10.208.0.3
Controllers:    ReplicationController/testing-11.0-drweb-dss-nossl-peer1-1
Containers:
About the "conf" label - it works; I just hadn't published it. It's simple:
Types: DeploymentConfig,Service,ReplicationController,Pod
Keys: conf
Values: nossl
template:
  metadata:
    labels:
      name: "${BRANCH}-${PRODUCT_VERSION}-${PRODUCT_NAME}-nossl-peer2"
      conf: "nossl"
task log verbose.txt.zip
OK ... I've figured it out with the clarification / additional data. Will update when I have a pre-release fix ready.
OK, I believe this is fixed with commit https://github.com/openshift/jenkins-plugin/commit/e4b9abd182fbae8a369041c82478617b4c186956 and this pre-release version
@livelace give it a try when you get the chance, and re-open if something seems amiss.
I tried to delete this deployment configuration:
apiVersion: "v1"
kind: "DeploymentConfig"
metadata:
  name: "test"
spec:
  template:
    metadata:
      labels:
        name: "test"
With Jenkins step:
The type(s) of OpenShift resource(s) to delete: DeploymentConfig
The key(s) of labels on the OpenShift resource(s) to delete: name
The value(s) of labels on the OpenShift resource(s) to delete: test
Logs:
Exiting "Delete OpenShift Resource(s) using Labels" successfully, with 0 resource(s) deleted.
Finished: SUCCESS
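For what it's worth, given the earlier finding in this thread that label selectors match the DC's own metadata rather than the template's, a sketch of a DC that both the plugin step and `oc delete -l name=test` should match would carry the label at both levels (this is an assumption about the placement, not a confirmed fix):

```yaml
apiVersion: "v1"
kind: "DeploymentConfig"
metadata:
  name: "test"
  labels:
    name: "test"      # label on the DC itself - what label-based deletes match
spec:
  template:
    metadata:
      labels:
        name: "test"  # label applied to pods created from this DC
```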