d2iq-archive / kubernetes-mesos

A Kubernetes Framework for Apache Mesos

kubernetes-scheduler had long pause before being able to schedule a task #276

Open tweakmy opened 9 years ago

tweakmy commented 9 years ago

I am observing that the kubernetes scheduler only schedules a task after the mesos master has forwarded resource offers to the scheduler. Is this the expected behaviour? During a rolling update there is momentarily a long wait before the first pod can be updated (it felt like almost 7-8 sec). I thought it should be more responsive and roll the update within about 1 sec, since the image is already loaded on the slave node.

Any thoughts on how I can make it more responsive?

jdef commented 9 years ago

The kubernetes-mesos scheduler schedules pods using resources offered by the master. It may take several seconds between when the scheduler starts and when it first receives offers from the mesos master. This is not unexpected.
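To illustrate the mechanics (a simplified sketch, not the actual kubernetes-mesos code; every type and function name here is made up): a Mesos framework can only place tasks against resource offers it currently holds, so a pending pod necessarily waits for the master's next round of offers.

package main

import "fmt"

// Offer and Pod are stand-ins for the real Mesos/Kubernetes types.
type Offer struct{ CPUs, MemMB float64 }
type Pod struct {
	Name        string
	CPUs, MemMB float64
}

// resourceOffers mimics the callback a Mesos framework receives when the
// master forwards offers. Until it fires, pending pods cannot be scheduled,
// which accounts for the delay observed right after the scheduler (re)starts.
func resourceOffers(offers []Offer, pending []Pod) []Pod {
	remaining := pending[:0]
	for _, p := range pending {
		placed := false
		for i := range offers {
			if p.CPUs <= offers[i].CPUs && p.MemMB <= offers[i].MemMB {
				fmt.Printf("launching %s against offer %d\n", p.Name, i)
				offers[i].CPUs -= p.CPUs // consume the offered resources
				offers[i].MemMB -= p.MemMB
				placed = true
				break
			}
		}
		if !placed {
			remaining = append(remaining, p) // retried on the next offer round
		}
	}
	return remaining
}

func main() {
	pending := []Pod{{Name: "nginx", CPUs: 0.25, MemMB: 64}}
	pending = resourceOffers([]Offer{{CPUs: 1, MemMB: 386}}, pending)
	fmt.Println("still pending:", len(pending))
}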

What do you mean by a rolling update?

tweakmy commented 9 years ago

Not sure if you have seen this before.

So, I was running a kubectl rollingupdate command. I expected the rolling update to be much faster; my first thought was that it would immediately kill the initial pods and then bring up the new ones.
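For reference, the invocation looked roughly like this (the file name is my own placeholder; the controller names and labels match the logs below):

kubectl rollingupdate nginxcontroller -f nginxcontroller2.yaml

i.e. replace the pods of nginxcontroller (version=1) with those defined by nginxcontroller2 (version=2).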

After looking at the code and digging through the logs, I found that, somehow, after I fired the kubectl rollingupdate command it took a long while before the scheduler decided to kill the initial pods:

scheduler_1  | I0515 06:52:59.359787       1 plugin.go:718] Attempting to schedule: &{{ } {nginxcontroller2-fm4rh nginxcontroller2- default /api/v1beta1/pods/nginxcontroller2-fm4rh?namespace=default 05dc909f-facf-11e4-8448-52540022216c 2039 2015-05-15 06:52:59 +0000 UTC <nil> map[env:test name:nginx version:2] map[]} {[] [{nginx nginx:joee []  [{ 31002 80 TCP }] [] {map[] map[]} [] <nil> <nil> <nil> /dev/termination-log false IfNotPresent {[] []}}] Always ClusterFirst map[]  false} {Pending []     []}}
scheduler_1  | I0515 06:52:59.363440       1 plugin.go:225] Try to schedule pod nginxcontroller2-fm4rh
scheduler_1  | I0515 06:52:59.376157       1 fcfs.go:39] failed to find a fit for pod: default/nginxcontroller2-fm4rh
apiserver_1  | I0515 06:55:40.287947       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (21.61316ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:40.512166       1 handlers.go:109] GET /api/v1beta1/services?namespace=: (1.840685ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41341]
apiserver_1  | I0515 06:55:40.574192       1 handlers.go:109] PUT /api/v1beta1/events/nginxcontroller2-fm4rh.13de52e0edcad856?namespace=default: (17.745656ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:41.023027       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (2.619559ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:41.039212       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.124: (14.206319ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:41.303129       1 handlers.go:109] GET /api/v1beta1/minions: (5.339037ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41341]
apiserver_1  | I0515 06:55:42.296378       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.252: (3.46986ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:42.297765       1 handlers.go:109] GET /api/v1beta1/services/k8sm-scheduler?namespace=default: (7.151671ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41340]
apiserver_1  | I0515 06:55:42.302192       1 handlers.go:109] GET /api/v1beta1/endpoints/k8sm-scheduler?namespace=default: (3.554533ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41340]
apiserver_1  | I0515 06:55:42.318673       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (19.416724ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:42.831871       1 handlers.go:109] GET /api/v1beta1/replicationControllers?namespace=: (4.829957ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:42.839053       1 handlers.go:109] GET /api/v1beta1/pods?labels=env%3Dtest%2Cname%3Dnginx%2Cversion%3D2&namespace=default: (3.670725ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:43.045416       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (2.835206ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:43.065472       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.124: (17.306297ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:43.413641       1 handlers.go:109] GET /api/v1beta1/minions: (4.7268ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:43.713587       1 handlers.go:109] GET /api/v1beta1/replicationControllers?namespace=: (4.349872ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:43.718285       1 handlers.go:109] GET /api/v1beta1/pods?labels=name%3Dnginx%2Cversion%3D2%2Cenv%3Dtest&namespace=default: (2.973344ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:44.324446       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.252: (3.082667ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:44.339889       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (13.237151ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:45.082100       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (11.30462ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:45.099135       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.124: (14.703012ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:46.344458       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.252: (2.39474ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:46.372294       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (25.660798ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:47.105099       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (3.134789ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:47.121114       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.124: (14.051498ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:48.379002       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.252: (3.13377ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:48.397225       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (15.291334ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:48.420732       1 handlers.go:109] GET /api/v1beta1/minions: (4.827763ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:48.713477       1 handlers.go:109] GET /api/v1beta1/replicationControllers?namespace=: (4.170784ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:48.719526       1 handlers.go:109] GET /api/v1beta1/pods?labels=env%3Dtest%2Cname%3Dnginx%2Cversion%3D2&namespace=default: (2.652007ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:49.127232       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (2.966266ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:49.149082       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.124: (18.625108ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:50.246533       1 handlers.go:109] GET /api/v1beta1/resourceQuotas?namespace=: (2.123327ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:50.409154       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.252: (3.048702ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:50.429433       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (17.019185ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:50.517554       1 handlers.go:109] GET /api/v1beta1/services?namespace=: (2.5692ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:50.966705       1 handlers.go:109] PUT /api/v1beta1/events/nginxcontroller2-fm4rh.13de52e0edcad856?namespace=default: (18.23795ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:51.153767       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (2.078059ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:51.177724       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.124: (21.493609ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
apiserver_1  | I0515 06:55:51.320056       1 handlers.go:109] GET /api/v1beta1/minions: (5.141768ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41321]
apiserver_1  | I0515 06:55:52.306128       1 handlers.go:109] GET /api/v1beta1/services/k8sm-scheduler?namespace=default: (2.260783ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41340]
apiserver_1  | I0515 06:55:52.311108       1 handlers.go:109] GET /api/v1beta1/endpoints/k8sm-scheduler?namespace=default: (2.392046ms) 200 [[km/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.214:41340]
apiserver_1  | I0515 06:55:52.435192       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.252: (2.872386ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:52.456693       1 handlers.go:109] PUT /api/v1beta1/minions/192.168.121.252: (18.46568ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.252:44954]
apiserver_1  | I0515 06:55:53.183719       1 handlers.go:109] GET /api/v1beta1/minions/192.168.121.124: (2.923246ms) 200 [[executor/v0.14.2 (linux/amd64) kubernetes/unknown] 192.168.121.124:42478]
scheduler_1  | I0515 06:52:59.376735       1 plugin.go:721] Failed to schedule: &{{ } {nginxcontroller2-fm4rh nginxcontroller2- default /api/v1beta1/pods/nginxcontroller2-fm4rh?namespace=default 05dc909f-facf-11e4-8448-52540022216c 2039 2015-05-15 06:52:59 +0000 UTC <nil> map[env:test name:nginx version:2] map[]} {[] [{nginx nginx:joee []  [{ 31002 80 TCP }] [] {map[] map[]} [] <nil> <nil> <nil> /dev/termination-log false IfNotPresent {[] []}}] Always ClusterFirst map[]  false} {Pending []     []}}
scheduler_1  | I0515 06:52:59.380849       1 plugin.go:494] Call from /home/joee/cloud/kuber/k8sm/_build/src/github.com/mesosphere/kubernetes-mesos/pkg/scheduler/plugin.go: 723
scheduler_1  | I0515 06:52:59.381041       1 plugin.go:496] Error scheduling nginxcontroller2-fm4rh: No suitable offers for pod/task; retrying
scheduler_1  | I0515 06:52:59.381765       1 plugin.go:524] adding backoff breakout handler for pod /pods/default/nginxcontroller2-fm4rh
scheduler_1  | I0515 06:52:59.382881       1 offers.go:407] Registering offer listener /pods/default/nginxcontroller2-fm4rh

Then, some time later:

scheduler_1  | I0515 06:54:05.205986       1 plugin.go:576] pod deleted: /pods/default/nginxcontroller-637f7
scheduler_1  | I0515 06:54:05.206458       1 messenger.go:166] Sending message mesos.internal.KillTaskMessage to master@192.168.121.214:5050
scheduler_1  | I0515 06:54:05.208018       1 http_transporter.go:107] Sending message to master@192.168.121.214:5050 via http
scheduler_1  | I0515 06:54:05.208160       1 http_transporter.go:343] libproc target URL http://192.168.121.214:5050/master/mesos.internal.KillTaskMessage
scheduler_1  | I0515 06:54:05.269646       1 http_transporter.go:328] Receiving message from master@192.168.121.214:5050, length 291
scheduler_1  | I0515 06:54:05.270205       1 messenger.go:343] Receiving message mesos.internal.StatusUpdateMessage from master@192.168.121.214:5050
scheduler_1  | I0515 06:54:05.270893       1 scheduler.go:567] Received status update from  master@192.168.121.214:5050  status source: slave(1)@192.168.121.252:5051
scheduler_1  | I0515 06:54:05.271002       1 scheduler.go:586] Sending status update ACK to  master@192.168.121.214:5050
scheduler_1  | I0515 06:54:05.271331       1 messenger.go:166] Sending message mesos.internal.StatusUpdateAcknowledgementMessage to master@192.168.121.214:5050
scheduler_1  | I0515 06:54:05.271722       1 scheduler.go:385] task status update "TASK_KILLED" from "SOURCE_EXECUTOR" for task "pod.7d707e7e-faca-11e4-ad1b-52540022216c" on slave "20150515-061655-3598297280-5050-1-S1" executor "" for reason "none"
scheduler_1  | I0515 06:54:05.276803       1 registry.go:301] task killed: &TaskStatus{TaskId:&TaskID{Value:*pod.7d707e7e-faca-11e4-ad1b-52540022216c,XXX_unrecognized:[],},State:*TASK_KILLED,Message:*task-killed,Source:*SOURCE_EXECUTOR,Reason:nil,Data:nil,SlaveId:&SlaveID{Value:*20150515-061655-3598297280-5050-1-S1,XXX_unrecognized:[],},ExecutorId:nil,Timestamp:*1.431672845e+09,Healthy:nil,XXX_unrecognized:[],}, task &{ID:pod.7d707e7e-faca-11e4-ad1b-52540022216c Pod:{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:nginxcontroller-637f7 GenerateName:nginxcontroller- Namespace:default SelfLink:/api/v1beta1/pods/nginxcontroller-637f7?namespace=default UID:7d3bd79d-faca-11e4-8448-52540022216c ResourceVersion:26 CreationTimestamp:2015-05-15 06:20:31 +0000 UTC DeletionTimestamp:<nil> Labels:map[name:nginx version:1 env:test] Annotations:map[]} Spec:{Volumes:[] Containers:[{Name:nginx Image:nginx:latest Command:[] WorkingDir: Ports:[{Name: HostPort:31002 ContainerPort:80 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log Privileged:false ImagePullPolicy:Always Capabilities:{Add:[] Drop:[]}}] RestartPolicy:Always DNSPolicy:ClusterFirst NodeSelector:map[] Host: HostNetwork:false} Status:{Phase:Pending Conditions:[] Message: Host: HostIP: PodIP: ContainerStatuses:[]}} Spec:{SlaveID:20150515-061655-3598297280-5050-1-S1 CPU:0.25 Memory:64 PortMap:[{ContainerIdx:0 PortIdx:0 OfferPort:31002}] Ports:[31002] Data:[123 34 107 105 110 100 34 58 34 80 111 100 34 44 34 109 101 116 97 100 97 116 97 34 58 123 34 110 97 109 101 34 58 34 110 103 105 110 120 99 111 110 116 114 111 108 108 101 114 45 54 51 55 102 55 34 44 34 103 101 110 101 114 97 116 101 78 97 109 101 34 58 34 110 103 105 110 120 99 111 110 116 114 111 108 108 101 114 45 34 44 34 110 97 109 101 115 112 97 99 101 34 58 34 100 101 102 97 117 108 116 34 44 34 115 101 108 102 76 105 110 107 34 58 34 47 97 112 105 47 118 49 98 101 116 97 49 47 112 111 100 115 47 110 103 105 110 120 99 111 110 116 114 111 108 108 101 114 45 54 51 55 102 55 63 110 97 109 101 115 112 97 99 101 61 100 101 102 97 117 108 116 34 44 34 117 105 100 34 58 34 55 100 51 98 100 55 57 100 45 102 97 99 97 45 49 49 101 52 45 56 52 52 56 45 53 50 53 52 48 48 50 50 50 49 54 99 34 44 34 114 101 115 111 117 114 99 101 86 101 114 115 105 111 110 34 58 34 50 54 34 44 34 99 114 101 97 116 105 111 110 84 105 109 101 115 116 97 109 112 34 58 34 50 48 49 53 45 48 53 45 49 53 84 48 54 58 50 48 58 51 49 90 34 44 34 108 97 98 101 108 115 34 58 123 34 101 110 118 34 58 34 116 101 115 116 34 44 34 110 97 109 101 34 58 34 110 103 105 110 120 34 44 34 118 101 114 115 105 111 110 34 58 34 49 34 125 44 34 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 107 56 115 46 109 101 115 111 115 112 104 101 114 101 46 105 111 47 98 105 110 100 105 110 103 72 111 115 116 34 58 34 49 57 50 46 49 54 56 46 49 50 49 46 50 53 50 34 44 34 107 56 115 46 109 101 115 111 115 112 104 101 114 101 46 105 111 47 101 120 101 99 117 116 111 114 73 100 34 58 34 54 100 57 98 50 102 99 99 56 51 97 100 98 97 48 55 95 107 56 115 109 45 101 120 101 99 117 116 111 114 34 44 34 107 56 115 46 109 101 115 111 115 112 104 101 114 101 46 105 111 47 111 102 102 101 114 73 100 34 58 34 50 48 49 53 48 53 49 53 45 48 54 49 54 53 53 45 51 53 57 56 50 57 55 50 56 48 45 53 48 53 48 45 49 45 79 49 55 54 34 44 34 107 56 115 46 109 101 115 111 115 112 104 101 114 101 46 105 111 47 112 111 114 116 95 84 67 80 95 
56 48 34 58 34 51 49 48 48 50 34 44 34 107 56 115 46 109 101 115 111 115 112 104 101 114 101 46 105 111 47 115 108 97 118 101 73 100 34 58 34 50 48 49 53 48 53 49 53 45 48 54 49 54 53 53 45 51 53 57 56 50 57 55 50 56 48 45 53 48 53 48 45 49 45 83 49 34 44 34 107 56 115 46 109 101 115 111 115 112 104 101 114 101 46 105 111 47 116 97 115 107 73 100 34 58 34 112 111 100 46 55 100 55 48 55 101 55 101 45 102 97 99 97 45 49 49 101 52 45 97 100 49 98 45 53 50 53 52 48 48 50 50 50 49 54 99 34 125 125 44 34 115 112 101 99 34 58 123 34 118 111 108 117 109 101 115 34 58 110 117 108 108 44 34 99 111 110 116 97 105 110 101 114 115 34 58 91 123 34 110 97 109 101 34 58 34 110 103 105 110 120 34 44 34 105 109 97 103 101 34 58 34 110 103 105 110 120 58 108 97 116 101 115 116 34 44 34 112 111 114 116 115 34 58 91 123 34 104 111 115 116 80 111 114 116 34 58 51 49 48 48 50 44 34 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 56 48 44 34 112 114 111 116 111 99 111 108 34 58 34 84 67 80 34 125 93 44 34 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 34 47 100 101 118 47 116 101 114 109 105 110 97 116 105 111 110 45 108 111 103 34 44 34 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 34 65 108 119 97 121 115 34 44 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 123 125 125 93 44 34 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 34 65 108 119 97 121 115 34 44 34 100 110 115 80 111 108 105 99 121 34 58 34 67 108 117 115 116 101 114 70 105 114 115 116 34 125 44 34 115 116 97 116 117 115 34 58 123 34 112 104 97 115 101 34 58 34 80 101 110 100 105 110 103 34 125 125]} Offer:&Offer{Id:&OfferID{Value:*20150515-061655-3598297280-5050-1-O176,XXX_unrecognized:[],},FrameworkId:&FrameworkID{Value:*20150515-061655-3598297280-5050-1-0000,XXX_unrecognized:[],},SlaveId:&SlaveID{Value:*20150515-061655-3598297280-5050-1-S1,XXX_unrecognized:[],},Hostname:*192.168.121.252,Resources:[&Resource{Name:*cpus,Type:*SCALAR,Scalar:&Value_Scalar{Value:*1,XXX_unrecognized:[],},Ranges:nil,Set:nil,Role:**,XXX_unrecognized:[],} &Resource{Name:*mem,Type:*SCALAR,Scalar:&Value_Scalar{Value:*386,XXX_unrecognized:[],},Ranges:nil,Set:nil,Role:**,XXX_unrecognized:[],} &Resource{Name:*disk,Type:*SCALAR,Scalar:&Value_Scalar{Value:*35164,XXX_unrecognized:[],},Ranges:nil,Set:nil,Role:**,XXX_unrecognized:[],} &Resource{Name:*ports,Type:*RANGES,Scalar:nil,Ranges:&Value_Ranges{Range:[&Value_Range{Begin:*31000,End:*32000,XXX_unrecognized:[],}],XXX_unrecognized:[],},Set:nil,Role:**,XXX_unrecognized:[],}],Attributes:[],ExecutorIds:[],XXX_unrecognized:[],} State:1 Flags:map[launched:{} bound:{} deleted:{}] CreateTime:2015-05-15 06:20:32.28224516 +0000 UTC UpdatedTime:2015-05-15 06:50:25.133368515 +0000 UTC podStatus:{Phase:Running Conditions:[{Type:Ready Status:False}] Message: Host:192.168.121.252 HostIP: PodIP:172.17.0.1 ContainerStatuses:[{Name:nginx State:{Waiting:<nil> Running:0xc2081ed900 Termination:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Termination:<nil>} Ready:false RestartCount:0 Image:nginx:latest ImageID:docker://42a3cf88f3f0cce2b4bfb2ed714eec5ee937525b4c7e0a0f70daff18c3f2ee92 ContainerID:docker://bd90eac20233cf42878e0be2a0365a52f37008d851592619fd675cd36437318e}]} executor:0xc208248f00 podKey:/pods/default/nginxcontroller-637f7 launchTime:{sec:0 nsec:0 loc:<nil>} bindTime:{sec:63567268625 nsec:122484162 loc:0x1da4540} mapper:wildcard}
scheduler_1  | I0515 06:54:05.278593       1 http_transporter.go:107] Sending message to master@192.168.121.214:5050 via http
scheduler_1  | I0515 06:54:05.278770       1 http_transporter.go:343] libproc target URL http://192.168.121.214:5050/master/mesos.internal.StatusUpdateAcknowledgementMessage
scheduler_1  | I0515 06:54:06.043030       1 offers.go:143] Delete lingering offer: 20150515-061655-3598297280-5050-1-O343
scheduler_1  | I0515 06:54:06.182013       1 http_transporter.go:328] Receiving message from master@192.168.121.214:5050, length 306

Only then, after the initial pod was killed, could the new pod kick in. And even then, only 1 out of the 2 replicas.
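For context on where the pause comes from: the "No suitable offers for pod/task; retrying" and "adding backoff breakout handler" lines above suggest the pod re-enters the scheduling queue with a growing delay until a usable offer arrives. A minimal sketch of that retry pattern (my own simplification; the durations and names are assumptions, not the project's actual code):

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries trySchedule with an exponentially growing delay.
// In the real scheduler an incoming offer can apparently "break out" of the
// wait early; here that is modeled by the breakout channel.
func retryWithBackoff(trySchedule func() bool, breakout <-chan struct{}) {
	delay := time.Second
	const maxDelay = 60 * time.Second
	for !trySchedule() {
		select {
		case <-time.After(delay): // back off before the next attempt
		case <-breakout: // a fresh offer arrived; retry immediately
		}
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() bool {
		attempts++
		return attempts >= 3 // pretend the third offer round fits the pod
	}, make(chan struct{}))
	fmt.Println("scheduled after", attempts, "attempts")
}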

jdef commented 9 years ago

Thanks for reporting this. I don't regularly test rollingupdate. Will investigate.

jdef commented 8 years ago

TODO: investigate whether rolling upgrades are tested as part of [Conformance] test runs (they should be, but I just want to double-check)