Closed eladidan closed 8 years ago
Scheduler logs from the kubernetes task:
W0209 22:50:07.925220 667 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:07.925432 667 service.go:438] prepared executor command "./km" with args '[minion --run-proxy=true --proxy-bindall=false --proxy-logv=1 --proxy-mode=userspace --path-override=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --max-log-size=10Mi --max-log-backups=5 --max-log-age=7 --api-servers=http://kubernetes.marathon.mesos:25503 --v=1 --allow-privileged=false --suicide-timeout=20m0s --mesos-launch-grace-period=5m0s --mesos-cgroup-prefix=/mesos --cadvisor-port=4194 --sync-frequency=10s --contain-pod-resources=false --cluster-dns=10.10.10.10 --cluster-domain=cluster.local]'
W0209 22:50:07.925506 667 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:07.925708 667 ha.go:132] scheduler process entered standby stage
I0209 22:50:07.925912 667 service.go:586] self-electing in non-HA mode
I0209 22:50:07.925969 667 service.go:575] Starting HTTP interface
I0209 22:50:07.926103 667 ha.go:154] scheduler process entered master stage
I0209 22:50:07.926121 667 service.go:804] performing deferred initialization
I0209 22:50:07.926134 667 framework.go:175] initializing kubernetes mesos scheduler
I0209 22:50:07.928205 667 framework.go:733] failed to recover pod registry, madness may ensue: Get http://kubernetes.marathon.mesos:25503/api/v1/pods: dial tcp: lookup kubernetes.marathon.mesos: no such host
E0209 22:50:07.928233 667 ha.go:157] failed to fetch scheduler driver: failed to initialize pod scheduler: Get http://kubernetes.marathon.mesos:25503/api/v1/pods: dial tcp: lookup kubernetes.marathon.mesos: no such host
I0209 22:50:07.928245 667 ha.go:141] scheduler process entered fin stage
I0209 22:50:07.928283 667 service.go:623] exiting scheduler
I0209 22:50:07.999999 0 respawn.xx:0] sleeping 3s before respawning scheduler
W0209 22:50:10.959482 733 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:10.959669 733 service.go:438] prepared executor command "./km" with args '[minion --run-proxy=true --proxy-bindall=false --proxy-logv=1 --proxy-mode=userspace --path-override=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --max-log-size=10Mi --max-log-backups=5 --max-log-age=7 --api-servers=http://kubernetes.marathon.mesos:25503 --v=1 --allow-privileged=false --suicide-timeout=20m0s --mesos-launch-grace-period=5m0s --mesos-cgroup-prefix=/mesos --cadvisor-port=4194 --sync-frequency=10s --contain-pod-resources=false --cluster-dns=10.10.10.10 --cluster-domain=cluster.local]'
W0209 22:50:10.959761 733 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:10.959926 733 ha.go:132] scheduler process entered standby stage
I0209 22:50:10.960104 733 service.go:586] self-electing in non-HA mode
I0209 22:50:10.960141 733 service.go:575] Starting HTTP interface
I0209 22:50:10.960229 733 ha.go:154] scheduler process entered master stage
I0209 22:50:10.960249 733 service.go:804] performing deferred initialization
I0209 22:50:10.960276 733 framework.go:175] initializing kubernetes mesos scheduler
I0209 22:50:10.962420 733 framework.go:733] failed to recover pod registry, madness may ensue: Get http://kubernetes.marathon.mesos:25503/api/v1/pods: dial tcp: lookup kubernetes.marathon.mesos: no such host
E0209 22:50:10.962455 733 ha.go:157] failed to fetch scheduler driver: failed to initialize pod scheduler: Get http://kubernetes.marathon.mesos:25503/api/v1/pods: dial tcp: lookup kubernetes.marathon.mesos: no such host
I0209 22:50:10.962467 733 ha.go:141] scheduler process entered fin stage
I0209 22:50:10.962495 733 service.go:623] exiting scheduler
I0209 22:50:10.999999 0 respawn.xx:0] sleeping 3s before respawning scheduler
W0209 22:50:13.990720 781 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:13.990895 781 service.go:438] prepared executor command "./km" with args '[minion --run-proxy=true --proxy-bindall=false --proxy-logv=1 --proxy-mode=userspace --path-override=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --max-log-size=10Mi --max-log-backups=5 --max-log-age=7 --api-servers=http://kubernetes.marathon.mesos:25503 --v=1 --allow-privileged=false --suicide-timeout=20m0s --mesos-launch-grace-period=5m0s --mesos-cgroup-prefix=/mesos --cadvisor-port=4194 --sync-frequency=10s --contain-pod-resources=false --cluster-dns=10.10.10.10 --cluster-domain=cluster.local]'
W0209 22:50:13.990968 781 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:13.991226 781 ha.go:132] scheduler process entered standby stage
I0209 22:50:13.991310 781 service.go:586] self-electing in non-HA mode
I0209 22:50:13.991327 781 service.go:575] Starting HTTP interface
I0209 22:50:13.991394 781 ha.go:154] scheduler process entered master stage
I0209 22:50:13.991421 781 service.go:804] performing deferred initialization
I0209 22:50:13.991437 781 framework.go:175] initializing kubernetes mesos scheduler
I0209 22:50:13.993625 781 framework.go:733] failed to recover pod registry, madness may ensue: Get http://kubernetes.marathon.mesos:25503/api/v1/pods: dial tcp: lookup kubernetes.marathon.mesos: no such host
E0209 22:50:13.993654 781 ha.go:157] failed to fetch scheduler driver: failed to initialize pod scheduler: Get http://kubernetes.marathon.mesos:25503/api/v1/pods: dial tcp: lookup kubernetes.marathon.mesos: no such host
I0209 22:50:13.993663 781 ha.go:141] scheduler process entered fin stage
I0209 22:50:13.993684 781 service.go:623] exiting scheduler
I0209 22:50:13.999999 0 respawn.xx:0] sleeping 3s before respawning scheduler
W0209 22:50:17.022193 827 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0209 22:50:17.022391 827 service.go:438] prepared executor command "./km" with args '[minion --run-proxy=true --proxy-bindall=false --proxy-logv=1 --proxy-mode=userspace --path-override=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --max-log-size=10Mi --max-log-backups=5 --max-log-age=7 --api-servers=http://kubernetes.marathon.mesos:25503 --v=1 --allow-privileged=false --suicide-timeout=20m0s --mesos-launch-grace-period=5m0s --mesos-cgroup-prefix=/mesos --cadvisor-port=4194 --sync-frequency=10s --contain-pod-resources=false --cluster-dns=10.10.10.10 --cluster-domain=cluster.local]' W0209 22:50:17.022479 827 service.go:506] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults. I0209 22:50:17.022661 827 ha.go:132] scheduler process entered standby stage I0209 22:50:17.022857 827 service.go:586] self-electing in non-HA mode I0209 22:50:17.022872 827 service.go:575] Starting HTTP interface I0209 22:50:17.022992 827 ha.go:154] scheduler process entered master stage I0209 22:50:17.023026 827 service.go:804] performing deferred initialization I0209 22:50:17.023055 827 framework.go:175] initializing kubernetes mesos scheduler I0209 22:50:17.025866 827 service.go:808] deferred init complete I0209 22:50:17.026512 827 service.go:938] did not find framework ID in etcd I0209 22:50:17.026533 827 service.go:815] constructing mesos scheduler driver I0209 22:50:17.028585 827 scheduler.go:210] found failover_timeout = 168h0m0s I0209 22:50:17.028664 827 scheduler.go:324] Initializing mesos scheduler driver I0209 22:50:17.028731 827 service.go:820] constructed mesos scheduler driver: &{
0xc208390000 0xc20833c360 1 0xc208390100 0xc20833c540 false [] 6.048e+14 false 0xc20833eab0 map[] map[] false false 0x4ba4d0 0xa7f030 0xc20833c3c0 {{0 0} 0 0 0 0} 0xa7ee10 0xc20833c420} I0209 22:50:17.028781 827 ha.go:161] starting driver... I0209 22:50:17.028787 827 scheduler.go:793] Starting the scheduler driver... I0209 22:50:17.028826 827 http_transporter.go:407] listening on 10.0.3.163 port 25501 I0209 22:50:17.028852 827 scheduler.go:810] Mesos scheduler driver started with PID=scheduler(1)@10.0.3.163:25501 I0209 22:50:17.028870 827 scheduler.go:822] starting master detector _zoo.MasterDetector: &{client: leaderNode: bootstrapLock:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} bootstrapFunc:0xd537d0 ignoreInstalled:0 minDetectorCyclePeriod:1000000000 done:0xc20833c4e0 cancel:0xd537c0} I0209 22:50:17.028931 827 ha.go:164] driver started successfully and is running I0209 22:50:17.035141 827 scheduler.go:375] New master master@10.0.7.42:5050 detected I0209 22:50:17.035166 827 scheduler.go:436] No credentials were provided. Attempting to register scheduler without authentication. 
I0209 22:50:17.035202 827 scheduler.go:929] Registering with master: master@10.0.7.42:5050 I0209 22:50:17.035310 827 scheduler.go:882] will retry registration in 1.302634817s if necessary I0209 22:50:17.036731 827 scheduler.go:536] Framework registered with ID=9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-0004 I0209 22:50:17.036921 827 framework.go:307] Scheduler registered with the master: &MasterInfo{Id:_9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1,Ip:_705101834,Port:_5050,Pid:_master@10.0.7.42:5050,Hostname:_10.0.7.42,Version:_0.25.0,Address:&Address{Hostname:_10.0.7.42,Ip:_10.0.7.42,Port:_5050,XXX_unrecognized:[],},XXX_unrecognized:[],} with frameworkId: &FrameworkID{Value:_9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-0004,XXX_unrecognized:[],} I0209 22:50:17.037004 827 framework.go:355] will perform implicit task reconciliation at interval: 5m0s after 15s I0209 22:50:17.037237 827 queuer.go:120] Watching for newly created pods I0209 22:50:17.037272 827 framework.go:680] explicit reconcile tasks I0209 22:50:17.049161 827 publish.go:115] setting endpoints for master service "k8sm-scheduler" to &api.Endpoints{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"k8sm-scheduler", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(_time.Location)(nil)}}, DeletionTimestamp:(_unversioned.Time)(nil), DeletionGracePeriodSeconds:(_int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, Subsets:[]api.EndpointSubset{api.EndpointSubset{Addresses:[]api.EndpointAddress{api.EndpointAddress{IP:"10.0.3.163", TargetRef:(*api.ObjectReference)(nil)}}, NotReadyAddresses:[]api.EndpointAddress(nil), Ports:[]api.EndpointPort{api.EndpointPort{Name:"", Port:25504, Protocol:"TCP"}}}}} I0209 22:50:18.038524 827 framework.go:680] explicit reconcile tasks I0209 22:50:18.338131 827 scheduler.go:911] skipping registration request: stopped=false, 
connected=true, authenticated=true I0209 22:50:27.213607 827 algorithm.go:79] Try to schedule pod kube-ui-v5-onp7z I0209 22:50:27.315554 827 algorithm.go:79] Try to schedule pod kube-dns-v9-rq2yw I0209 22:50:28.713134 827 framework.go:427] task status update "TASK_STARTING" from "SOURCE_EXECUTOR" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "dae48f455e7457bd_k8sm-executor" for reason "none" with message "create-binding-success" I0209 22:50:29.142239 827 framework.go:427] task status update "TASK_STARTING" from "SOURCE_EXECUTOR" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "dae48f455e7457bd_k8sm-executor" for reason "none" with message "create-binding-success" I0209 22:50:30.891760 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_EXECUTOR" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "dae48f455e7457bd_k8sm-executor" for reason "none" with message "pod-running:kube-dns-v9-rq2yw_kube-system" I0209 22:50:30.891804 827 registry.go:231] Received running status for pending task: pod.832be7ff-cf7f-11e5-b501-06b46e8c160d I0209 22:50:30.892015 827 registry.go:263] received pod status for task pod.832be7ff-cf7f-11e5-b501-06b46e8c160d: {Phase:Running Conditions:[{Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.0.3.166 PodIP:172.17.0.2 StartTime:2016-02-09 22:50:28 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting: Running:0xc20892dec0 Terminated: } LastTerminationState:{Waiting: Running: Terminated: } Ready:false RestartCount:0 Image:gcr.io/google_containers/etcd:2.0.9 ImageID:docker://b6b9a86dc06aa1361357ca1b105feba961f6a4145adca6c54e142c0be0fe87b0 ContainerID:docker://42ce5cab54522f44480557be5dc8871e612be1bcf5815f6065d21e7852453cdd} 
{Name:healthz State:{Waiting: Running:0xc20892df00 Terminated: } LastTerminationState:{Waiting: Running: Terminated: } Ready:false RestartCount:0 Image:gcr.io/google_containers/exechealthz:1.0 ImageID:docker://4f3d04b1d47b64834d494f9416d1f17a5f93a3e2035ad604fee47cfbba62be60 ContainerID:docker://f69c6eb05cc10dbc979b1aade850a137626e55369af21ee916a49c7155c65634} {Name:kube2sky State:{Waiting: Running:0xc20892df40 Terminated: } LastTerminationState:{Waiting: Running: Terminated: } Ready:false RestartCount:0 Image:gcr.io/google_containers/kube2sky:1.11 ImageID:docker://e52a547dca17cd83e8b6022e8ae1c1883d0855bce2d1c30071ffa0dcb8a8caf6 ContainerID:docker://6262befec1de52643c97ac8e20d43575514d94417903b80b2801f8418ba9838a} {Name:skydns State:{Waiting: Running:0xc20892df80 Terminated: } LastTerminationState:{Waiting: Running: Terminated: } Ready:false RestartCount:0 Image:gcr.io/google_containers/skydns:2015-10-13-8c72f8c ImageID:docker://763c92e53f311c40a922628a34daf0be4397463589a7d148cea8291f02c12a5d ContainerID:docker://10210f49cfcac64a3e71cd70f989cfc1c99a189f89ad6df13a978255ba930269}]} I0209 22:50:32.037380 827 tasksreconciler.go:123] implicit reconcile tasks I0209 22:50:32.038564 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 22:50:32.038769 827 framework.go:427] task status update "TASK_STARTING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 22:50:41.933562 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_EXECUTOR" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor 
"dae48f455e7457bd_k8sm-executor" for reason "none" with message "pod-running:kube-ui-v5-onp7z_kube-system" I0209 22:50:41.933607 827 registry.go:231] Received running status for pending task: pod.831c5735-cf7f-11e5-b501-06b46e8c160d I0209 22:50:41.933718 827 registry.go:263] received pod status for task pod.831c5735-cf7f-11e5-b501-06b46e8c160d: {Phase:Running Conditions:[{Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:0001-01-01 00:00:00 +0000 UTC Reason: Message:}] Message: Reason: HostIP:10.0.3.165 PodIP:172.17.0.1 StartTime:2016-02-09 22:50:30 +0000 UTC ContainerStatuses:[{Name:kube-ui State:{Waiting: Running:0xc2089926a0 Terminated: } LastTerminationState:{Waiting: Running: Terminated: } Ready:false RestartCount:0 Image:mesosphere/k8sm-ui:v5 ImageID:docker://b38ac7e043a3c3ddc69434b4c42bfcc0db75a7c60160bb0d958e6e2d05c17360 ContainerID:docker://c80815cf6415e34a470ab75f340c9c128cf3dd980e2c1355d4aa3ed2c6e126c0}]} I0209 22:55:32.037389 827 tasksreconciler.go:123] implicit reconcile tasks I0209 22:55:32.038602 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 22:55:32.038750 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:00:32.037576 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:00:32.038622 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest 
task state" I0209 23:00:32.038805 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:05:32.037754 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:05:32.038866 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:05:32.039026 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:10:32.037932 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:10:32.039012 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:10:32.039196 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:15:32.038094 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:15:32.039098 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message 
"Reconciliation: Latest task state" I0209 23:15:32.039206 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:20:32.038272 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:20:32.039217 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:20:32.039341 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:25:32.038544 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:25:32.039836 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:25:32.040023 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:30:32.038750 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:30:32.040001 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" 
with message "Reconciliation: Latest task state" I0209 23:30:32.040177 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:35:32.038933 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:35:32.039886 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:35:32.040080 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:40:32.039179 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:40:32.040581 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:40:32.040645 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:45:32.039421 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:45:32.040453 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason 
"REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:45:32.040570 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:50:32.039659 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:50:32.040724 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:50:32.040888 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:55:32.039847 827 tasksreconciler.go:123] implicit reconcile tasks I0209 23:55:32.040853 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0209 23:55:32.040975 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:00:32.040070 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:00:32.041252 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor 
"" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:00:32.041441 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:05:32.040242 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:05:32.041222 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:05:32.041415 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:10:32.040463 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:10:32.041493 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:10:32.041661 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:15:32.040646 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:15:32.041694 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave 
"9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:15:32.041830 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:20:32.040839 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:20:32.042165 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:20:32.042277 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:25:32.041097 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:25:32.042189 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:25:32.042362 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:30:32.041337 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:30:32.042644 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task 
"pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:30:32.042756 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:35:32.041554 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:35:32.042572 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:35:32.042789 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:40:32.041799 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:40:32.042853 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:40:32.042996 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:45:32.042021 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:45:32.043050 827 framework.go:427] task status update "TASK_RUNNING" from 
"SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:45:32.043173 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:50:32.042181 827 tasksreconciler.go:123] implicit reconcile tasks I0210 00:50:32.043163 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.832be7ff-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S4" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state" I0210 00:50:32.043392 827 framework.go:427] task status update "TASK_RUNNING" from "SOURCE_MASTER" for task "pod.831c5735-cf7f-11e5-b501-06b46e8c160d" on slave "9f70cf65-7576-47b5-9eb5-9b73ebe9aeb1-S5" executor "" for reason "REASON_RECONCILIATION" with message "Reconciliation: Latest task state"
also, the etcd stderr is full of these messages:
I0210 03:12:11.653925 8 healthcheck.go:66] Leader stats response:{"leader":"fc0ec01bc73f2a4d","followers":{"c236be3a29d4b382":{"latency":{"current":0.002069,"average":0.0028046851298702356,"standardDeviation":0.00182621317336912,"minimum":0.001557,"maximum":0.112434},"counts":{"fail":0,"success":400400}},"c66f0c7e51d8b212":{"latency":{"current":0.002243,"average":0.003137766455885183,"standardDeviation":0.0018811175909761862,"minimum":0.001531,"maximum":0.203037},"counts":{"fail":0,"success":401680}}}}
I0210 03:12:17.091848 8 scheduler.go:846] Skipping launch attempt for now.
W0210 03:12:24.478763 8 scheduler.go:608] launchChan is full!
@sttts why was this issue closed?
Sorry, not sure how that happened. Must have fat-fingered something.
I accidentally closed the ticket when I intended to cancel a comment, and immediately re-opened it. Perhaps you have a mechanism that prevents outside users from re-opening issues (kinda makes sense)? Anyway, anything I can try to get around this? From where I'm sitting, the recommended installation process is broken on a completely vanilla Mesosphere cluster in AWS. Things I tried:
All with the same results
If you look at the sandbox for the kubernetes task (the one that runs the scheduler, et al.) and check stderr, you'll probably see this:
I0212 01:20:12.191212 1653 exec.cpp:341] Executor received status update acknowledgement 618fc4e9-6267-4b2a-8329-a4676afee9a4 for task kubernetes.be048dbc-d126-11e5-82b9-067ed482922b of framework f71ca05f-11d6-4fa2-9ef3-cb472bca8c05-0000
ABORT: (/pkg/src/mesos/3rdparty/libprocess/src/subprocess.cpp:177): Failed to os::execvpe in childMain: No such file or directory
*** Aborted at 1455240012 (unix time) try "date -d @1455240012" if you are using GNU date ***
PC: @ 0x7f2ea9470f0b (unknown)
*** SIGABRT (@0x89d) received by PID 2205 (TID 0x7f2ea609c700) from PID 2205; stack trace: ***
@ 0x7f2ea97f9b40 (unknown)
@ 0x7f2ea9470f0b (unknown)
@ 0x7f2ea94724d1 (unknown)
@ 0x419bd2 _Abort()
@ 0x419c0c _Abort()
@ 0x7f2eab782c3b process::childMain()
@ 0x7f2eab784c6d std::_Function_handler<>::_M_invoke()
@ 0x7f2eab782adc process::defaultClone()
@ 0x7f2eab783972 process::subprocess()
@ 0x43ca2b mesos::internal::docker::DockerExecutorProcess::launchHealthCheck()
@ 0x7f2eab756af1 process::ProcessManager::resume()
@ 0x7f2eab756def process::internal::schedule()
@ 0x7f2ea9ff3d73 (unknown)
@ 0x7f2ea97f04cc (unknown)
@ 0x7f2ea95327fd (unknown)
... which illustrates a bug in this version (1.4) of DCOS CE. It should be fixed in the next version (1.5.x) of DCOS CE, which will land very soon. I'll update this bug report when it's available.
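Aside: if you want to quickly confirm you are hitting this same failure and not something else, you can scan the task's stderr for the execvpe abort signature. A minimal sketch (this helper is my own, not part of any DCOS or Mesos tooling; the sample line is copied from the trace above):

```python
import re

# Signature of the libprocess execvpe failure shown in the stack traces above.
ABORT_PATTERN = re.compile(
    r"Failed to os::execvpe in childMain: No such file or directory"
)

def hits_execvpe_bug(stderr_text: str) -> bool:
    """Return True if a sandbox stderr dump contains the execvpe abort."""
    return bool(ABORT_PATTERN.search(stderr_text))

# Sample line copied verbatim from the stderr dump above.
sample = ('ABORT: (/pkg/src/mesos/3rdparty/libprocess/src/subprocess.cpp:177): '
          'Failed to os::execvpe in childMain: No such file or directory')
print(hits_execvpe_bug(sample))  # True
```

If this returns False on your stderr, you are likely looking at a different problem than the 1.4 executor bug.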
yes, pretty much exactly that:
ABORT: (/pkg/src/mesos/3rdparty/libprocess/src/subprocess.cpp:177): Failed to os::execvpe in childMain: No such file or directory
*** Aborted at 1455243287 (unix time) try "date -d @1455243287" if you are using GNU date ***
PC: @ 0x7f4e0d701f0b (unknown)
*** SIGABRT (@0x7834) received by PID 30772 (TID 0x7f4e0932b700) from PID 30772; stack trace: ***
@ 0x7f4e0da8ab40 (unknown)
@ 0x7f4e0d701f0b (unknown)
@ 0x7f4e0d7034d1 (unknown)
@ 0x419bd2 _Abort()
@ 0x419c0c _Abort()
@ 0x7f4e0fa13c3b process::childMain()
@ 0x7f4e0fa15c6d std::_Function_handler<>::_M_invoke()
@ 0x7f4e0fa13adc process::defaultClone()
@ 0x7f4e0fa14972 process::subprocess()
@ 0x43ca2b mesos::internal::docker::DockerExecutorProcess::launchHealthCheck()
@ 0x7f4e0f9e7af1 process::ProcessManager::resume()
@ 0x7f4e0f9e7def process::internal::schedule()
@ 0x7f4e0e284d73 (unknown)
@ 0x7f4e0da814cc (unknown)
@ 0x7f4e0d7c37fd (unknown)
Is there anything I can do in the meantime? Revert to a previous version of DCOS or use an unstable build to test it?
... and DCOS CE 1.5.2 is now available. It should clear this problem up for you.
That was soon xD I'll install later tonight and close the issue if it's fixed. Thanks for the prompt response.
Incidentally, where can I find out about new releases, release notes, etc. for DCOS CE/EE? The Mesosphere blog reveals nothing.
The latest version fixed it. Thanks!
However, the main DCOS dashboard still says that it's running version 1.4 and not 1.5.x. Not sure where to report that, so putting it here.
Our DCOS team tells me that an email about the release will probably go out today or Monday. The emails are sent to people that have signed up for notifications through Intercom: https://docs.mesosphere.com/support/
oh, that's perfect. I'll join the Slack channel
I'm trying to install the k8s package (v0.7.2-v1.1.5-alpha) on a dcos cluster running mesos 0.25 following the documentation to the letter: https://docs.mesosphere.com/manage-service/kubernetes/
but it seems to be stuck in Marathon on the ScaleApplication action.
etcd installed successfully and is healthy.
Looking at Mesos, there are 3 different tasks running:
kube-ui-v5-onp7z.kube-system.pod
kube-dns-v9-rq2yw.kube-system.pod
kubernetes
And indeed, running "kubectl get pods" confirms that there are 2 pods running:
NAME               READY  STATUS   RESTARTS  AGE
kube-dns-v9-rq2yw  4/4    Running  0         1h
kube-ui-v5-onp7z   1/1    Running  0         1h
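(For what it's worth, checking that every pod in that output is fully ready can be done mechanically; here is a small helper I sketched myself, assuming plain whitespace-separated "kubectl get pods" output like the paste above:)

```python
def all_pods_ready(kubectl_output: str) -> bool:
    """Check that every pod in `kubectl get pods` output is Running
    with all containers ready (the READY column reads N/N)."""
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        name, ready, status, restarts, age = line.split()
        have, want = ready.split("/")
        if status != "Running" or have != want:
            return False
    return True

# Output as pasted above.
sample = """NAME               READY  STATUS   RESTARTS  AGE
kube-dns-v9-rq2yw  4/4    Running  0         1h
kube-ui-v5-onp7z   1/1    Running  0         1h"""
print(all_pods_ready(sample))  # True
```

So from the Kubernetes side, both system pods look healthy; the problem appears to be between Marathon and the k8s framework task.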
Looking at the kubernetes task's logs, I could not see any clear indication of a task failing.
However, the Marathon task is listed as "Deploying" and the k8s service is listed as "Idle" in the Mesosphere UI (see attached images). Additionally, the k8s UI does not load.
I've tried following the uninstallation guide and re-installing, only to arrive at the same result. I've also tried deleting the entire cluster and starting from scratch (although I did not think to validate whether the S3 bucket was deleted as part of that process), which also resulted in the same behaviour.