rahulwa closed this issue 6 years ago.
Can you show the output of kubectl version? Also, exec into the voyager operator pod, run voyager version, and share its output.
I0316 09:39:21.925556 92 http.go:97] [Running http server provider...]
I0316 09:39:21.931926 92 logs.go:19] FLAG: --alsologtostderr="false"
I0316 09:39:21.931939 92 logs.go:19] FLAG: --analytics="true"
I0316 09:39:21.931944 92 logs.go:19] FLAG: --help="false"
I0316 09:39:21.931952 92 logs.go:19] FLAG: --log.format="\"logger:stderr\""
I0316 09:39:21.931958 92 logs.go:19] FLAG: --log.level="\"info\""
I0316 09:39:21.931963 92 logs.go:19] FLAG: --log_backtrace_at=":0"
I0316 09:39:21.931968 92 logs.go:19] FLAG: --log_dir=""
I0316 09:39:21.931973 92 logs.go:19] FLAG: --logtostderr="false"
I0316 09:39:21.931978 92 logs.go:19] FLAG: --stderrthreshold="0"
I0316 09:39:21.931984 92 logs.go:19] FLAG: --v="0"
I0316 09:39:21.931989 92 logs.go:19] FLAG: --vmodule=""
Version = 6.0.0-rc.0
VersionStrategy = tag
Os = alpine
Arch = amd64
CommitHash = 807c8dec3eac532da76b7c496c7d4ccf1a9b7152
GitBranch = release-6.0
GitTag = 6.0.0-rc.0
CommitTimestamp = 2018-02-14T22:15:58
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:54Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
So, you are using 6.0.0-rc.0. Try the new 6.0.0 release and let me know if you still see the issue.
You need to update the chart cached on your machine: run helm repo update.
Detailed instructions are here: https://github.com/appscode/voyager/blob/master/docs/setup/install.md#using-helm
I upgraded to the latest release and again scaled the service up from 3 to 5 pods, but HAProxy is still only serving 3 of them.
kubectl exec voyager-splash-voyager-66d9d95f6f-zxpqv -- voyager version
I0316 09:56:30.859028 47 http.go:97] [Running http server provider...]
I0316 09:56:30.866404 47 logs.go:19] FLAG: --alsologtostderr="false"
I0316 09:56:30.866421 47 logs.go:19] FLAG: --analytics="true"
I0316 09:56:30.866433 47 logs.go:19] FLAG: --help="false"
I0316 09:56:30.866454 47 logs.go:19] FLAG: --log.format="\"logger:stderr\""
I0316 09:56:30.866463 47 logs.go:19] FLAG: --log.level="\"info\""
I0316 09:56:30.866476 47 logs.go:19] FLAG: --log_backtrace_at=":0"
I0316 09:56:30.866486 47 logs.go:19] FLAG: --log_dir=""
I0316 09:56:30.866493 47 logs.go:19] FLAG: --logtostderr="false"
I0316 09:56:30.866546 47 logs.go:19] FLAG: --short="false"
I0316 09:56:30.866557 47 logs.go:19] FLAG: --stderrthreshold="0"
I0316 09:56:30.866564 47 logs.go:19] FLAG: --v="0"
I0316 09:56:30.866570 47 logs.go:19] FLAG: --vmodule=""
Version = 6.0.0
VersionStrategy = tag
Os = alpine
Arch = amd64
CommitHash = c9ba7ed0319649d561fcf27ef935dfb0727cfa3e
GitBranch = release-6.0
GitTag = 6.0.0
CommitTimestamp = 2018-03-13T17:58:49
kubectl describe svc splash-service
Name: splash-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=splash,component=batman,product=scantool
Type: ClusterIP
IP: 100.70.218.46
Port: <unset> 8050/TCP
TargetPort: splash-http/TCP
Endpoints: 100.96.3.11:8050,100.96.4.23:8050,100.96.4.25:8050 + 2 more...
Session Affinity: None
Events: <none>
kubectl exec voyager-splash-ingress1-7b87b48d99-hkgj9 -- cat /etc/haproxy/haproxy.cfg
# HAProxy configuration generated by https://github.com/appscode/voyager
# DO NOT EDIT!
global
daemon
stats socket /tmp/haproxy
server-state-file global
server-state-base /var/state/haproxy/
# log using a syslog socket
log /dev/log local0 info
log /dev/log local0 notice
tune.ssl.default-dh-param 2048
ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
defaults
log global
# https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20abortonclose
# https://github.com/appscode/voyager/pull/403
option dontlognull
option http-server-close
# Timeout values
timeout client 50s
timeout client-fin 50s
timeout connect 50s
timeout server 50s
timeout tunnel 50s
# Configure error files
# default traffic mode is http
# mode is overwritten in case of tcp services
mode http
frontend http-0_0_0_0-80
bind *:80
mode http
option httplog
option forwardfor
acl is_proxy_https hdr(X-Forwarded-Proto) https
acl acl_: path_beg /
use_backend splash-service.default:8050 if acl_:
backend splash-service.default:8050
server pod-splash-deployment-77cbb5f85d-42zl8 100.96.3.11:8050
server pod-splash-deployment-77cbb5f85d-5hhsz 100.96.4.23:8050
server pod-splash-deployment-77cbb5f85d-b7pqt 100.96.5.21:8050
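The mismatch can be checked mechanically: the Service reports five endpoints (three listed plus two elided), while the rendered config lists only three servers. A minimal sketch of that check, with the server lines from the haproxy.cfg above reproduced inline (on a live cluster you would compare kubectl get endpoints against the pod's /etc/haproxy/haproxy.cfg):

```shell
# Backend server lines copied from the rendered haproxy.cfg above.
cfg='server pod-splash-deployment-77cbb5f85d-42zl8 100.96.3.11:8050
server pod-splash-deployment-77cbb5f85d-5hhsz 100.96.4.23:8050
server pod-splash-deployment-77cbb5f85d-b7pqt 100.96.5.21:8050'

# IPs HAProxy actually routes to; a healthy config has one per endpoint.
echo "$cfg" | awk '{print $3}' | cut -d: -f1 | sort

# Number of servers in the config: prints 3, yet the Service shows 5 endpoints.
echo "$cfg" | grep -c '^server '
```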
kubectl get cm voyager-splash-ingress1 -o yaml
apiVersion: v1
data:
haproxy.cfg: "# HAProxy configuration generated by https://github.com/appscode/voyager\n#
DO NOT EDIT!\nglobal\n\tdaemon\n\tstats socket /tmp/haproxy\n\tserver-state-file
global\n\tserver-state-base /var/state/haproxy/\n\t# log using a syslog socket\n\tlog
/dev/log local0 info\n\tlog /dev/log local0 notice\n\ttune.ssl.default-dh-param
2048\n\tssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK\ndefaults\n\tlog
global\n\t# https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-option%20abortonclose\n\t#
https://github.com/appscode/voyager/pull/403\n\toption dontlognull\n\toption http-server-close\n\t#
Timeout values\n\ttimeout client 50s\n\ttimeout client-fin 50s\n\ttimeout connect
50s\n\ttimeout server 50s\n\ttimeout tunnel 50s\n\t# Configure error files\n\t#
default traffic mode is http\n\t# mode is overwritten in case of tcp services\n\tmode
http\nfrontend http-0_0_0_0-80\n\tbind *:80 \n\tmode http\n\toption httplog\n\toption
forwardfor\n\tacl is_proxy_https hdr(X-Forwarded-Proto) https\n\tacl acl_: path_beg
/\n\tuse_backend splash-service.default:8050 if acl_:\nbackend splash-service.default:8050\n\tserver
pod-splash-deployment-77cbb5f85d-42zl8 100.96.3.11:8050 \n\tserver pod-splash-deployment-77cbb5f85d-5hhsz
100.96.4.23:8050 \n\tserver pod-splash-deployment-77cbb5f85d-b7pqt 100.96.5.21:8050
\ "
kind: ConfigMap
metadata:
annotations:
ingress.appscode.com/origin-api-schema: extension/v1beta1
ingress.appscode.com/origin-name: splash-ingress1
creationTimestamp: 2018-03-16T06:36:51Z
name: voyager-splash-ingress1
namespace: default
ownerReferences:
- apiVersion: extension/v1beta1
blockOwnerDeletion: true
kind: Ingress
name: splash-ingress1
uid: 694263c5-28e4-11e8-98af-064e22fbf1ce
resourceVersion: "830905"
selfLink: /api/v1/namespaces/default/configmaps/voyager-splash-ingress1
uid: 69444136-28e4-11e8-844d-0288c0e1a888
Please let me know how I can troubleshoot this.
Can you share the logs of the voyager operator pod (the pod that uses appscode/voyager:6.0.0 as the container image)?
kubectl logs voyager-splash-ingress1-7b87b48d99-hkgj9 -f
Syncing HAProxy controller ...
voyager haproxy-controller --init-only --analytics=true --burst=1000000 --cloud-provider=aws --ingress-api-version=extension/v1beta1 --ingress-name=splash-ingress1 --qps=1e+06 --reload-cmd=/etc/sv/haproxy/reload --logtostderr=false --alsologtostderr=false --v=3 --stderrthreshold=0
I0316 09:47:03.606944 8 http.go:97] [Running http server provider...]
I0316 09:47:03.614685 8 logs.go:19] FLAG: --alsologtostderr="false"
I0316 09:47:03.614702 8 logs.go:19] FLAG: --analytics="true"
I0316 09:47:03.614710 8 logs.go:19] FLAG: --burst="1000000"
I0316 09:47:03.614718 8 logs.go:19] FLAG: --cert-dir="/etc/ssl/private/haproxy"
I0316 09:47:03.614724 8 logs.go:19] FLAG: --cloud-provider="aws"
I0316 09:47:03.614730 8 logs.go:19] FLAG: --help="false"
I0316 09:47:03.614736 8 logs.go:19] FLAG: --ingress-api-version="extension/v1beta1"
I0316 09:47:03.614742 8 logs.go:19] FLAG: --ingress-name="splash-ingress1"
I0316 09:47:03.614748 8 logs.go:19] FLAG: --init-only="true"
I0316 09:47:03.614754 8 logs.go:19] FLAG: --kubeconfig=""
I0316 09:47:03.614763 8 logs.go:19] FLAG: --log.format="\"logger:stderr\""
I0316 09:47:03.614770 8 logs.go:19] FLAG: --log.level="\"info\""
I0316 09:47:03.614777 8 logs.go:19] FLAG: --log_backtrace_at=":0"
I0316 09:47:03.614784 8 logs.go:19] FLAG: --log_dir=""
I0316 09:47:03.614790 8 logs.go:19] FLAG: --logtostderr="false"
I0316 09:47:03.614795 8 logs.go:19] FLAG: --master=""
I0316 09:47:03.614804 8 logs.go:19] FLAG: --qps="1e+06"
I0316 09:47:03.614811 8 logs.go:19] FLAG: --reload-cmd="/etc/sv/haproxy/reload"
I0316 09:47:03.614819 8 logs.go:19] FLAG: --resync-period="10m0s"
I0316 09:47:03.614825 8 logs.go:19] FLAG: --stderrthreshold="0"
I0316 09:47:03.614831 8 logs.go:19] FLAG: --v="3"
I0316 09:47:03.614838 8 logs.go:19] FLAG: --vmodule=""
W0316 09:47:03.647110 8 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
Starting runit...
I0316 09:47:03.664938 8 metrics.go:17] [config changed: 1]
I0316 09:47:03.665045 8 metrics.go:22] [cert changed: 1]
listening on /dev/log, gid=65534, uid=65534, starting.
local0.notice: Mar 16 09:47:04 haproxy[32]: Proxy http-0_0_0_0-80 started.
local0.notice: Mar 16 09:47:04 haproxy[32]: Proxy http-0_0_0_0-80 started.
local0.notice: Mar 16 09:47:04 haproxy[32]: Proxy splash-service.default:8050 started.
local0.notice: Mar 16 09:47:04 haproxy[32]: Proxy splash-service.default:8050 started.
daemon.info: Mar 16 09:47:04 haproxy-controller: Starting HAProxy controller ...
daemon.info: Mar 16 09:47:04 haproxy-controller: exec voyager haproxy-controller --analytics=true --burst=1000000 --cloud-provider=aws --ingress-api-version=extension/v1beta1 --ingress-name=splash-ingress1 --qps=1e+06 --reload-cmd=/etc/sv/haproxy/reload --logtostderr=false --alsologtostderr=false --v=3 --stderrthreshold=0
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.742696 21 http.go:97] [Running http server provider...]
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750780 21 logs.go:19] FLAG: --alsologtostderr="false"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750796 21 logs.go:19] FLAG: --analytics="true"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750804 21 logs.go:19] FLAG: --burst="1000000"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750812 21 logs.go:19] FLAG: --cert-dir="/etc/ssl/private/haproxy"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750818 21 logs.go:19] FLAG: --cloud-provider="aws"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750823 21 logs.go:19] FLAG: --help="false"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750830 21 logs.go:19] FLAG: --ingress-api-version="extension/v1beta1"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750836 21 logs.go:19] FLAG: --ingress-name="splash-ingress1"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750842 21 logs.go:19] FLAG: --init-only="false"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750848 21 logs.go:19] FLAG: --kubeconfig=""
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750857 21 logs.go:19] FLAG: --log.format="\"logger:stderr\""
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750864 21 logs.go:19] FLAG: --log.level="\"info\""
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750871 21 logs.go:19] FLAG: --log_backtrace_at=":0"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750878 21 logs.go:19] FLAG: --log_dir=""
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750884 21 logs.go:19] FLAG: --logtostderr="false"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750889 21 logs.go:19] FLAG: --master=""
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750897 21 logs.go:19] FLAG: --qps="1e+06"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750903 21 logs.go:19] FLAG: --reload-cmd="/etc/sv/haproxy/reload"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750910 21 logs.go:19] FLAG: --resync-period="10m0s"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750916 21 logs.go:19] FLAG: --stderrthreshold="0"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750922 21 logs.go:19] FLAG: --v="3"
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.750927 21 logs.go:19] FLAG: --vmodule=""
daemon.err: Mar 16 09:47:04 haproxy-controller: W0316 09:47:04.780533 21 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.792225 21 controller.go:244] Starting haproxy-controller
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.792535 21 reflector.go:202] Starting reflector *v1beta1.Ingress (10m0s) from github.com/appscode/voyager/vendor/k8s.io/client-go/informers/factory.go:86
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.792889 21 reflector.go:240] Listing and watching *v1beta1.Ingress from github.com/appscode/voyager/vendor/k8s.io/client-go/informers/factory.go:86
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.793034 21 reflector.go:202] Starting reflector *v1.Secret (10m0s) from github.com/appscode/voyager/vendor/k8s.io/client-go/informers/factory.go:86
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.793293 21 reflector.go:240] Listing and watching *v1.Secret from github.com/appscode/voyager/vendor/k8s.io/client-go/informers/factory.go:86
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.792878 21 reflector.go:202] Starting reflector *v1.ConfigMap (10m0s) from github.com/appscode/voyager/vendor/k8s.io/client-go/informers/factory.go:86
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.793369 21 reflector.go:240] Listing and watching *v1.ConfigMap from github.com/appscode/voyager/vendor/k8s.io/client-go/informers/factory.go:86
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.792711 21 reflector.go:202] Starting reflector *v1beta1.Certificate (10m0s) from github.com/appscode/voyager/client/informers/externalversions/factory.go:74
daemon.err: Mar 16 09:47:04 haproxy-controller: I0316 09:47:04.793438 21 reflector.go:240] Listing and watching *v1beta1.Certificate from github.com/appscode/voyager/client/informers/externalversions/factory.go:74
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
daemon.info: Mar 16 09:47:04 haproxy-controller: Sync/Add/Update for Ingress splash-ingress1
kubectl logs voyager-splash-voyager-66d9d95f6f-zxpqv -f
I0316 10:26:25.974026 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.06094ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:26:30.984937 1 wrap.go:42] GET /healthz: (80.681µs) 200 [[kube-probe/1.9] 100.96.5.1:56164]
I0316 10:26:32.753385 1 wrap.go:42] GET /: (2.850169ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:26:35.284529 1 wrap.go:42] GET /swagger.json: (1.394809ms) 404 [[] 172.17.65.123:37998]
I0316 10:26:37.502186 1 wrap.go:42] GET /: (126.621µs) 403 [[Go-http-client/2.0] 172.17.62.91:42188]
I0316 10:26:40.071747 1 wrap.go:42] GET /swagger.json: (259.763µs) 404 [[] 172.17.62.91:42196]
I0316 10:26:40.985980 1 wrap.go:42] GET /healthz: (1.248997ms) 200 [[kube-probe/1.9] 100.96.5.1:56180]
I0316 10:26:50.576826 1 wrap.go:42] GET /: (5.744087ms) 403 [[Go-http-client/2.0] 172.17.102.138:44572]
I0316 10:26:50.985878 1 wrap.go:42] GET /healthz: (86.454µs) 200 [[kube-probe/1.9] 100.96.5.1:56196]
I0316 10:26:54.571209 1 services.go:41] Sync/Add/Update for Service voyager-splash-ingress1
I0316 10:26:54.571239 1 services.go:41] Sync/Add/Update for Service kube-dns
I0316 10:26:54.571531 1 services.go:41] Sync/Add/Update for Service splash-service
I0316 10:26:54.571744 1 services.go:41] Sync/Add/Update for Service voyager-splash-voyager
I0316 10:26:54.571841 1 services.go:41] Sync/Add/Update for Service voyager-splash-ingress
I0316 10:26:54.571932 1 services.go:41] Sync/Add/Update for Service kubernetes
I0316 10:26:54.572011 1 services.go:41] Sync/Add/Update for Service kubernetes-dashboard
I0316 10:26:54.572096 1 services.go:41] Sync/Add/Update for Service tiller-deploy
I0316 10:26:54.572205 1 services.go:83] Add/Delete/Update of backend service default/splash-service, Ingress default/splash-ingress re-queued for update
I0316 10:26:54.572217 1 services.go:83] Add/Delete/Update of backend service default/splash-service, Ingress default/splash-ingress1 re-queued for update
W0316 10:26:54.572229 1 ingress_crds.go:76] Engress default/splash-ingress does not exist anymore
W0316 10:26:54.572237 1 ingress_crds.go:76] Engress default/splash-ingress1 does not exist anymore
I0316 10:26:55.998770 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.437925ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:resourcequota-controller] 172.17.65.123:37998]
I0316 10:26:55.998826 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.037603ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:27:00.987839 1 wrap.go:42] GET /healthz: (2.957015ms) 200 [[kube-probe/1.9] 100.96.5.1:56212]
I0316 10:27:02.751745 1 wrap.go:42] GET /: (1.495453ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:27:07.502383 1 wrap.go:42] GET /: (125.4µs) 403 [[Go-http-client/2.0] 172.17.62.91:42188]
I0316 10:27:10.984850 1 wrap.go:42] GET /healthz: (101.017µs) 200 [[kube-probe/1.9] 100.96.5.1:56228]
I0316 10:27:16.568733 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (232.861µs) 200 [[kubectl/v1.9.3 (darwin/amd64) kubernetes/d283541] 172.17.65.123:37998]
I0316 10:27:20.578058 1 wrap.go:42] GET /: (4.191915ms) 403 [[Go-http-client/2.0] 172.17.102.138:44572]
I0316 10:27:20.986829 1 wrap.go:42] GET /healthz: (1.278525ms) 200 [[kube-probe/1.9] 100.96.5.1:56244]
I0316 10:27:23.096100 1 wrap.go:42] GET /swagger.json: (15.242229ms) 404 [[] 172.17.102.138:44578]
I0316 10:27:26.024361 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.564263ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:27:26.024569 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.361073ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:resourcequota-controller] 172.17.65.123:37998]
I0316 10:27:30.985589 1 wrap.go:42] GET /healthz: (131.483µs) 200 [[kube-probe/1.9] 100.96.5.1:56260]
I0316 10:27:32.755841 1 wrap.go:42] GET /: (2.927504ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:27:35.287355 1 wrap.go:42] GET /swagger.json: (1.408086ms) 404 [[] 172.17.65.123:37998]
I0316 10:27:37.511496 1 wrap.go:42] GET /: (129.251µs) 403 [[Go-http-client/2.0] 172.17.62.91:42188]
I0316 10:27:40.072893 1 wrap.go:42] GET /swagger.json: (221.579µs) 404 [[] 172.17.62.91:42196]
I0316 10:27:40.986659 1 wrap.go:42] GET /healthz: (1.276862ms) 200 [[kube-probe/1.9] 100.96.5.1:56276]
I0316 10:27:50.571150 1 wrap.go:42] GET /: (4.625375ms) 403 [[Go-http-client/2.0] 172.17.102.138:44572]
I0316 10:27:50.985335 1 wrap.go:42] GET /healthz: (87.166µs) 200 [[kube-probe/1.9] 100.96.5.1:56292]
I0316 10:27:56.053396 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (3.58293ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:resourcequota-controller] 172.17.65.123:37998]
I0316 10:27:56.053396 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (4.294278ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:28:00.988161 1 wrap.go:42] GET /healthz: (2.948506ms) 200 [[kube-probe/1.9] 100.96.5.1:56308]
I0316 10:28:02.744546 1 wrap.go:42] GET /: (1.3477ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:28:07.508657 1 wrap.go:42] GET /: (124.374µs) 403 [[Go-http-client/2.0] 172.17.62.91:42188]
I0316 10:28:10.985155 1 wrap.go:42] GET /healthz: (93.05µs) 200 [[kube-probe/1.9] 100.96.5.1:56324]
I0316 10:28:20.576922 1 wrap.go:42] GET /: (4.835255ms) 403 [[Go-http-client/2.0] 172.17.102.138:44572]
I0316 10:28:20.986335 1 wrap.go:42] GET /healthz: (1.305239ms) 200 [[kube-probe/1.9] 100.96.5.1:56340]
I0316 10:28:26.078716 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.413603ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:28:26.079271 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.165587ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:resourcequota-controller] 172.17.65.123:37998]
I0316 10:28:30.985062 1 wrap.go:42] GET /healthz: (94.719µs) 200 [[kube-probe/1.9] 100.96.5.1:56356]
I0316 10:28:32.735920 1 wrap.go:42] GET /: (2.740468ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:28:37.508099 1 wrap.go:42] GET /: (143.075µs) 403 [[Go-http-client/2.0] 172.17.62.91:42188]
I0316 10:28:40.986221 1 wrap.go:42] GET /healthz: (1.332988ms) 200 [[kube-probe/1.9] 100.96.5.1:56372]
I0316 10:28:50.573019 1 wrap.go:42] GET /: (1.532456ms) 403 [[Go-http-client/2.0] 172.17.102.138:44572]
I0316 10:28:50.984881 1 wrap.go:42] GET /healthz: (96.021µs) 200 [[kube-probe/1.9] 100.96.5.1:56388]
I0316 10:28:56.106031 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (3.558597ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:resourcequota-controller] 172.17.65.123:37998]
I0316 10:28:56.106031 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (3.651973ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:29:00.985975 1 wrap.go:42] GET /healthz: (1.230671ms) 200 [[kube-probe/1.9] 100.96.5.1:56404]
I0316 10:29:02.743796 1 wrap.go:42] GET /: (1.313498ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:29:07.517962 1 wrap.go:42] GET /: (125.764µs) 403 [[Go-http-client/2.0] 172.17.62.91:42188]
I0316 10:29:10.984735 1 wrap.go:42] GET /healthz: (85.461µs) 200 [[kube-probe/1.9] 100.96.5.1:56420]
I0316 10:29:20.582359 1 wrap.go:42] GET /: (4.934787ms) 403 [[Go-http-client/2.0] 172.17.102.138:44572]
I0316 10:29:20.986372 1 wrap.go:42] GET /healthz: (1.413141ms) 200 [[kube-probe/1.9] 100.96.5.1:56436]
I0316 10:29:23.098623 1 wrap.go:42] GET /swagger.json: (1.355543ms) 404 [[] 172.17.102.138:44578]
I0316 10:29:26.132093 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.376191ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:resourcequota-controller] 172.17.65.123:37998]
I0316 10:29:26.133145 1 wrap.go:42] GET /apis/admission.voyager.appscode.com/v1beta1: (1.118685ms) 200 [[kube-controller-manager/v1.9.3 (linux/amd64) kubernetes/d283541/system:serviceaccount:kube-system:generic-garbage-collector] 172.17.65.123:37998]
I0316 10:29:30.984780 1 wrap.go:42] GET /healthz: (89.139µs) 200 [[kube-probe/1.9] 100.96.5.1:56452]
I0316 10:29:32.749116 1 wrap.go:42] GET /: (3.015224ms) 403 [[Go-http-client/2.0] 172.17.65.123:37982]
I0316 10:29:35.290144 1 wrap.go:42] GET /swagger.json: (1.392577ms) 404 [[] 172.17.65.123:37998]
You have found a bug. As a workaround, change the
apiVersion: extensions/v1beta1
to
apiVersion: voyager.appscode.com/v1beta1
This should fix the issue.
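In manifest form, the workaround might look like the sketch below. Only the apiVersion line is the actual fix; the rule and backend fields are illustrative, filled in to match the splash-service Service from this thread:

```yaml
# Sketch: use the Voyager CRD API group instead of the built-in
# extensions group. Rule/backend values are illustrative.
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: splash-ingress1
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: splash-service
          servicePort: "8050"
```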
@rahulwa, I have retagged the 6.0.0 image. If you use imagePullPolicy: Always, you should get the updated image.
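In the pod spec that references the retagged image, that setting might look like this (container name and surrounding fields are illustrative; only the imagePullPolicy line is the suggested change):

```yaml
# Sketch: force a fresh pull of the retagged image on every pod start.
spec:
  containers:
  - name: voyager
    image: appscode/voyager:6.0.0
    imagePullPolicy: Always
```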
@tamalsaha Thank you very much for everything, especially for the prompt fix.
I am using voyager 6.0.0 and it is not updating the HAProxy configuration (controller) for scaled pods. As you can see, I have scaled the splash-service service from 1 to 3 pods, but voyager is routing requests to only 1 (the older one). I cannot find any relevant logs about this failure in the controller pod. I installed it through helm:
helm install --name splash-voyager stable/voyager --set rbac.create=true --set cloudProvider=aws
HA-Proxy version 1.7.9 2017/08/18