Closed: rubenhak closed this issue 4 years ago
Thanks for the report! Could you provide us with the kubectl logs
from the hubble-ui pod?
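For example (a minimal sketch, assuming the pod runs in the hubble namespace; substitute your actual pod name):
$ kubectl -n hubble logs <hubble-ui-pod-name>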
$ kubectl get cep --all-namespaces
NAMESPACE NAME ENDPOINT ID IDENTITY ID INGRESS ENFORCEMENT EGRESS ENFORCEMENT VISIBILITY POLICY ENDPOINT STATE IPV4 IPV6
addr gprod-addr-main-app-7799f79c9-vbkpq 229 30742 false false ready 10.8.2.70
addr gprod-addr-main-web-66f66876d-spbc6 1342 12997 false false ready 10.8.1.228
berlioz gprod-berlioz-main-ctlr-6664d965cf-rjqgd 1354 62716 false false ready 10.8.3.119
hubble hubble-7tc7r 1379 47720 false false ready 10.8.3.164
hubble hubble-gs28r 3166 47720 false false ready 10.8.2.209
hubble hubble-nz5ww 2102 47720 false false ready 10.8.0.251
hubble hubble-ui-6956f9fb48-r9d2d 3360 16054 false false ready 10.8.3.88
hubble hubble-z6m68 2470 47720 false false ready 10.8.1.95
kube-system kube-dns-79868f54c5-94fk7 1231 102 false false ready 10.8.2.195
kube-system kube-dns-79868f54c5-bgfsf 1492 102 false false ready 10.8.0.125
sprt gprod-sprt-main-dtrace-5f66947d67-lqwb2 3912 27592 false false ready 10.8.0.232
sprt gprod-sprt-main-grfna-5fb5786d56-w57n4 1454 34591 false false ready 10.8.2.83
sprt gprod-sprt-main-prmts-65665cc8d-8spkc 1261 55518 false false ready 10.8.1.20
$ kubectl logs hubble-ui-6956f9fb48-r9d2d --namespace hubble
> @ start /workspace/server
> node ./build/src/main.js
{"name":"frontend","hostname":"hubble-ui-6956f9fb48-r9d2d","pid":18,"level":30,"msg":"Starting Hubble UI ðŸ”","time":"2019-11-27T19:44:06.874Z","v":0}
{"name":"frontend","hostname":"hubble-ui-6956f9fb48-r9d2d","pid":18,"level":30,"msg":"Initialized DBs in 7 ms","time":"2019-11-27T19:44:06.885Z","v":0}
{"name":"frontend","hostname":"hubble-ui-6956f9fb48-r9d2d","pid":18,"level":30,"msg":"Listening on port 12000 keep-alive timeout 0 ms headers timeout 120000 ms","time":"2019-11-27T19:44:07.152Z","v":0}
Logs are identical across Hubble pods:
$ kubectl logs hubble-z6m68 --namespace hubble
{"level":"info","ts":1574883827.544142,"caller":"cmd/server.go:166","msg":"Started server with args","max-flows":131071,"duration":0}
{"level":"info","ts":1574883827.5445023,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"dns","status":""}
{"level":"info","ts":1574883827.5446415,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"drop","status":""}
{"level":"info","ts":1574883827.544724,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"tcp","status":""}
{"level":"info","ts":1574883827.5448017,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"flow","status":""}
{"level":"info","ts":1574883827.5448763,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"port-distribution","status":""}
{"level":"info","ts":1574883827.5449479,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"icmp","status":""}
{"level":"info","ts":1574883827.5450273,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"http","status":""}
Press Ctrl-C to quit
{"level":"info","ts":1574883827.548772,"caller":"cmd/server.go:286","msg":"Starting gRPC server on client-listener","client-listener":"0.0.0.0:50051"}
{"level":"info","ts":1574883827.5491214,"caller":"cmd/server.go:286","msg":"Starting gRPC server on client-listener","client-listener":"unix:///var/run/hubble.sock"}
Logs from Cilium on the egress node (the caller service is gprod-addr-main-web):
level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=daemon
level=info msg=" --access-log=''" subsys=daemon
level=info msg=" --agent-labels=''" subsys=daemon
level=info msg=" --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg=" --allow-localhost='auto'" subsys=daemon
level=info msg=" --annotate-k8s-node='true'" subsys=daemon
level=info msg=" --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg=" --auto-direct-node-routes='false'" subsys=daemon
level=info msg=" --aws-instance-limit-mapping='map[]'" subsys=daemon
level=info msg=" --blacklist-conflicting-routes='true'" subsys=daemon
level=info msg=" --bpf-compile-debug='false'" subsys=daemon
level=info msg=" --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg=" --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
level=info msg=" --bpf-nat-global-max='841429'" subsys=daemon
level=info msg=" --bpf-policy-map-max='16384'" subsys=daemon
level=info msg=" --bpf-root=''" subsys=daemon
level=info msg=" --cgroup-root=''" subsys=daemon
level=info msg=" --cluster-id='0'" subsys=daemon
level=info msg=" --cluster-name='default'" subsys=daemon
level=info msg=" --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg=" --cmdref=''" subsys=daemon
level=info msg=" --config=''" subsys=daemon
level=info msg=" --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg=" --conntrack-garbage-collector-interval='0'" subsys=daemon
level=info msg=" --conntrack-gc-interval='0s'" subsys=daemon
level=info msg=" --container-runtime=''" subsys=daemon
level=info msg=" --container-runtime-endpoint='map[]'" subsys=daemon
level=info msg=" --datapath-mode='veth'" subsys=daemon
level=info msg=" --debug='false'" subsys=daemon
level=info msg=" --debug-verbose=''" subsys=daemon
level=info msg=" --device='undefined'" subsys=daemon
level=info msg=" --disable-cnp-status-updates='false'" subsys=daemon
level=info msg=" --disable-conntrack='false'" subsys=daemon
level=info msg=" --disable-endpoint-crd='false'" subsys=daemon
level=info msg=" --disable-envoy-version-check='false'" subsys=daemon
level=info msg=" --disable-ipv4='false'" subsys=daemon
level=info msg=" --disable-k8s-services='false'" subsys=daemon
level=info msg=" --egress-masquerade-interfaces=''" subsys=daemon
level=info msg=" --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg=" --enable-endpoint-routes='false'" subsys=daemon
level=info msg=" --enable-health-checking='true'" subsys=daemon
level=info msg=" --enable-host-reachable-services='false'" subsys=daemon
level=info msg=" --enable-ipsec='false'" subsys=daemon
level=info msg=" --enable-ipv4='true'" subsys=daemon
level=info msg=" --enable-ipv6='false'" subsys=daemon
level=info msg=" --enable-k8s-event-handover='false'" subsys=daemon
level=info msg=" --enable-k8s-external-ips='true'" subsys=daemon
level=info msg=" --enable-l7-proxy='true'" subsys=daemon
level=info msg=" --enable-legacy-services='false'" subsys=daemon
level=info msg=" --enable-local-node-route='true'" subsys=daemon
level=info msg=" --enable-node-port='false'" subsys=daemon
level=info msg=" --enable-policy='default'" subsys=daemon
level=info msg=" --enable-selective-regeneration='true'" subsys=daemon
level=info msg=" --enable-tracing='false'" subsys=daemon
level=info msg=" --encrypt-interface=''" subsys=daemon
level=info msg=" --encrypt-node='false'" subsys=daemon
level=info msg=" --endpoint-interface-name-prefix='lxc+'" subsys=daemon
level=info msg=" --endpoint-queue-size='25'" subsys=daemon
level=info msg=" --envoy-log=''" subsys=daemon
level=info msg=" --exclude-local-address=''" subsys=daemon
level=info msg=" --fixed-identity-mapping='map[]'" subsys=daemon
level=info msg=" --flannel-manage-existing-containers='false'" subsys=daemon
level=info msg=" --flannel-master-device=''" subsys=daemon
level=info msg=" --flannel-uninstall-on-exit='false'" subsys=daemon
level=info msg=" --force-local-policy-eval-at-source='true'" subsys=daemon
level=info msg=" --host-reachable-services-protos=''" subsys=daemon
level=info msg=" --http-403-msg=''" subsys=daemon
level=info msg=" --http-idle-timeout='0'" subsys=daemon
level=info msg=" --http-max-grpc-timeout='0'" subsys=daemon
level=info msg=" --http-request-timeout='3600'" subsys=daemon
level=info msg=" --http-retry-count='3'" subsys=daemon
level=info msg=" --http-retry-timeout='0'" subsys=daemon
level=info msg=" --identity-allocation-mode='crd'" subsys=daemon
level=info msg=" --identity-change-grace-period='5s'" subsys=daemon
level=info msg=" --install-iptables-rules='true'" subsys=daemon
level=info msg=" --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg=" --ipam=''" subsys=daemon
level=info msg=" --ipsec-key-file=''" subsys=daemon
level=info msg=" --ipv4-cluster-cidr-mask-size='8'" subsys=daemon
level=info msg=" --ipv4-node='auto'" subsys=daemon
level=info msg=" --ipv4-pod-subnets=''" subsys=daemon
level=info msg=" --ipv4-range='auto'" subsys=daemon
level=info msg=" --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg=" --ipv4-service-range='auto'" subsys=daemon
level=info msg=" --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg=" --ipv6-node='auto'" subsys=daemon
level=info msg=" --ipv6-pod-subnets=''" subsys=daemon
level=info msg=" --ipv6-range='auto'" subsys=daemon
level=info msg=" --ipv6-service-range='auto'" subsys=daemon
level=info msg=" --ipvlan-master-device='undefined'" subsys=daemon
level=info msg=" --k8s-api-server=''" subsys=daemon
level=info msg=" --k8s-force-json-patch='false'" subsys=daemon
level=info msg=" --k8s-kubeconfig-path=''" subsys=daemon
level=info msg=" --k8s-namespace='cilium'" subsys=daemon
level=info msg=" --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-service-cache-size='128'" subsys=daemon
level=info msg=" --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg=" --k8s-watcher-queue-size='1024'" subsys=daemon
level=info msg=" --keep-bpf-templates='false'" subsys=daemon
level=info msg=" --keep-config='false'" subsys=daemon
level=info msg=" --kvstore=''" subsys=daemon
level=info msg=" --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg=" --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg=" --kvstore-opt='map[]'" subsys=daemon
level=info msg=" --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg=" --label-prefix-file=''" subsys=daemon
level=info msg=" --labels=''" subsys=daemon
level=info msg=" --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg=" --log-driver=''" subsys=daemon
level=info msg=" --log-opt='map[level:info]'" subsys=daemon
level=info msg=" --log-system-load='false'" subsys=daemon
level=info msg=" --masquerade='true'" subsys=daemon
level=info msg=" --max-controller-interval='0'" subsys=daemon
level=info msg=" --metrics=''" subsys=daemon
level=info msg=" --monitor-aggregation='medium'" subsys=daemon
level=info msg=" --monitor-aggregation-flags='all'" subsys=daemon
level=info msg=" --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg=" --monitor-queue-size='0'" subsys=daemon
level=info msg=" --mtu='0'" subsys=daemon
level=info msg=" --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
level=info msg=" --node-port-range=''" subsys=daemon
level=info msg=" --policy-queue-size='100'" subsys=daemon
level=info msg=" --policy-trigger-interval='1s'" subsys=daemon
level=info msg=" --pprof='false'" subsys=daemon
level=info msg=" --preallocate-bpf-maps='false'" subsys=daemon
level=info msg=" --prefilter-device='undefined'" subsys=daemon
level=info msg=" --prefilter-mode='native'" subsys=daemon
level=info msg=" --prepend-iptables-chains='true'" subsys=daemon
level=info msg=" --prometheus-serve-addr=''" subsys=daemon
level=info msg=" --proxy-connect-timeout='1'" subsys=daemon
level=info msg=" --read-cni-conf=''" subsys=daemon
level=info msg=" --restore='true'" subsys=daemon
level=info msg=" --sidecar-http-proxy='false'" subsys=daemon
level=info msg=" --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg=" --single-cluster-route='false'" subsys=daemon
level=info msg=" --skip-crd-creation='false'" subsys=daemon
level=info msg=" --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg=" --sockops-enable='false'" subsys=daemon
level=info msg=" --state-dir='/var/run/cilium'" subsys=daemon
level=info msg=" --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg=" --tofqdns-enable-poller='false'" subsys=daemon
level=info msg=" --tofqdns-enable-poller-events='true'" subsys=daemon
level=info msg=" --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg=" --tofqdns-min-ttl='0'" subsys=daemon
level=info msg=" --tofqdns-pre-cache=''" subsys=daemon
level=info msg=" --tofqdns-proxy-port='0'" subsys=daemon
level=info msg=" --tofqdns-proxy-response-max-delay='50ms'" subsys=daemon
level=info msg=" --trace-payloadlen='128'" subsys=daemon
level=info msg=" --tunnel='vxlan'" subsys=daemon
level=info msg=" --version='false'" subsys=daemon
level=info msg=" --write-cni-conf-when-ready=''" subsys=daemon
level=info msg=" _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="| _| | | | | | |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.6.90 ca7f68526 2019-11-19T08:00:31-08:00 go version go1.13.4 linux/amd64" subsys=daemon
level=info msg="cilium-envoy version: 7f6cab51ea2f4692a3e1067e1060f42818324bc2/1.12.1/Modified/RELEASE/BoringSSL" subsys=daemon
level=info msg="clang (7.0.0) and kernel (4.14.138) versions: OK!" subsys=linux-datapath
level=info msg="linking environment: OK!" subsys=linux-datapath
level=info msg="bpf_requirements check: OK!" subsys=linux-datapath
level=info msg="bpf_features check: OK!" subsys=linux-datapath
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Valid label prefix configuration:" subsys=labels-filter
level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
level=info msg=" - :app.kubernetes.io" subsys=labels-filter
level=info msg=" - !:io.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes.io" subsys=labels-filter
level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
level=info msg=" - !:k8s.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg="Initializing daemon" subsys=daemon
level=warning msg="xt_socket kernel module could not be loaded" error="could not load module xt_socket: exit status 1" subsys=iptables
level=info msg="Detected MTU 1460" subsys=mtu
level=info msg="Restored services from maps" failed=0 restored=0 subsys=service
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Establishing connection to apiserver" host="https://10.0.0.1:443" subsys=k8s
level=info msg="Connected to apiserver" subsys=k8s
level=info msg="Retrieved node information from kubernetes" nodeName=gke-gprod-uswest1c-default-pool-3a82d9d8-r9zn subsys=k8s
level=info msg="Received own node information from API server" ipAddr.ipv4=10.138.0.37 ipAddr.ipv6="<nil>" nodeName=gke-gprod-uswest1c-default-pool-3a82d9d8-r9zn subsys=k8s v4Prefix=10.8.1.0/24 v6Prefix="<nil>"
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=CiliumNetworkPolicy/v2 subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s
level=info msg="Updating CRD (CustomResourceDefinition)..." name=v2.CiliumEndpoint subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumEndpoint subsys=k8s
level=info msg="Updating CRD (CustomResourceDefinition)..." name=v2.CiliumNode subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumNode subsys=k8s
level=info msg="Updating CRD (CustomResourceDefinition)..." name=v2.CiliumIdentity subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumIdentity subsys=k8s
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing hostscope IPAM" subsys=ipam v4Prefix=10.8.1.0/24 v6Prefix="<nil>"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="No old endpoints found." subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg=" Cluster-Name: default" subsys=daemon
level=info msg=" Cluster-ID: 0" subsys=daemon
level=info msg=" Local node-name: gke-gprod-uswest1c-default-pool-3a82d9d8-r9zn" subsys=daemon
level=info msg=" Node-IPv6: <nil>" subsys=daemon
level=info msg=" External-Node IPv4: 10.138.0.37" subsys=daemon
level=info msg=" Internal-Node IPv4: 10.8.1.229" subsys=daemon
level=info msg=" Cluster IPv4 prefix: 10.0.0.0/8" subsys=daemon
level=info msg=" IPv4 allocation prefix: 10.8.1.0/24" subsys=daemon
level=info msg=" Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg=" Local IPv4 addresses:" subsys=daemon
level=info msg=" - 10.138.0.37" subsys=daemon
level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=10.8.1.229 v4Prefix=10.8.1.0/24 v4healthIP.IPv4=10.8.1.130 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
level=info msg="Adding local node to cluster" subsys=nodediscovery
level=info msg="Setting up base BPF datapath" subsys=datapath-loader
level=info msg="Setting sysctl net.core.bpf_jit_enable=1" subsys=datapath-loader
level=info msg="Setting sysctl net.ipv4.conf.all.rp_filter=0" subsys=datapath-loader
level=info msg="Setting sysctl kernel.unprivileged_bpf_disabled=1" subsys=datapath-loader
level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
level=info msg="Blacklisting local route as no-alloc" route=10.138.0.1/32 subsys=ipam
level=info msg="Blacklisting local route as no-alloc" route=169.254.123.0/24 subsys=ipam
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -F CILIUM_INPUT]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -X CILIUM_INPUT]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -F CILIUM_OUTPUT]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -X CILIUM_OUTPUT]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -F CILIUM_OUTPUT_raw]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -X CILIUM_OUTPUT_raw]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -F CILIUM_POST_nat]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -X CILIUM_POST_nat]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -F CILIUM_OUTPUT_nat]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -X CILIUM_OUTPUT_nat]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -F CILIUM_PRE_nat]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -X CILIUM_PRE_nat]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -F CILIUM_POST_mangle]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -X CILIUM_POST_mangle]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -F CILIUM_PRE_mangle]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -X CILIUM_PRE_mangle]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -F CILIUM_PRE_raw]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -X CILIUM_PRE_raw]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -F CILIUM_FORWARD]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -X CILIUM_FORWARD]" subsys=iptables
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Starting IP identity watcher" subsys=ipcache
level=info msg="Validating configured node address ranges" subsys=daemon
level=info msg="Starting connection tracking garbage collector" subsys=daemon
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
level=info msg="Launching Cilium health daemon" subsys=daemon
level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
level=info msg="Launching Cilium health endpoint" subsys=daemon
level=info msg="Initializing Cilium API" subsys=daemon
level=info msg="Daemon initialization completed" bootstrapTime=10.610385416s subsys=daemon
level=info msg="Serving cilium at unix:///var/run/cilium/cilium.sock" subsys=daemon
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 ipv4=10.8.1.130 ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 identityLabels="reserved:health" ipv4=10.8.1.130 ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3314 identity=4 identityLabels="reserved:health" ipv4=10.8.1.130 ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Compiled new BPF template" BPFCompilationTime=733.108969ms file-path=/var/run/cilium/state/templates/17b6f779487c779f289dec34fd5bff683551cc83/bpf_lxc.o subsys=datapath-loader
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3314 identity=4 ipv4=10.8.1.130 ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Serving cilium health at unix:///var/run/cilium/health.sock" subsys=health-server
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="New endpoint" containerID=af64e878a1 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2470 ipv4=10.8.1.95 ipv6= k8sPodName=hubble/hubble-z6m68 subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=af64e878a1 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2470 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=hubble,k8s:io.kubernetes.pod.namespace=hubble,k8s:k8s-app=hubble" ipv4=10.8.1.95 ipv6= k8sPodName=hubble/hubble-z6m68 subsys=endpoint
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=hubble;k8s:io.kubernetes.pod.namespace=hubble;k8s:k8s-app=hubble;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID=af64e878a1 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2470 identity=47720 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=hubble,k8s:io.kubernetes.pod.namespace=hubble,k8s:k8s-app=hubble" ipv4=10.8.1.95 ipv6= k8sPodName=hubble/hubble-z6m68 oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=af64e878a1 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=2470 identity=47720 ipv4=10.8.1.95 ipv6= k8sPodName=hubble/hubble-z6m68 subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=af64e878a1 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=2470 identity=47720 ipv4=10.8.1.95 ipv6= k8sPodName=hubble/hubble-z6m68 subsys=endpoint
level=info msg="Successful endpoint creation" containerID=af64e878a1 datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=2470 identity=47720 ipv4=10.8.1.95 ipv6= k8sPodName=hubble/hubble-z6m68 subsys=daemon
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Beginning to read perf buffer" startTime="2019-11-27 19:43:47.547789223 +0000 UTC m=+42.749612650" subsys=monitor-agent
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=6.103515625e-05 newInterval=7m30s subsys=map-ct
level=info msg="New endpoint" containerID=ea6ec2c81e datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1261 ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=ea6ec2c81e datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1261 identityLabels="k8s:berlioz_managed=true,k8s:cluster=sprt,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=sprt,k8s:name=gprod-sprt-main-prmts,k8s:sector=main,k8s:service=prmts" ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=warning msg="Key allocation attempt failed" attempt=0 error="slave key creation failed 'k8s:berlioz_managed=true;k8s:cluster=sprt;k8s:deployment=gprod;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=sprt;k8s:name=gprod-sprt-main-prmts;k8s:sector=main;k8s:service=prmts;': identity does not exist" key="[k8s:berlioz_managed=true k8s:cluster=sprt k8s:deployment=gprod k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=sprt k8s:name=gprod-sprt-main-prmts k8s:sector=main k8s:service=prmts]" subsys=allocator
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Invalid state transition skipped" containerID=ea6ec2c81e datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1261 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc line=448 subsys=endpoint
level=info msg="Reusing existing global key" key="k8s:berlioz_managed=true;k8s:cluster=sprt;k8s:deployment=gprod;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=sprt;k8s:name=gprod-sprt-main-prmts;k8s:sector=main;k8s:service=prmts;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID=ea6ec2c81e datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1261 identity=55518 identityLabels="k8s:berlioz_managed=true,k8s:cluster=sprt,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=sprt,k8s:name=gprod-sprt-main-prmts,k8s:sector=main,k8s:service=prmts" ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=ea6ec2c81e datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1261 identity=55518 ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=ea6ec2c81e datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1261 identity=55518 ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc subsys=endpoint
level=info msg="Successful endpoint creation" containerID=ea6ec2c81e datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1261 identity=55518 ipv4=10.8.1.20 ipv6= k8sPodName=sprt/gprod-sprt-main-prmts-65665cc8d-8spkc subsys=daemon
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="regenerating all endpoints" reason= subsys=endpoint-manager
level=info msg="New endpoint" containerID=f2863896f6 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1342 ipv4=10.8.1.228 ipv6= k8sPodName=addr/gprod-addr-main-web-66f66876d-spbc6 subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=f2863896f6 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1342 identityLabels="k8s:berlioz_managed=true,k8s:cluster=addr,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=addr,k8s:name=gprod-addr-main-web,k8s:sector=main,k8s:service=web" ipv4=10.8.1.228 ipv6= k8sPodName=addr/gprod-addr-main-web-66f66876d-spbc6 subsys=endpoint
level=info msg="Reusing existing global key" key="k8s:berlioz_managed=true;k8s:cluster=addr;k8s:deployment=gprod;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=addr;k8s:name=gprod-addr-main-web;k8s:sector=main;k8s:service=web;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID=f2863896f6 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1342 identity=12997 identityLabels="k8s:berlioz_managed=true,k8s:cluster=addr,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=addr,k8s:name=gprod-addr-main-web,k8s:sector=main,k8s:service=web" ipv4=10.8.1.228 ipv6= k8sPodName=addr/gprod-addr-main-web-66f66876d-spbc6 oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=f2863896f6 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1342 identity=12997 ipv4=10.8.1.228 ipv6= k8sPodName=addr/gprod-addr-main-web-66f66876d-spbc6 subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=f2863896f6 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1342 identity=12997 ipv4=10.8.1.228 ipv6= k8sPodName=addr/gprod-addr-main-web-66f66876d-spbc6 subsys=endpoint
level=info msg="Successful endpoint creation" containerID=f2863896f6 datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1342 identity=12997 ipv4=10.8.1.228 ipv6= k8sPodName=addr/gprod-addr-main-web-66f66876d-spbc6 subsys=daemon
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0005626678466796875 newInterval=11m15s subsys=map-ct
Logs from Cilium on the ingress node (the responder service is gprod-addr-main-app):
level=info msg="Skipped reading configuration file" reason="Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=daemon
level=info msg=" --access-log=''" subsys=daemon
level=info msg=" --agent-labels=''" subsys=daemon
level=info msg=" --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg=" --allow-localhost='auto'" subsys=daemon
level=info msg=" --annotate-k8s-node='true'" subsys=daemon
level=info msg=" --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg=" --auto-direct-node-routes='false'" subsys=daemon
level=info msg=" --aws-instance-limit-mapping='map[]'" subsys=daemon
level=info msg=" --blacklist-conflicting-routes='true'" subsys=daemon
level=info msg=" --bpf-compile-debug='false'" subsys=daemon
level=info msg=" --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg=" --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp='6h0m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-tcp='6h0m0s'" subsys=daemon
level=info msg=" --bpf-nat-global-max='841429'" subsys=daemon
level=info msg=" --bpf-policy-map-max='16384'" subsys=daemon
level=info msg=" --bpf-root=''" subsys=daemon
level=info msg=" --cgroup-root=''" subsys=daemon
level=info msg=" --cluster-id='0'" subsys=daemon
level=info msg=" --cluster-name='default'" subsys=daemon
level=info msg=" --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg=" --cmdref=''" subsys=daemon
level=info msg=" --config=''" subsys=daemon
level=info msg=" --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg=" --conntrack-garbage-collector-interval='0'" subsys=daemon
level=info msg=" --conntrack-gc-interval='0s'" subsys=daemon
level=info msg=" --container-runtime=''" subsys=daemon
level=info msg=" --container-runtime-endpoint='map[]'" subsys=daemon
level=info msg=" --datapath-mode='veth'" subsys=daemon
level=info msg=" --debug='false'" subsys=daemon
level=info msg=" --debug-verbose=''" subsys=daemon
level=info msg=" --device='undefined'" subsys=daemon
level=info msg=" --disable-cnp-status-updates='false'" subsys=daemon
level=info msg=" --disable-conntrack='false'" subsys=daemon
level=info msg=" --disable-endpoint-crd='false'" subsys=daemon
level=info msg=" --disable-envoy-version-check='false'" subsys=daemon
level=info msg=" --disable-ipv4='false'" subsys=daemon
level=info msg=" --disable-k8s-services='false'" subsys=daemon
level=info msg=" --egress-masquerade-interfaces=''" subsys=daemon
level=info msg=" --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg=" --enable-endpoint-routes='false'" subsys=daemon
level=info msg=" --enable-health-checking='true'" subsys=daemon
level=info msg=" --enable-host-reachable-services='false'" subsys=daemon
level=info msg=" --enable-ipsec='false'" subsys=daemon
level=info msg=" --enable-ipv4='true'" subsys=daemon
level=info msg=" --enable-ipv6='false'" subsys=daemon
level=info msg=" --enable-k8s-event-handover='false'" subsys=daemon
level=info msg=" --enable-k8s-external-ips='true'" subsys=daemon
level=info msg=" --enable-l7-proxy='true'" subsys=daemon
level=info msg=" --enable-legacy-services='false'" subsys=daemon
level=info msg=" --enable-local-node-route='true'" subsys=daemon
level=info msg=" --enable-node-port='false'" subsys=daemon
level=info msg=" --enable-policy='default'" subsys=daemon
level=info msg=" --enable-selective-regeneration='true'" subsys=daemon
level=info msg=" --enable-tracing='false'" subsys=daemon
level=info msg=" --encrypt-interface=''" subsys=daemon
level=info msg=" --encrypt-node='false'" subsys=daemon
level=info msg=" --endpoint-interface-name-prefix='lxc+'" subsys=daemon
level=info msg=" --endpoint-queue-size='25'" subsys=daemon
level=info msg=" --envoy-log=''" subsys=daemon
level=info msg=" --exclude-local-address=''" subsys=daemon
level=info msg=" --fixed-identity-mapping='map[]'" subsys=daemon
level=info msg=" --flannel-manage-existing-containers='false'" subsys=daemon
level=info msg=" --flannel-master-device=''" subsys=daemon
level=info msg=" --flannel-uninstall-on-exit='false'" subsys=daemon
level=info msg=" --force-local-policy-eval-at-source='true'" subsys=daemon
level=info msg=" --host-reachable-services-protos=''" subsys=daemon
level=info msg=" --http-403-msg=''" subsys=daemon
level=info msg=" --http-idle-timeout='0'" subsys=daemon
level=info msg=" --http-max-grpc-timeout='0'" subsys=daemon
level=info msg=" --http-request-timeout='3600'" subsys=daemon
level=info msg=" --http-retry-count='3'" subsys=daemon
level=info msg=" --http-retry-timeout='0'" subsys=daemon
level=info msg=" --identity-allocation-mode='crd'" subsys=daemon
level=info msg=" --identity-change-grace-period='5s'" subsys=daemon
level=info msg=" --install-iptables-rules='true'" subsys=daemon
level=info msg=" --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg=" --ipam=''" subsys=daemon
level=info msg=" --ipsec-key-file=''" subsys=daemon
level=info msg=" --ipv4-cluster-cidr-mask-size='8'" subsys=daemon
level=info msg=" --ipv4-node='auto'" subsys=daemon
level=info msg=" --ipv4-pod-subnets=''" subsys=daemon
level=info msg=" --ipv4-range='auto'" subsys=daemon
level=info msg=" --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg=" --ipv4-service-range='auto'" subsys=daemon
level=info msg=" --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg=" --ipv6-node='auto'" subsys=daemon
level=info msg=" --ipv6-pod-subnets=''" subsys=daemon
level=info msg=" --ipv6-range='auto'" subsys=daemon
level=info msg=" --ipv6-service-range='auto'" subsys=daemon
level=info msg=" --ipvlan-master-device='undefined'" subsys=daemon
level=info msg=" --k8s-api-server=''" subsys=daemon
level=info msg=" --k8s-force-json-patch='false'" subsys=daemon
level=info msg=" --k8s-kubeconfig-path=''" subsys=daemon
level=info msg=" --k8s-namespace='cilium'" subsys=daemon
level=info msg=" --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-service-cache-size='128'" subsys=daemon
level=info msg=" --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg=" --k8s-watcher-queue-size='1024'" subsys=daemon
level=info msg=" --keep-bpf-templates='false'" subsys=daemon
level=info msg=" --keep-config='false'" subsys=daemon
level=info msg=" --kvstore=''" subsys=daemon
level=info msg=" --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg=" --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg=" --kvstore-opt='map[]'" subsys=daemon
level=info msg=" --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg=" --label-prefix-file=''" subsys=daemon
level=info msg=" --labels=''" subsys=daemon
level=info msg=" --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg=" --log-driver=''" subsys=daemon
level=info msg=" --log-opt='map[level:info]'" subsys=daemon
level=info msg=" --log-system-load='false'" subsys=daemon
level=info msg=" --masquerade='true'" subsys=daemon
level=info msg=" --max-controller-interval='0'" subsys=daemon
level=info msg=" --metrics=''" subsys=daemon
level=info msg=" --monitor-aggregation='medium'" subsys=daemon
level=info msg=" --monitor-aggregation-flags='all'" subsys=daemon
level=info msg=" --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg=" --monitor-queue-size='0'" subsys=daemon
level=info msg=" --mtu='0'" subsys=daemon
level=info msg=" --nat46-range='0:0:0:0:0:FFFF::/96'" subsys=daemon
level=info msg=" --node-port-range=''" subsys=daemon
level=info msg=" --policy-queue-size='100'" subsys=daemon
level=info msg=" --policy-trigger-interval='1s'" subsys=daemon
level=info msg=" --pprof='false'" subsys=daemon
level=info msg=" --preallocate-bpf-maps='false'" subsys=daemon
level=info msg=" --prefilter-device='undefined'" subsys=daemon
level=info msg=" --prefilter-mode='native'" subsys=daemon
level=info msg=" --prepend-iptables-chains='true'" subsys=daemon
level=info msg=" --prometheus-serve-addr=''" subsys=daemon
level=info msg=" --proxy-connect-timeout='1'" subsys=daemon
level=info msg=" --read-cni-conf=''" subsys=daemon
level=info msg=" --restore='true'" subsys=daemon
level=info msg=" --sidecar-http-proxy='false'" subsys=daemon
level=info msg=" --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg=" --single-cluster-route='false'" subsys=daemon
level=info msg=" --skip-crd-creation='false'" subsys=daemon
level=info msg=" --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg=" --sockops-enable='false'" subsys=daemon
level=info msg=" --state-dir='/var/run/cilium'" subsys=daemon
level=info msg=" --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg=" --tofqdns-enable-poller='false'" subsys=daemon
level=info msg=" --tofqdns-enable-poller-events='true'" subsys=daemon
level=info msg=" --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg=" --tofqdns-min-ttl='0'" subsys=daemon
level=info msg=" --tofqdns-pre-cache=''" subsys=daemon
level=info msg=" --tofqdns-proxy-port='0'" subsys=daemon
level=info msg=" --tofqdns-proxy-response-max-delay='50ms'" subsys=daemon
level=info msg=" --trace-payloadlen='128'" subsys=daemon
level=info msg=" --tunnel='vxlan'" subsys=daemon
level=info msg=" --version='false'" subsys=daemon
level=info msg=" --write-cni-conf-when-ready=''" subsys=daemon
level=info msg=" _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="| _| | | | | | |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.6.90 ca7f68526 2019-11-19T08:00:31-08:00 go version go1.13.4 linux/amd64" subsys=daemon
level=info msg="cilium-envoy version: 7f6cab51ea2f4692a3e1067e1060f42818324bc2/1.12.1/Modified/RELEASE/BoringSSL" subsys=daemon
level=info msg="clang (7.0.0) and kernel (4.14.138) versions: OK!" subsys=linux-datapath
level=info msg="linking environment: OK!" subsys=linux-datapath
level=info msg="bpf_requirements check: OK!" subsys=linux-datapath
level=info msg="bpf_features check: OK!" subsys=linux-datapath
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Valid label prefix configuration:" subsys=labels-filter
level=info msg=" - :io.kubernetes.pod.namespace" subsys=labels-filter
level=info msg=" - :io.cilium.k8s.namespace.labels" subsys=labels-filter
level=info msg=" - :app.kubernetes.io" subsys=labels-filter
level=info msg=" - !:io.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes.io" subsys=labels-filter
level=info msg=" - !:.*beta.kubernetes.io" subsys=labels-filter
level=info msg=" - !:k8s.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg="Initializing daemon" subsys=daemon
level=warning msg="xt_socket kernel module could not be loaded" error="could not load module xt_socket: exit status 1" subsys=iptables
level=info msg="Detected MTU 1460" subsys=mtu
level=info msg="Restored services from maps" failed=0 restored=0 subsys=service
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Establishing connection to apiserver" host="https://10.0.0.1:443" subsys=k8s
level=info msg="Connected to apiserver" subsys=k8s
level=info msg="Retrieved node information from kubernetes" nodeName=gke-gprod-uswest1c-default-pool-3a82d9d8-cvzz subsys=k8s
level=info msg="Received own node information from API server" ipAddr.ipv4=10.138.0.38 ipAddr.ipv6="<nil>" nodeName=gke-gprod-uswest1c-default-pool-3a82d9d8-cvzz subsys=k8s v4Prefix=10.8.2.0/24 v6Prefix="<nil>"
level=info msg="Creating CRD (CustomResourceDefinition)..." name=CiliumNetworkPolicy/v2 subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=CiliumNetworkPolicy/v2 subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=CiliumClusterwideNetworkPolicy/v2 subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumEndpoint subsys=k8s
level=info msg="Updating CRD (CustomResourceDefinition)..." name=v2.CiliumEndpoint subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumEndpoint subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumNode subsys=k8s
level=info msg="Updating CRD (CustomResourceDefinition)..." name=v2.CiliumNode subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumNode subsys=k8s
level=info msg="Creating CRD (CustomResourceDefinition)..." name=v2.CiliumIdentity subsys=k8s
level=info msg="Updating CRD (CustomResourceDefinition)..." name=v2.CiliumIdentity subsys=k8s
level=info msg="CRD (CustomResourceDefinition) is installed and up-to-date" name=v2.CiliumIdentity subsys=k8s
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing hostscope IPAM" subsys=ipam v4Prefix=10.8.2.0/24 v6Prefix="<nil>"
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="No old endpoints found." subsys=daemon
level=info msg="Addressing information:" subsys=daemon
level=info msg=" Cluster-Name: default" subsys=daemon
level=info msg=" Cluster-ID: 0" subsys=daemon
level=info msg=" Local node-name: gke-gprod-uswest1c-default-pool-3a82d9d8-cvzz" subsys=daemon
level=info msg=" Node-IPv6: <nil>" subsys=daemon
level=info msg=" External-Node IPv4: 10.138.0.38" subsys=daemon
level=info msg=" Internal-Node IPv4: 10.8.2.223" subsys=daemon
level=info msg=" Cluster IPv4 prefix: 10.0.0.0/8" subsys=daemon
level=info msg=" IPv4 allocation prefix: 10.8.2.0/24" subsys=daemon
level=info msg=" Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg=" Local IPv4 addresses:" subsys=daemon
level=info msg=" - 10.138.0.38" subsys=daemon
level=info msg="Annotating k8s node" subsys=daemon v4CiliumHostIP.IPv4=10.8.2.223 v4Prefix=10.8.2.0/24 v4healthIP.IPv4=10.8.2.126 v6CiliumHostIP.IPv6="<nil>" v6Prefix="<nil>" v6healthIP.IPv6="<nil>"
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Cluster-ID is not specified, skipping ClusterMesh initialization" subsys=daemon
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/xds.sock" subsys=envoy-manager
level=info msg="Adding local node to cluster" subsys=nodediscovery
level=info msg="Setting up base BPF datapath" subsys=datapath-loader
level=info msg="Setting sysctl net.core.bpf_jit_enable=1" subsys=datapath-loader
level=info msg="Setting sysctl net.ipv4.conf.all.rp_filter=0" subsys=datapath-loader
level=info msg="Setting sysctl kernel.unprivileged_bpf_disabled=1" subsys=datapath-loader
level=info msg="Successfully created CiliumNode resource" subsys=nodediscovery
level=info msg="Blacklisting local route as no-alloc" route=10.138.0.1/32 subsys=ipam
level=info msg="Blacklisting local route as no-alloc" route=169.254.123.0/24 subsys=ipam
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -F CILIUM_INPUT]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -X CILIUM_INPUT]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -F CILIUM_OUTPUT]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -X CILIUM_OUTPUT]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -F CILIUM_OUTPUT_raw]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -X CILIUM_OUTPUT_raw]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -F CILIUM_POST_nat]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -X CILIUM_POST_nat]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -F CILIUM_OUTPUT_nat]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -X CILIUM_OUTPUT_nat]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -F CILIUM_PRE_nat]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t nat -X CILIUM_PRE_nat]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -F CILIUM_POST_mangle]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -X CILIUM_POST_mangle]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -F CILIUM_PRE_mangle]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t mangle -X CILIUM_PRE_mangle]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -F CILIUM_PRE_raw]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t raw -X CILIUM_PRE_raw]" subsys=iptables
level=warning msg="Unable to flush Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -F CILIUM_FORWARD]" subsys=iptables
level=warning msg="Unable to delete Cilium iptables chain" error="exit status 1" obj="[-w 5 -t filter -X CILIUM_FORWARD]" subsys=iptables
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Starting IP identity watcher" subsys=ipcache
level=info msg="Validating configured node address ranges" subsys=daemon
level=info msg="Starting connection tracking garbage collector" subsys=daemon
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Initial scan of connection tracking completed" subsys=ct-gc
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Waiting until all pre-existing resources related to policy have been received" subsys=k8s-watcher
level=info msg="Regenerating restored endpoints" numRestored=0 subsys=daemon
level=info msg="Launching Cilium health daemon" subsys=daemon
level=info msg="All pre-existing resources related to policy have been received; continuing" subsys=k8s-watcher
level=info msg="Finished regenerating restored endpoints" regenerated=0 subsys=daemon total=0
level=info msg="Launching Cilium health endpoint" subsys=daemon
level=info msg="Initializing Cilium API" subsys=daemon
level=info msg="Daemon initialization completed" bootstrapTime=16.464517908s subsys=daemon
level=info msg="Serving cilium at unix:///var/run/cilium/cilium.sock" subsys=daemon
level=info msg="New endpoint" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3363 ipv4=10.8.2.126 ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3363 identityLabels="reserved:health" ipv4=10.8.2.126 ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Identity of endpoint changed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3363 identity=4 identityLabels="reserved:health" ipv4=10.8.2.126 ipv6= k8sPodName=/ oldIdentity="no identity" subsys=endpoint
level=info msg="Compiled new BPF template" BPFCompilationTime=1.015271458s file-path=/var/run/cilium/state/templates/3101d3fea0ba500852c08e279678192b626f277f/bpf_lxc.o subsys=datapath-loader
level=info msg="Rewrote endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3363 identity=4 ipv4=10.8.2.126 ipv6= k8sPodName=/ subsys=endpoint
level=info msg="Serving cilium health at unix:///var/run/cilium/health.sock" subsys=health-server
level=info msg="New endpoint" containerID=b3530d4f79 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1231 ipv4=10.8.2.195 ipv6= k8sPodName=kube-system/kube-dns-79868f54c5-94fk7 subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=b3530d4f79 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1231 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=kube-dns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" ipv4=10.8.2.195 ipv6= k8sPodName=kube-system/kube-dns-79868f54c5-94fk7 subsys=endpoint
level=info msg="Identity of endpoint changed" containerID=b3530d4f79 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1231 identity=102 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=kube-dns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" ipv4=10.8.2.195 ipv6= k8sPodName=kube-system/kube-dns-79868f54c5-94fk7 oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=b3530d4f79 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1231 identity=102 ipv4=10.8.2.195 ipv6= k8sPodName=kube-system/kube-dns-79868f54c5-94fk7 subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=b3530d4f79 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1231 identity=102 ipv4=10.8.2.195 ipv6= k8sPodName=kube-system/kube-dns-79868f54c5-94fk7 subsys=endpoint
level=info msg="Successful endpoint creation" containerID=b3530d4f79 datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1231 identity=102 ipv4=10.8.2.195 ipv6= k8sPodName=kube-system/kube-dns-79868f54c5-94fk7 subsys=daemon
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="New endpoint" containerID=816a77ff58 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3166 ipv4=10.8.2.209 ipv6= k8sPodName=hubble/hubble-gs28r subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=816a77ff58 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3166 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=hubble,k8s:io.kubernetes.pod.namespace=hubble,k8s:k8s-app=hubble" ipv4=10.8.2.209 ipv6= k8sPodName=hubble/hubble-gs28r subsys=endpoint
level=info msg="Reusing existing global key" key="k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=hubble;k8s:io.kubernetes.pod.namespace=hubble;k8s:k8s-app=hubble;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID=816a77ff58 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3166 identity=47720 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=hubble,k8s:io.kubernetes.pod.namespace=hubble,k8s:k8s-app=hubble" ipv4=10.8.2.209 ipv6= k8sPodName=hubble/hubble-gs28r oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=816a77ff58 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=3166 identity=47720 ipv4=10.8.2.209 ipv6= k8sPodName=hubble/hubble-gs28r subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=816a77ff58 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3166 identity=47720 ipv4=10.8.2.209 ipv6= k8sPodName=hubble/hubble-gs28r subsys=endpoint
level=info msg="Successful endpoint creation" containerID=816a77ff58 datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=3166 identity=47720 ipv4=10.8.2.209 ipv6= k8sPodName=hubble/hubble-gs28r subsys=daemon
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Beginning to read perf buffer" startTime="2019-11-27 19:43:47.809521466 +0000 UTC m=+48.977339151" subsys=monitor-agent
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.0002002716064453125 newInterval=7m30s subsys=map-ct
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="regenerating all endpoints" reason= subsys=endpoint-manager
level=info msg="New endpoint" containerID=9f0c204c02 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1454 ipv4=10.8.2.83 ipv6= k8sPodName=sprt/gprod-sprt-main-grfna-5fb5786d56-w57n4 subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=9f0c204c02 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1454 identityLabels="k8s:berlioz_managed=true,k8s:cluster=sprt,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=sprt,k8s:name=gprod-sprt-main-grfna,k8s:sector=main,k8s:service=grfna" ipv4=10.8.2.83 ipv6= k8sPodName=sprt/gprod-sprt-main-grfna-5fb5786d56-w57n4 subsys=endpoint
level=info msg="Reusing existing global key" key="k8s:berlioz_managed=true;k8s:cluster=sprt;k8s:deployment=gprod;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=sprt;k8s:name=gprod-sprt-main-grfna;k8s:sector=main;k8s:service=grfna;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID=9f0c204c02 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1454 identity=34591 identityLabels="k8s:berlioz_managed=true,k8s:cluster=sprt,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=sprt,k8s:name=gprod-sprt-main-grfna,k8s:sector=main,k8s:service=grfna" ipv4=10.8.2.83 ipv6= k8sPodName=sprt/gprod-sprt-main-grfna-5fb5786d56-w57n4 oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=9f0c204c02 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1454 identity=34591 ipv4=10.8.2.83 ipv6= k8sPodName=sprt/gprod-sprt-main-grfna-5fb5786d56-w57n4 subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=9f0c204c02 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=1454 identity=34591 ipv4=10.8.2.83 ipv6= k8sPodName=sprt/gprod-sprt-main-grfna-5fb5786d56-w57n4 subsys=endpoint
level=info msg="Successful endpoint creation" containerID=9f0c204c02 datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=1454 identity=34591 ipv4=10.8.2.83 ipv6= k8sPodName=sprt/gprod-sprt-main-grfna-5fb5786d56-w57n4 subsys=daemon
level=info msg="New endpoint" containerID=8026006ec5 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=229 ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq subsys=endpoint
level=info msg="Resolving identity labels (blocking)" containerID=8026006ec5 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=229 identityLabels="k8s:berlioz_managed=true,k8s:cluster=addr,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=addr,k8s:name=gprod-addr-main-app,k8s:sector=main,k8s:service=app" ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq subsys=endpoint
level=info msg="Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination" labels="map[]" subsys=crd-allocator
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=info msg="Invalid state transition skipped" containerID=8026006ec5 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=229 endpointState.from=waiting-for-identity endpointState.to=waiting-to-regenerate file=/go/src/github.com/cilium/cilium/pkg/endpoint/policy.go ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq line=448 subsys=endpoint
level=info msg="Allocated new global key" key="k8s:berlioz_managed=true;k8s:cluster=addr;k8s:deployment=gprod;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=addr;k8s:name=gprod-addr-main-app;k8s:sector=main;k8s:service=app;" subsys=allocator
level=info msg="Identity of endpoint changed" containerID=8026006ec5 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=229 identity=30742 identityLabels="k8s:berlioz_managed=true,k8s:cluster=addr,k8s:deployment=gprod,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=addr,k8s:name=gprod-addr-main-app,k8s:sector=main,k8s:service=app" ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq oldIdentity="no identity" subsys=endpoint
level=info msg="Waiting for endpoint to be generated" containerID=8026006ec5 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=229 identity=30742 ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq subsys=endpoint
level=info msg="Rewrote endpoint BPF program" containerID=8026006ec5 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=229 identity=30742 ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq subsys=endpoint
level=info msg="Successful endpoint creation" containerID=8026006ec5 datapathPolicyRevision=1 desiredPolicyRevision=1 endpointID=229 identity=30742 ipv4=10.8.2.70 ipv6= k8sPodName=addr/gprod-addr-main-app-7799f79c9-vbkpq subsys=daemon
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=warning msg="Garbage collection on ipv4 TCP CT map failed to finish" interrupted=524212 subsys=map-ct
level=info msg="Conntrack garbage collector interval recalculated" deleteRatio=0.000759124755859375 newInterval=11m15s subsys=map-ct
I reproduced the issue. It looks like when pods are running on different nodes, the labels for either the source or the destination are not getting properly populated, so the UI cannot include those flows in the service map. In the flow below the source entry has no labels while the destination does; the pod placement output and a small filtering sketch follow the JSON:
"time": "2019-11-27T23:23:40.866685435Z",
"verdict": "FORWARDED",
"ethernet": {
"source": "1a:8a:0d:98:7a:42",
"destination": "9a:72:2e:d9:90:07"
},
"IP": {
"source": "10.28.0.161",
"destination": "10.28.3.193",
"ipVersion": "IPv4"
},
"l4": {
"TCP": {
"source_port": 42618,
"destination_port": 2181,
"flags": {
"PSH": true,
"ACK": true
}
}
},
"source": {
"ID": "2679",
"identity": "50900",
"namespace": "jobs",
"pod_name": "kafka-0"
},
"destination": {
"identity": "11576",
"namespace": "jobs",
"labels": [
"k8s:app=zookeeper",
"k8s:io.cilium.k8s.policy.cluster=default",
"k8s:io.cilium.k8s.policy.serviceaccount=default",
"k8s:io.kubernetes.pod.namespace=jobs"
],
"pod_name": "zookeeper-79f8768c95-cldf4"
},
"Type": "L3_L4",
"node_name": "gke-sergey-hubble-test-default-pool-e94fba21-tmsg",
"event_type": {
"type": 4,
"sub_type": 4
},
"Summary": "TCP Flags: ACK, PSH"
}
zookeeper-79f8768c95-cldf4 1/1 Running 0 5m14s 10.28.3.193 gke-sergey-hubble-test-default-pool-e94fba21-mtbl <none> <none>
kafka-0 1/1 Running 0 5m15s 10.28.0.161 gke-sergey-hubble-test-default-pool-e94fba21-tmsg <none> <none>
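For anyone who wants to spot these flows in bulk rather than eyeballing JSON, here is a minimal throwaway Go sketch (not part of Hubble; the field names are simply copied from the flow record above and are an assumption about the output format) that reads a stream of such flow objects on stdin and reports every flow whose source or destination endpoint carries no labels:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "os"
)

// endpoint mirrors the source/destination objects in the flow JSON above.
type endpoint struct {
    Identity  string   `json:"identity"`
    Namespace string   `json:"namespace"`
    PodName   string   `json:"pod_name"`
    Labels    []string `json:"labels"`
}

// flow keeps only the fields needed for this check.
type flow struct {
    Time        string   `json:"time"`
    Source      endpoint `json:"source"`
    Destination endpoint `json:"destination"`
    NodeName    string   `json:"node_name"`
}

func main() {
    // json.Decoder handles a stream of concatenated JSON objects,
    // whether pretty-printed (as above) or compact.
    dec := json.NewDecoder(os.Stdin)
    for {
        var f flow
        err := dec.Decode(&f)
        if err == io.EOF {
            return
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            return
        }
        if len(f.Source.Labels) == 0 || len(f.Destination.Labels) == 0 {
            fmt.Printf("%s %s/%s -> %s/%s: endpoint without labels (node %s)\n",
                f.Time,
                f.Source.Namespace, f.Source.PodName,
                f.Destination.Namespace, f.Destination.PodName,
                f.NodeName)
        }
    }
}

Feeding it flow JSON (for example from hubble observe with JSON output, if your build supports that) quickly surfaces the cross-node flows where one side is missing its labels.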
cc @gandro
That's probably the case.
Thanks a lot for digging deeper, this helped a lot in reproducing it for me as well. The root cause seems to be that Hubble only reads Cilium's AgentNotifyEndpointCreated events to populate its local endpoint cache, but fails to also parse the AgentNotifyEndpointRegenerateSuccess events, which are the ones that contain the labels. I'll work on a fix.
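For readers not familiar with the agent notification flow, here is a rough Go sketch of the pattern being described. It is purely illustrative: the types and the string event names are invented for this example, only the two notification names mentioned above come from Cilium, and this is not Hubble's actual code.

package main

// endpointInfo is a stand-in for the endpoint data Hubble caches per endpoint ID.
type endpointInfo struct {
    ID      uint64
    PodName string
    Labels  []string
}

// agentEvent is a stand-in for a Cilium agent notification.
type agentEvent struct {
    Type     string // hypothetical: "EndpointCreated" or "EndpointRegenerateSuccess"
    Endpoint endpointInfo
}

type endpointCache map[uint64]endpointInfo

// handleBuggy mirrors the behaviour described above: only the "created"
// notification is consumed, and at creation time the labels are often not
// resolved yet, so the cached entry stays label-less.
func (c endpointCache) handleBuggy(ev agentEvent) {
    if ev.Type == "EndpointCreated" {
        c[ev.Endpoint.ID] = ev.Endpoint
    }
}

// handleFixed shows, conceptually, what the fix does: also consume the
// regenerate-success notification, which carries the resolved labels.
func (c endpointCache) handleFixed(ev agentEvent) {
    switch ev.Type {
    case "EndpointCreated", "EndpointRegenerateSuccess":
        c[ev.Endpoint.ID] = ev.Endpoint
    }
}

func main() {}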
@rubenhak We've merged a PR to master which I believe should fix the issue. Please feel free to reopen if that's not the case.
@gandro, thanks for the quick fix. Which version includes the fix?
The fix was applied in Hubble master. Since we don't have releases yet, just redeploying the latest cilium/hubble image tag should do the trick.
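If you just want to force a re-pull right away, one minimal approach (assuming the hubble DaemonSet pods run a latest image tag with imagePullPolicy: Always, and carry the k8s-app=hubble label visible in the Cilium logs above) is to delete the pods and let them be recreated:

$ kubectl -n hubble delete pods -l k8s-app=hubble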
Flows and arrows are not visible in the Hubble UI, yet flows for the "hubble" namespace are visible. Running in GKE.
Running procedure:
I can confirm that flows are visible in "cilium monitor", "hubble observe", and "kubectl get cep".
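For reference, this is roughly how I run those checks (pod names are placeholders; on this setup the cilium pods are in kube-system, adjust if yours differ):

$ kubectl -n kube-system exec -ti <cilium-pod> -- cilium monitor
$ kubectl -n hubble exec -ti <hubble-pod> -- hubble observe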