Describe pod message-dumper
kubectl describe pod message-dumper-00001-deployment-747fbf6bbc-g97h8
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/message-dumper-00001-deployment-747fbf6bbc-g97h8 to 9.30.212.236
Normal Pulled 2m kubelet, 9.30.212.236 Container image "ibmcom/istio-proxy_init:1.0.2" already present on machine
Normal Created 2m kubelet, 9.30.212.236 Created container
Normal Pulled 2m kubelet, 9.30.212.236 Container image "gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/message_dumper@sha256:73a95b05b5b937544af7c514c3116479fa5b6acf7771604b313cfc1587bf0940" already present on machine
Normal Created 2m kubelet, 9.30.212.236 Created container
Normal Started 2m kubelet, 9.30.212.236 Started container
Normal Created 2m kubelet, 9.30.212.236 Created container
Normal Started 2m kubelet, 9.30.212.236 Started container
Normal Pulled 2m kubelet, 9.30.212.236 Container image "gcr.io/knative-releases/github.com/knative/serving/cmd/queue@sha256:fc49125cb29f7bb2de2c4d6bd51153ce190cb522cf42df59898147d2074885cc" already present on machine
Normal Created 2m kubelet, 9.30.212.236 Created container
Normal Pulled 2m kubelet, 9.30.212.236 Container image "ibmcom/istio-proxyv2:1.0.2" already present on machine
Normal Started 2m kubelet, 9.30.212.236 Started container
Normal Started 2m kubelet, 9.30.212.236 Started container
Warning FailedPreStopHook 36s kubelet, 9.30.212.236 Http lifecycle hook (quitquitquit) for Container "queue-proxy" in Pod "message-dumper-00001-deployment-68bd6b4987-rpx8b_default(8bdaf82c-238a-11e9-a717-0016ac101b0f)" failed - error: Get http://10.1.238.243:8022/quitquitquit: dial tcp 10.1.238.243:8022: connect: connection refused, message: ""
Warning FailedPreStopHook 36s kubelet, 9.30.212.236 Http lifecycle hook (quitquitquit) for Container "user-container" in Pod "message-dumper-00001-deployment-68bd6b4987-rpx8b_default(8bdaf82c-238a-11e9-a717-0016ac101b0f)" failed - error: Get http://10.1.238.243:8022/quitquitquit: dial tcp 10.1.238.243:8022: connect: connection refused, message: ""
Normal Killing 36s kubelet, 9.30.212.236 Killing container with id docker://istio-proxy:Need to kill Pod
Normal Killing 36s kubelet, 9.30.212.236 Killing container with id docker://user-container:Need to kill Pod
Warning Unhealthy 28s (x9 over 36s) kubelet, 9.30.212.236 Readiness probe failed: Get http://10.1.238.243:8022/health: dial tcp 10.1.238.243:8022: connect: connection refused
@zxDiscovery Please check whether the source pod is alive, and also check the istio-proxy sidecar of the source to see whether it is able to POST the message. Please attach the istio-proxy container logs of the source to the issue.
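For example (the pod name below is a placeholder for your source's pod; adjust the namespace if your source runs elsewhere), something along these lines should show the pod state and the sidecar logs:
# Confirm the source pod is Running.
kubectl -n default get pods
# Dump the Envoy sidecar logs of the source pod.
kubectl -n default logs <source-pod-name> -c istio-proxy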
The KubernetesEventSource example is very similar to the debugging example, so try following this guide (the names will likely differ, and message-dumper is a Knative Service in the sample, whereas in the debugging guide it is a K8s Service and Deployment): https://github.com/knative/docs/tree/master/eventing/debugging
@sbezverk @Harwayne Thank you for the help! After following the guide https://github.com/knative/docs/tree/master/eventing/debugging, I found the following log entries in testevents, emitted by Envoy:
[2019-01-30T07:53:11.725Z] "POST / HTTP/1.1" 500 - 826 0 12 11 "-" "Go-http-client/1.1" "7c981936-e127-91fd-998c-8bbc3e241182" "testchannel.default.channels.cluster.local" "10.1.238.222:8080"
[2019-01-30T07:53:11.747Z] "POST / HTTP/1.1" 500 - 826 0 7 7 "-" "Go-http-client/1.1" "54b8e8d6-d7d4-94d9-b244-a801bfb45d9f" "testchannel.default.channels.cluster.local" "10.1.238.222:8080"
[2019-01-30T07:53:11.890Z] "POST / HTTP/1.1" 500 - 904 0 10 10 "-" "Go-http-client/1.1" "c778bf96-7a9f-9cbd-8792-ed13c8f56bf5" "testchannel.default.channels.cluster.local" "10.1.238.222:8080"
[2019-01-30T07:53:41.762Z] "POST / HTTP/1.1" 500 - 833 0 30 30 "-" "Go-http-client/1.1" "f328a484-e223-99dd-b460-d225a9c7feba" "testchannel.default.channels.cluster.local" "10.1.238.222:8080"
[2019-01-30T07:53:41.794Z] "POST / HTTP/1.1" 500 - 826 0 16 15 "-" "Go-http-client/1.1" "823fe328-c63b-9854-b2a6-1f1fa94c2007" "testchannel.default.channels.cluster.local" "10.1.238.222:8080"
Then I checked the in-memory-channel-dispatcher log in knative-eventing and found the following error:
{"level":"error","ts":1548835230.4771786,"caller":"fanout/fanout_handler.go:108","msg":"Fanout had an error","error":"Unable to complete request Post http://message-dumper.default.svc.cluster.local/: dial tcp: lookup message-dumper.default.svc.cluster.local on 10.0.0.10:53: no such host","stacktrace":"github.com/knative/eventing/pkg/sidecar/fanout.(*Handler).dispatch\n\t/home/argent/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:108\ngithub.com/knative/eventing/pkg/sidecar/fanout.createReceiverFunction.func1\n\t/home/argent/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:86\ngithub.com/knative/eventing/pkg/provisioners.(*MessageReceiver).HandleRequest\n\t/home/argent/go/src/github.com/knative/eventing/pkg/provisioners/message_receiver.go:130\ngithub.com/knative/eventing/pkg/sidecar/fanout.(*Handler).ServeHTTP\n\t/home/argent/go/src/github.com/knative/eventing/pkg/sidecar/fanout/fanout_handler.go:91\ngithub.com/knative/eventing/pkg/sidecar/multichannelfanout.(*Handler).ServeHTTP\n\t/home/argent/go/src/github.com/knative/eventing/pkg/sidecar/multichannelfanout/multi_channel_fanout_handler.go:128\ngithub.com/knative/eventing/pkg/sidecar/swappable.(*Handler).ServeHTTP\n\t/home/argent/go/src/github.com/knative/eventing/pkg/sidecar/swappable/swappable.go:105\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2741\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1847"}
I found that when I apply example.yaml there is no error in in-memory-channel-dispatcher, but as soon as I apply subscription.yaml the error appears. So the sample fails to run. What should I do now?
That error message implies something went wrong finding the address of the message-dumper Service. What does the following produce?
kubectl -n default get service message-dumper -o yaml
Since in the sample it is a Knative Service, what does the following produce?
kubectl -n default get service.serving.knative.dev message-dumper -o yaml
And just to get everything at once, also include the YAML for the Channel and Subscription.
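For reference, assuming the resource names from the sample (testchannel and testevents-subscription), commands along these lines should dump them:
kubectl -n default get channel testchannel -o yaml
kubectl -n default get subscription testevents-subscription -o yaml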
root@inflict1:~/ICP# kubectl -n default get service message-dumper -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-01-31T05:47:39Z
  name: message-dumper
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Route
    name: message-dumper
    uid: b2e4cb8a-251b-11e9-a717-0016ac101b0f
  resourceVersion: "1700211"
  selfLink: /api/v1/namespaces/default/services/message-dumper
  uid: b7d785f1-251b-11e9-a717-0016ac101b0f
spec:
  externalName: istio-ingressgateway.istio-system.svc.cluster.local
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
root@inflict1:~/ICP# kubectl -n default get service.serving.knative.dev message-dumper -o yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1alpha1","kind":"Service","metadata":{"annotations":{},"name":"message-dumper","namespace":"default"},"spec":{"runLatest":{"configuration":{"revisionTemplate":{"spec":{"container":{"image":"gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/message_dumper@sha256:73a95b05b5b937544af7c514c3116479fa5b6acf7771604b313cfc1587bf0940"}}}}}}}
  creationTimestamp: 2019-01-31T05:47:30Z
  generation: 1
  name: message-dumper
  namespace: default
  resourceVersion: "1700214"
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/services/message-dumper
  uid: b2d749fd-251b-11e9-a717-0016ac101b0f
spec:
  generation: 1
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/message_dumper@sha256:73a95b05b5b937544af7c514c3116479fa5b6acf7771604b313cfc1587bf0940
          timeoutSeconds: 300
status:
  address:
    hostname: message-dumper.default.svc.cluster.local
  conditions:
  - lastTransitionTime: 2019-01-31T05:47:38Z
    severity: Error
    status: "True"
    type: ConfigurationsReady
  - lastTransitionTime: 2019-01-31T05:47:39Z
    severity: Error
    status: "True"
    type: Ready
  - lastTransitionTime: 2019-01-31T05:47:39Z
    severity: Error
    status: "True"
    type: RoutesReady
  domain: message-dumper.default.example.com
  domainInternal: message-dumper.default.svc.cluster.local
  latestCreatedRevisionName: message-dumper-00001
  latestReadyRevisionName: message-dumper-00001
  observedGeneration: 1
  traffic:
  - percent: 100
    revisionName: message-dumper-00001
root@inflict1:~/ICP# kubectl get channel testchannel -oyaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"eventing.knative.dev/v1alpha1","kind":"Channel","metadata":{"annotations":{},"name":"testchannel","namespace":"default"},"spec":{"provisioner":{"apiVersion":"eventing.knative.dev/v1alpha1","kind":"ClusterChannelProvisioner","name":"in-memory-channel"}}}
  creationTimestamp: 2019-01-31T05:45:55Z
  finalizers:
  - in-memory-channel-controller
  generation: 2
  name: testchannel
  namespace: default
  resourceVersion: "1700216"
  selfLink: /apis/eventing.knative.dev/v1alpha1/namespaces/default/channels/testchannel
  uid: 7a2c7d02-251b-11e9-a717-0016ac101b0f
spec:
  generation: 2
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel
  subscribable:
    subscribers:
    - ref:
        name: testevents-subscription
        namespace: default
        uid: b29f33b6-251b-11e9-a717-0016ac101b0f
      subscriberURI: http://message-dumper.default.svc.cluster.local/
status:
  address:
    hostname: testchannel-channel-87t6n.default.svc.cluster.local
  conditions:
  - lastTransitionTime: 2019-01-31T05:45:56Z
    severity: Error
    status: "True"
    type: Addressable
  - lastTransitionTime: 2019-01-31T05:45:56Z
    severity: Error
    status: "True"
    type: Provisioned
  - lastTransitionTime: 2019-01-31T05:45:56Z
    severity: Error
    status: "True"
    type: Ready
root@inflict1:~/ICP# kubectl get Subscription testevents-subscription -oyaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"eventing.knative.dev/v1alpha1","kind":"Subscription","metadata":{"annotations":{},"name":"testevents-subscription","namespace":"default"},"spec":{"channel":{"apiVersion":"eventing.knative.dev/v1alpha1","kind":"Channel","name":"testchannel"},"subscriber":{"ref":{"apiVersion":"serving.knative.dev/v1alpha1","kind":"Service","name":"message-dumper"}}}}
  creationTimestamp: 2019-01-31T05:47:30Z
  finalizers:
  - subscription-controller
  generation: 1
  name: testevents-subscription
  namespace: default
  resourceVersion: "1700219"
  selfLink: /apis/eventing.knative.dev/v1alpha1/namespaces/default/subscriptions/testevents-subscription
  uid: b29f33b6-251b-11e9-a717-0016ac101b0f
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: testchannel
  generation: 1
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: message-dumper
status:
  conditions:
  - lastTransitionTime: 2019-01-31T05:47:39Z
    severity: Error
    status: "True"
    type: ChannelReady
  - lastTransitionTime: 2019-01-31T05:47:39Z
    severity: Error
    status: "True"
    type: Ready
  - lastTransitionTime: 2019-01-31T05:47:39Z
    severity: Error
    status: "True"
    type: Resolved
  physicalSubscription:
    subscriberURI: http://message-dumper.default.svc.cluster.local/
@Harwayne here is the YAML content. The severity: Error entries aren't a problem, from what I see in https://github.com/knative/pkg/pull/255.
/assign @Harwayne /cc @tcnghia
Hmm, that all looks correct from what I can see. The K8s Service exists and points at what seems to be the correct place.
Looking at the error message:
"Unable to complete request Post http://message-dumper.default.svc.cluster.local/: dial tcp: lookup message-dumper.default.svc.cluster.local on 10.0.0.10:53: no such host"
The only thing that strikes me as odd is including "on 10.0.0.10:53". I haven't seen that in the error messages before. As port 53 is the DNS port, I wonder if that is the DNS server it is trying to use. Does that IP address correspond to anything in your cluster? Perhaps the node where the in-memory-channel's dispatcher is running?
Is there anything custom about your DNS setup?
We can try using that DNS server from inside your cluster. Make a Pod with the image tutum/dnsutils and the Istio sidecar injected (similar to this YAML but with a different image; see the sketch after the dig commands below). And while SSHed into the Pod, try running:
# Use the normal DNS.
dig message-dumper.default.svc.cluster.local any
# Use the DNS from the error message.
dig message-dumper.default.svc.cluster.local any @10.0.0.10
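A minimal sketch of such a Pod (the Pod name is illustrative, and the annotation assumes the Istio sidecar injector webhook is available in your cluster):
apiVersion: v1
kind: Pod
metadata:
  name: dns-test                      # illustrative name
  namespace: default
  annotations:
    sidecar.istio.io/inject: "true"   # assumes the sidecar injector webhook is enabled
spec:
  containers:
  - name: dnsutils
    image: tutum/dnsutils
    command: ["sleep", "infinity"]    # keep the Pod alive so we can exec into it
Then kubectl -n default exec -it dns-test -c dnsutils -- bash and run the dig commands above.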
In addition, we can try making curl requests to message-dumper from inside your cluster. Make a Pod with the image tutum/curl and the Istio sidecar injected (similar to this YAML; see the sketch after the curl command below). Try doing this once in the default namespace, and again in the knative-eventing namespace. And while SSHed into the Pod, try running:
curl -v http://message-dumper.default.svc.cluster.local/ -X POST -H 'Content-Type: application/json' -d '{
"cloudEventsVersion" : "0.1",
"eventType" : "com.example.someevent",
"eventTypeVersion" : "1.0",
"source" : "/mycontext",
"eventID" : "A234-1234-1234",
"eventTime" : "2018-04-05T17:31:00Z",
"extensions" : {
"comExampleExtension" : "value"
},
"contentType" : "text/xml",
"data" : "<much wow=\"xml\"/>"
}'
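The same Pod pattern works here with the image swapped to tutum/curl; repeat it with namespace: knative-eventing for the second test (names again illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: curl-test                     # illustrative name
  namespace: default                  # repeat with: knative-eventing
  annotations:
    sidecar.istio.io/inject: "true"   # assumes the sidecar injector webhook is enabled
spec:
  containers:
  - name: curl
    image: tutum/curl
    command: ["sleep", "infinity"]
kubectl -n default exec -it curl-test -c curl -- bash then gives a shell to run the curl command from.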
@zxDiscovery Check your cluster DNS settings. Make sure the IP address in the output matches 10.0.0.10:
kubectl get svc -n kube-system
Also, this IP address must be within the range allocated by the API server parameter --service-cluster-ip-range. Check the IP address allocated to the DNS pod:
kubectl get pod -n kube-system | grep dns
kubectl describe pod -n kube-system {dns pod name}
Note the IP address assigned to the pod, then run:
kubectl describe svc -n kube-system kube-dns
Check that the DNS service endpoint points to the pod's IP address.
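A quick way to cross-check that endpoint (standard kubectl; kube-dns is the service named in the command above):
kubectl -n kube-system get endpoints kube-dns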
I am also seeing the same issue when using message dumper with k8s event source.
{"level":"info","ts":1548954463.1729238,"caller":"provisioners/message_dispatcher.go:107","msg":"Dispatching message to http://message-dumper.default.svc.cluster.local/"}
{"level":"error","ts":1548954463.1828918,"caller":"dispatcher/dispatcher.go:153","msg":"Failed to dispatch message: ","error":"Unable to complete request Post http://message-dumper.default.svc.cluster.local/: dial tcp: lookup message-dumper.default.svc.cluster.local on 10.96.0.10:53: no such host","stacktrace":"github.com/knative/eventing/contrib/natss/pkg/dispatcher/dispatcher.(*SubscriptionsSupervisor).subscribe.func1\n\t/Users/d066419/go/src/github.com/knative/eventing/contrib/natss/pkg/dispatcher/dispatcher/dispatcher.go:153\ngithub.com/knative/eventing/vendor/github.com/nats-io/go-nats-streaming.(*conn).processMsg\n\t/Users/d066419/go/src/github.com/knative/eventing/vendor/github.com/nats-io/go-nats-streaming/stan.go:751\ngithub.com/knative/eventing/vendor/github.com/nats-io/go-nats-streaming.(*conn).processMsg-fm\n\t/Users/d066419/go/src/github.com/knative/eventing/vendor/github.com/nats-io/go-nats-streaming/sub.go:228\ngithub.com/knative/eventing/vendor/github.com/nats-io/go-nats.(*Conn).waitForMsgs\n\t/Users/d066419/go/src/github.com/knative/eventing/vendor/github.com/nats-io/go-nats/nats.go:1778"}
In my case the lookup of message-dumper.default.svc.cluster.local goes to 10.96.0.10:53 and fails the same way.
This looks similar to https://github.com/knative/serving/issues/3067. Are you running on minikube?
@Harwayne I am running on IBM Cloud Private.
@sbezverk The IP address in the output matches 10.0.0.10.
root@inflict1:~# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
catalog-ui NodePort 10.0.129.64 <none> 4000:30487/TCP 14d
crypto ClusterIP 10.0.160.168 <none> 8082/TCP 14d
default-backend ClusterIP 10.0.11.176 <none> 80/TCP 14d
elasticsearch ClusterIP 10.0.218.227 <none> 9200/TCP 14d
elasticsearch-data ClusterIP None <none> 9300/TCP 14d
elasticsearch-transport ClusterIP 10.0.190.158 <none> 9300/TCP 14d
heapster ClusterIP 10.0.237.49 <none> 80/TCP 14d
helm-api ClusterIP 10.0.48.146 <none> 3000/TCP 14d
helm-repo ClusterIP 10.0.107.62 <none> 3001/TCP 14d
iam-pap ClusterIP 10.0.192.126 <none> 39001/TCP 14d
iam-pdp ClusterIP 10.0.74.38 <none> 7998/TCP 14d
iam-token-service ClusterIP 10.0.128.112 <none> 10443/TCP 14d
ibmcloud-image-enforcement ClusterIP 10.0.241.252 <none> 443/TCP 14d
icp-management-ingress ClusterIP 10.0.10.248 <none> 8443/TCP 14d
icp-mongodb ClusterIP None <none> 27017/TCP 14d
image-manager ClusterIP 10.0.0.8 <none> 8600/TCP,8500/TCP 14d
kibana ClusterIP 10.0.198.161 <none> 5601/TCP 14d
kms-api ClusterIP 10.0.24.53 <none> 28674/TCP 14d
kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 14d
lifecycle ClusterIP 10.0.6.173 <none> 8942/TCP,9494/TCP 14d
logstash ClusterIP 10.0.222.185 <none> 5044/TCP 14d
mariadb ClusterIP 10.0.137.10 <none> 3306/TCP 14d
master-discovery ClusterIP 10.0.96.214 <none> 9300/TCP 14d
metrics-server ClusterIP 10.0.162.143 <none> 443/TCP 14d
mgmt-repo ClusterIP 10.0.191.171 <none> 3001/TCP 14d
mongodb NodePort 10.0.147.80 <none> 27017:32184/TCP 14d
pep ClusterIP 10.0.153.215 <none> 8935/TCP,28995/TCP 14d
persistence ClusterIP 10.0.143.245 <none> 8985/TCP 14d
platform-api ClusterIP 10.0.126.195 <none> 6969/TCP 14d
platform-auth-service ClusterIP 10.0.176.131 <none> 3100/TCP,9443/TCP 14d
platform-identity-management ClusterIP 10.0.5.11 <none> 4500/TCP 14d
platform-identity-provider ClusterIP 10.0.91.204 <none> 4300/TCP,9443/TCP 14d
platform-ui ClusterIP 10.0.124.91 <none> 3000/TCP 14d
tiller-deploy ClusterIP 10.0.0.9 <none> 44134/TCP 14d
unified-router ClusterIP 10.0.20.138 <none> 9090/TCP 14d
web-terminal ClusterIP 10.0.37.154 <none> 443/TCP 14d
root@inflict1:~# kubectl describe pod -n kube-system kube-dns-2glnk
Name:               kube-dns-2glnk
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               9.30.213.36/9.30.213.36
Start Time:         Sun, 20 Jan 2019 18:41:33 -0800
Labels:             app=kube-dns
                    chart=kube-dns-3.1.1
                    controller-revision-hash=48246888
                    pod-template-generation=1
                    release=kube-dns
Annotations:        kubernetes.io/psp=ibm-privileged-psp
                    scheduler.alpha.kubernetes.io/critical-pod=
                    seccomp.security.alpha.kubernetes.io/pod=docker/default
Status:             Running
IP:                 10.1.251.65
Controlled By:      DaemonSet/kube-dns
Containers:
  kube-dns:
    Container ID:  docker://85948a582be2e4a3b9730027ef955e8cf700fdbc812fd6f8c37a6c1febdbd9c7
    Image:         hyc-cloud-private-release-docker-local.artifactory.swg-devops.com/ibmcom-amd64/coredns:1.1.3
    Image ID:      docker-pullable://hyc-cloud-private-release-docker-local.artifactory.swg-devops.com/ibmcom-amd64/coredns@sha256:99520426b9fa4a07b31ad1ff75a174276a1e57dd9738f24c282f22c8f36f0025
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Sun, 20 Jan 2019 18:41:38 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-smdcg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  false
  default-token-smdcg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-smdcg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  master=true
Tolerations:     dedicated:NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:          <none>
root@inflict1:~# kubectl describe svc -n kube-system kube-dns
Name:              kube-dns
Namespace:         kube-system
Labels:            addonmanager.kubernetes.io/mode=Reconcile
                   app=kube-dns
                   chart=kube-dns-3.1.1
                   heritage=Tiller
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=CoreDNS
                   release=kube-dns
Annotations:       prometheus.io/port=9153
                   prometheus.io/scrape=true
Selector:          app=kube-dns,release=kube-dns
Type:              ClusterIP
IP:                10.0.0.10
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         10.1.251.65:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         10.1.251.65:53
Session Affinity:  None
Events:            <none>
@zxDiscovery In general this looks OK to me, with the exception of the CoreDNS bits. In my cluster I run 1.2.6; some folks on the Slack channel were suggesting moving to CoreDNS 1.3.1 due to a bug, but I am not sure whether that bug has anything to do with what you see.
Looks like you are using CoreDNS 1.1.3, which is the same version as in the Serving bug: https://github.com/knative/serving/issues/3067#issuecomment-460016701, which @Abd4llA root-caused in the next comment: https://github.com/knative/serving/issues/3067#issuecomment-460119761.
I think that means the solution is to update CoreDNS. It sounds like either 1.2.6 (verified by @sbezverk) or 1.3.1 (verified by @Abd4llA) will work.
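A hedged sketch of checking and bumping the CoreDNS version on this cluster (the DaemonSet name, container name, and app=kube-dns label come from the describe output above; the target image is an assumption, and on a managed distribution like ICP the vendor's supported upgrade path should take precedence):
# Show which CoreDNS image the DNS pods are currently running.
kubectl -n kube-system get pods -l app=kube-dns -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\n"}{end}'
# Bump the image on the kube-dns DaemonSet (illustrative tag/registry; use a 1.2.6+ build from your registry).
kubectl -n kube-system set image daemonset/kube-dns kube-dns=coredns/coredns:1.2.6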
Yes, I am using CoreDNS 1.1.3 in my environment. I will update CoreDNS and verify. Thank you! @Harwayne @sbezverk
After updating CoreDNS to 1.2.6, the Kubernetes Event Source example runs normally, so I am closing this issue. /cc @gyliu513
Expected Behavior
The Kubernetes Event Source example runs normally.
Actual Behavior
There are no logs when I run the following command.
Steps to Reproduce the Problem
Follow the steps in the Kubernetes Event Source example.
Additional Info
The logs of message-dumper in the queue-proxy container have some errors.