Closed: tanjunchen closed this issue 1 year ago.
Can you check the supported tls version of your vm?
There is no policy and the Istio CNI is disabled.
There are two cases:
Same question with Istio 1.11.4, CNI disabled:
http://10.98.41.167:31614/productpage upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
Looks like the mtls request is going directly to your app. Maybe iptables isn't set up properly?
@howardjohn Hello, I followed the official documentation to set up the virtual machine. Once I enabled mTLS for the service on the virtual machine, this problem occurred, even though the iptables rules were in effect. Is there an example in the official docs of accessing a VM from the mesh? Does that example have mutual TLS enabled? Thanks.
It works when I add the following to samples/bookinfo/networking/destination-rule-all.yaml:

```yaml
  trafficPolicy:
    tls:
      mode: DISABLE
```
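For context, a complete DestinationRule carrying that fragment would look roughly like this; the service name is an assumption, since each Bookinfo service in destination-rule-all.yaml gets its own entry:

```yaml
# Sketch only: tells the client sidecar not to originate Istio TLS toward
# this host. The host/name "productpage" is hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  trafficPolicy:
    tls:
      mode: DISABLE
```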
@yuanxch me too.
I would check sudo iptables-save
Thank you @howardjohn. Both results are the same.
sudo iptables-save
# Generated by iptables-save v1.8.4 on Tue Nov 23 09:48:28 2021
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWX - [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -j LIBVIRT_INP
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -o br-59ca438f4591 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-59ca438f4591 -j DOCKER
-A FORWARD -i br-59ca438f4591 ! -o br-59ca438f4591 -j ACCEPT
-A FORWARD -i br-59ca438f4591 -o br-59ca438f4591 -j ACCEPT
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
-A LIBVIRT_FWO -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWI -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-59ca438f4591 ! -o br-59ca438f4591 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-59ca438f4591 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
COMMIT
# Completed on Tue Nov 23 09:48:28 2021
# Generated by iptables-save v1.8.4 on Tue Nov 23 09:48:28 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LIBVIRT_PRT - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-WUMTZJLBIHT6QFYJ - [0:0]
:DOCKER - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SEP-BHAOPIJUHF4GBMP7 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SEP-BZHHBMQSWZWBVQVH - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SEP-PZXRNLJ3TMU4GE65 - [0:0]
:KUBE-SEP-SY2IDVYD6QHI4OSE - [0:0]
:KUBE-SEP-SN5KQBCPIOR75D2V - [0:0]
:KUBE-SEP-UKXHRPYURGFMIXEC - [0:0]
:KUBE-SVC-SIMN7BSCUVAV2FC7 - [0:0]
:KUBE-SEP-JZE46AANXZMPOG6E - [0:0]
:KUBE-SVC-NVNLZVDQSGQUD3NM - [0:0]
:KUBE-SEP-JGORAKDXFN2C552T - [0:0]
:KUBE-SVC-WHNIZNLB5XFXIX2C - [0:0]
:KUBE-SEP-765EF7GFE5EPKZPF - [0:0]
:KUBE-SVC-XHUBMW47Y5G3ICIS - [0:0]
:KUBE-SEP-PUBIY4UWEXDC3U2B - [0:0]
:KUBE-SVC-CG3LQLBYYHBKATGN - [0:0]
:KUBE-SEP-D4PT4JE4IH3ZSRCU - [0:0]
:KUBE-SVC-S4S242M2WNFIAT6Y - [0:0]
:KUBE-SEP-BYVWZYGQLZSLXO7N - [0:0]
:KUBE-SVC-G6D3V5KS3PXPUEDS - [0:0]
:KUBE-SEP-IIYQ4WXDTEOK6OD5 - [0:0]
:KUBE-SVC-7N6LHPYFOVFT454K - [0:0]
:KUBE-SEP-Z7GS6PWWK5U4RYAH - [0:0]
:KUBE-SVC-62L5C2KEOX6ICGVJ - [0:0]
:KUBE-SEP-QYABR3OCVCVPKBDB - [0:0]
:KUBE-SVC-TFRZ6Y6WOLX5SOWZ - [0:0]
:KUBE-SEP-ZCZ3ZIHDOBOCPGZJ - [0:0]
:KUBE-SVC-IBZWWK3KTI7UHZ5A - [0:0]
:KUBE-SEP-JRW2B4SW6QS2OGYA - [0:0]
:KUBE-SVC-F2IARDLERJIFF7VR - [0:0]
:KUBE-SEP-VUG5WQMKBHWAN7ZE - [0:0]
:KUBE-SVC-53SQRANQXVHTJ6HK - [0:0]
:KUBE-SEP-RHCJ5EV7FHKCNZUU - [0:0]
:KUBE-SEP-VN2563Y2S4SEHFZM - [0:0]
:KUBE-SEP-IWJWCF4BOHEY36BZ - [0:0]
:KUBE-SVC-SB7WEE53EMIXFNKY - [0:0]
:KUBE-SEP-FG7KL5MWSB7UGTBW - [0:0]
:KUBE-SVC-4MYBDLPZ2DFGC5Z6 - [0:0]
:KUBE-SEP-5NOPIYZ2DEMWJ4BE - [0:0]
:KUBE-SVC-ROH4UCJ7RVN2OSM4 - [0:0]
:KUBE-SEP-EXEAVA34QMIGNSLM - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.18.0.0/16 ! -o br-59ca438f4591 -j MASQUERADE
-A POSTROUTING -j LIBVIRT_PRT
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.98.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A LIBVIRT_PRT -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
-A LIBVIRT_PRT -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
-A LIBVIRT_PRT -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A LIBVIRT_PRT -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A LIBVIRT_PRT -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-7N6LHPYFOVFT454K
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp cluster IP" -m tcp --dport 31400 -j KUBE-SVC-62L5C2KEOX6ICGVJ
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port cluster IP" -m tcp --dport 15021 -j KUBE-SVC-TFRZ6Y6WOLX5SOWZ
-A KUBE-SERVICES -d 10.1.236.187/32 -p tcp -m comment --comment "default/reviews:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-53SQRANQXVHTJ6HK
-A KUBE-SERVICES -d 10.1.108.219/32 -p tcp -m comment --comment "istio-operator/istio-operator:http-metrics cluster IP" -m tcp --dport 8383 -j KUBE-SVC-SIMN7BSCUVAV2FC7
-A KUBE-SERVICES -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:grpc-xds cluster IP" -m tcp --dport 15010 -j KUBE-SVC-NVNLZVDQSGQUD3NM
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-webhook cluster IP" -m tcp --dport 443 -j KUBE-SVC-WHNIZNLB5XFXIX2C
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls cluster IP" -m tcp --dport 15443 -j KUBE-SVC-S4S242M2WNFIAT6Y
-A KUBE-SERVICES -d 10.1.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:http-monitoring cluster IP" -m tcp --dport 15014 -j KUBE-SVC-XHUBMW47Y5G3ICIS
-A KUBE-SERVICES -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-SVC-IBZWWK3KTI7UHZ5A
-A KUBE-SERVICES -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-F2IARDLERJIFF7VR
-A KUBE-SERVICES -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-SVC-G6D3V5KS3PXPUEDS
-A KUBE-SERVICES -d 10.1.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.1.164.17/32 -p tcp -m comment --comment "default/details:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-SB7WEE53EMIXFNKY
-A KUBE-SERVICES -d 10.1.123.2/32 -p tcp -m comment --comment "default/ratings:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-4MYBDLPZ2DFGC5Z6
-A KUBE-SERVICES -d 10.1.176.191/32 -p tcp -m comment --comment "default/productpage:http cluster IP" -m tcp --dport 9080 -j KUBE-SVC-ROH4UCJ7RVN2OSM4
-A KUBE-SERVICES -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-dns cluster IP" -m tcp --dport 15012 -j KUBE-SVC-CG3LQLBYYHBKATGN
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:https" -m tcp --dport 31088 -j KUBE-SVC-7N6LHPYFOVFT454K
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp" -m tcp --dport 32587 -j KUBE-SVC-62L5C2KEOX6ICGVJ
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port" -m tcp --dport 32272 -j KUBE-SVC-TFRZ6Y6WOLX5SOWZ
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls" -m tcp --dport 31800 -j KUBE-SVC-S4S242M2WNFIAT6Y
-A KUBE-NODEPORTS -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2" -m tcp --dport 31614 -j KUBE-SVC-G6D3V5KS3PXPUEDS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.1.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-WUMTZJLBIHT6QFYJ
-A KUBE-SEP-WUMTZJLBIHT6QFYJ -s 10.98.41.167/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-WUMTZJLBIHT6QFYJ -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.98.41.167:6443
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-59ca438f4591 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 5000 -j DNAT --to-destination 172.17.0.2:5000
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.1.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SY2IDVYD6QHI4OSE
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-BHAOPIJUHF4GBMP7
-A KUBE-SEP-BHAOPIJUHF4GBMP7 -s 10.98.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-BHAOPIJUHF4GBMP7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.98.0.3:53
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SN5KQBCPIOR75D2V
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-BZHHBMQSWZWBVQVH
-A KUBE-SEP-BZHHBMQSWZWBVQVH -s 10.98.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-BZHHBMQSWZWBVQVH -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.98.0.3:53
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.1.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UKXHRPYURGFMIXEC
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-PZXRNLJ3TMU4GE65
-A KUBE-SEP-PZXRNLJ3TMU4GE65 -s 10.98.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-PZXRNLJ3TMU4GE65 -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.98.0.3:9153
-A KUBE-SEP-SY2IDVYD6QHI4OSE -s 10.98.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SY2IDVYD6QHI4OSE -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.98.0.2:53
-A KUBE-SEP-SN5KQBCPIOR75D2V -s 10.98.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-SN5KQBCPIOR75D2V -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.98.0.2:53
-A KUBE-SEP-UKXHRPYURGFMIXEC -s 10.98.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-UKXHRPYURGFMIXEC -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.98.0.2:9153
-A KUBE-SVC-SIMN7BSCUVAV2FC7 ! -s 10.244.0.0/16 -d 10.1.108.219/32 -p tcp -m comment --comment "istio-operator/istio-operator:http-metrics cluster IP" -m tcp --dport 8383 -j KUBE-MARK-MASQ
-A KUBE-SVC-SIMN7BSCUVAV2FC7 -m comment --comment "istio-operator/istio-operator:http-metrics" -j KUBE-SEP-JZE46AANXZMPOG6E
-A KUBE-SEP-JZE46AANXZMPOG6E -s 10.98.4.9/32 -m comment --comment "istio-operator/istio-operator:http-metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-JZE46AANXZMPOG6E -p tcp -m comment --comment "istio-operator/istio-operator:http-metrics" -m tcp -j DNAT --to-destination 10.98.4.9:8383
-A KUBE-SVC-NVNLZVDQSGQUD3NM ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:grpc-xds cluster IP" -m tcp --dport 15010 -j KUBE-MARK-MASQ
-A KUBE-SVC-NVNLZVDQSGQUD3NM -m comment --comment "istio-system/istiod:grpc-xds" -j KUBE-SEP-JGORAKDXFN2C552T
-A KUBE-SEP-JGORAKDXFN2C552T -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:grpc-xds" -j KUBE-MARK-MASQ
-A KUBE-SEP-JGORAKDXFN2C552T -p tcp -m comment --comment "istio-system/istiod:grpc-xds" -m tcp -j DNAT --to-destination 10.98.3.6:15010
-A KUBE-SVC-WHNIZNLB5XFXIX2C ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-webhook cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-WHNIZNLB5XFXIX2C -m comment --comment "istio-system/istiod:https-webhook" -j KUBE-SEP-765EF7GFE5EPKZPF
-A KUBE-SEP-765EF7GFE5EPKZPF -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:https-webhook" -j KUBE-MARK-MASQ
-A KUBE-SEP-765EF7GFE5EPKZPF -p tcp -m comment --comment "istio-system/istiod:https-webhook" -m tcp -j DNAT --to-destination 10.98.3.6:15017
-A KUBE-SVC-XHUBMW47Y5G3ICIS ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:http-monitoring cluster IP" -m tcp --dport 15014 -j KUBE-MARK-MASQ
-A KUBE-SVC-XHUBMW47Y5G3ICIS -m comment --comment "istio-system/istiod:http-monitoring" -j KUBE-SEP-PUBIY4UWEXDC3U2B
-A KUBE-SEP-PUBIY4UWEXDC3U2B -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:http-monitoring" -j KUBE-MARK-MASQ
-A KUBE-SEP-PUBIY4UWEXDC3U2B -p tcp -m comment --comment "istio-system/istiod:http-monitoring" -m tcp -j DNAT --to-destination 10.98.3.6:15014
-A KUBE-SVC-CG3LQLBYYHBKATGN ! -s 10.244.0.0/16 -d 10.1.65.239/32 -p tcp -m comment --comment "istio-system/istiod:https-dns cluster IP" -m tcp --dport 15012 -j KUBE-MARK-MASQ
-A KUBE-SVC-CG3LQLBYYHBKATGN -m comment --comment "istio-system/istiod:https-dns" -j KUBE-SEP-D4PT4JE4IH3ZSRCU
-A KUBE-SEP-D4PT4JE4IH3ZSRCU -s 10.98.3.6/32 -m comment --comment "istio-system/istiod:https-dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-D4PT4JE4IH3ZSRCU -p tcp -m comment --comment "istio-system/istiod:https-dns" -m tcp -j DNAT --to-destination 10.98.3.6:15012
-A KUBE-SVC-S4S242M2WNFIAT6Y ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls cluster IP" -m tcp --dport 15443 -j KUBE-MARK-MASQ
-A KUBE-SVC-S4S242M2WNFIAT6Y -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls" -m tcp --dport 31800 -j KUBE-MARK-MASQ
-A KUBE-SVC-S4S242M2WNFIAT6Y -m comment --comment "istio-system/istio-ingressgateway:tls" -j KUBE-SEP-BYVWZYGQLZSLXO7N
-A KUBE-SEP-BYVWZYGQLZSLXO7N -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:tls" -j KUBE-MARK-MASQ
-A KUBE-SEP-BYVWZYGQLZSLXO7N -p tcp -m comment --comment "istio-system/istio-ingressgateway:tls" -m tcp -j DNAT --to-destination 10.98.4.10:15443
-A KUBE-SVC-G6D3V5KS3PXPUEDS ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-G6D3V5KS3PXPUEDS -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2" -m tcp --dport 31614 -j KUBE-MARK-MASQ
-A KUBE-SVC-G6D3V5KS3PXPUEDS -m comment --comment "istio-system/istio-ingressgateway:http2" -j KUBE-SEP-IIYQ4WXDTEOK6OD5
-A KUBE-SEP-IIYQ4WXDTEOK6OD5 -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:http2" -j KUBE-MARK-MASQ
-A KUBE-SEP-IIYQ4WXDTEOK6OD5 -p tcp -m comment --comment "istio-system/istio-ingressgateway:http2" -m tcp -j DNAT --to-destination 10.98.4.10:8080
-A KUBE-SVC-7N6LHPYFOVFT454K ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-7N6LHPYFOVFT454K -p tcp -m comment --comment "istio-system/istio-ingressgateway:https" -m tcp --dport 31088 -j KUBE-MARK-MASQ
-A KUBE-SVC-7N6LHPYFOVFT454K -m comment --comment "istio-system/istio-ingressgateway:https" -j KUBE-SEP-Z7GS6PWWK5U4RYAH
-A KUBE-SEP-Z7GS6PWWK5U4RYAH -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-Z7GS6PWWK5U4RYAH -p tcp -m comment --comment "istio-system/istio-ingressgateway:https" -m tcp -j DNAT --to-destination 10.98.4.10:8443
-A KUBE-SVC-62L5C2KEOX6ICGVJ ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp cluster IP" -m tcp --dport 31400 -j KUBE-MARK-MASQ
-A KUBE-SVC-62L5C2KEOX6ICGVJ -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp" -m tcp --dport 32587 -j KUBE-MARK-MASQ
-A KUBE-SVC-62L5C2KEOX6ICGVJ -m comment --comment "istio-system/istio-ingressgateway:tcp" -j KUBE-SEP-QYABR3OCVCVPKBDB
-A KUBE-SEP-QYABR3OCVCVPKBDB -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-QYABR3OCVCVPKBDB -p tcp -m comment --comment "istio-system/istio-ingressgateway:tcp" -m tcp -j DNAT --to-destination 10.98.4.10:31400
-A KUBE-SVC-TFRZ6Y6WOLX5SOWZ ! -s 10.244.0.0/16 -d 10.1.226.63/32 -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port cluster IP" -m tcp --dport 15021 -j KUBE-MARK-MASQ
-A KUBE-SVC-TFRZ6Y6WOLX5SOWZ -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port" -m tcp --dport 32272 -j KUBE-MARK-MASQ
-A KUBE-SVC-TFRZ6Y6WOLX5SOWZ -m comment --comment "istio-system/istio-ingressgateway:status-port" -j KUBE-SEP-ZCZ3ZIHDOBOCPGZJ
-A KUBE-SEP-ZCZ3ZIHDOBOCPGZJ -s 10.98.4.10/32 -m comment --comment "istio-system/istio-ingressgateway:status-port" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCZ3ZIHDOBOCPGZJ -p tcp -m comment --comment "istio-system/istio-ingressgateway:status-port" -m tcp -j DNAT --to-destination 10.98.4.10:15021
-A KUBE-SVC-IBZWWK3KTI7UHZ5A ! -s 10.244.0.0/16 -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:http2 cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SVC-IBZWWK3KTI7UHZ5A -m comment --comment "istio-system/istio-egressgateway:http2" -j KUBE-SEP-JRW2B4SW6QS2OGYA
-A KUBE-SEP-JRW2B4SW6QS2OGYA -s 10.98.3.7/32 -m comment --comment "istio-system/istio-egressgateway:http2" -j KUBE-MARK-MASQ
-A KUBE-SEP-JRW2B4SW6QS2OGYA -p tcp -m comment --comment "istio-system/istio-egressgateway:http2" -m tcp -j DNAT --to-destination 10.98.3.7:8080
-A KUBE-SVC-F2IARDLERJIFF7VR ! -s 10.244.0.0/16 -d 10.1.255.46/32 -p tcp -m comment --comment "istio-system/istio-egressgateway:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-F2IARDLERJIFF7VR -m comment --comment "istio-system/istio-egressgateway:https" -j KUBE-SEP-VUG5WQMKBHWAN7ZE
-A KUBE-SEP-VUG5WQMKBHWAN7ZE -s 10.98.3.7/32 -m comment --comment "istio-system/istio-egressgateway:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-VUG5WQMKBHWAN7ZE -p tcp -m comment --comment "istio-system/istio-egressgateway:https" -m tcp -j DNAT --to-destination 10.98.3.7:8443
-A KUBE-SVC-53SQRANQXVHTJ6HK ! -s 10.244.0.0/16 -d 10.1.236.187/32 -p tcp -m comment --comment "default/reviews:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-53SQRANQXVHTJ6HK -m comment --comment "default/reviews:http" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-VN2563Y2S4SEHFZM
-A KUBE-SVC-53SQRANQXVHTJ6HK -m comment --comment "default/reviews:http" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-RHCJ5EV7FHKCNZUU
-A KUBE-SVC-53SQRANQXVHTJ6HK -m comment --comment "default/reviews:http" -j KUBE-SEP-IWJWCF4BOHEY36BZ
-A KUBE-SEP-RHCJ5EV7FHKCNZUU -s 10.98.4.31/32 -m comment --comment "default/reviews:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-RHCJ5EV7FHKCNZUU -p tcp -m comment --comment "default/reviews:http" -m tcp -j DNAT --to-destination 10.98.4.31:9080
-A KUBE-SEP-VN2563Y2S4SEHFZM -s 10.98.3.29/32 -m comment --comment "default/reviews:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-VN2563Y2S4SEHFZM -p tcp -m comment --comment "default/reviews:http" -m tcp -j DNAT --to-destination 10.98.3.29:9080
-A KUBE-SEP-IWJWCF4BOHEY36BZ -s 10.98.4.32/32 -m comment --comment "default/reviews:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-IWJWCF4BOHEY36BZ -p tcp -m comment --comment "default/reviews:http" -m tcp -j DNAT --to-destination 10.98.4.32:9080
-A KUBE-SVC-SB7WEE53EMIXFNKY ! -s 10.244.0.0/16 -d 10.1.164.17/32 -p tcp -m comment --comment "default/details:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-SB7WEE53EMIXFNKY -m comment --comment "default/details:http" -j KUBE-SEP-FG7KL5MWSB7UGTBW
-A KUBE-SEP-FG7KL5MWSB7UGTBW -s 10.98.4.30/32 -m comment --comment "default/details:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-FG7KL5MWSB7UGTBW -p tcp -m comment --comment "default/details:http" -m tcp -j DNAT --to-destination 10.98.4.30:9080
-A KUBE-SVC-4MYBDLPZ2DFGC5Z6 ! -s 10.244.0.0/16 -d 10.1.123.2/32 -p tcp -m comment --comment "default/ratings:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-4MYBDLPZ2DFGC5Z6 -m comment --comment "default/ratings:http" -j KUBE-SEP-5NOPIYZ2DEMWJ4BE
-A KUBE-SEP-5NOPIYZ2DEMWJ4BE -s 10.98.3.28/32 -m comment --comment "default/ratings:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-5NOPIYZ2DEMWJ4BE -p tcp -m comment --comment "default/ratings:http" -m tcp -j DNAT --to-destination 10.98.3.28:9080
-A KUBE-SVC-ROH4UCJ7RVN2OSM4 ! -s 10.244.0.0/16 -d 10.1.176.191/32 -p tcp -m comment --comment "default/productpage:http cluster IP" -m tcp --dport 9080 -j KUBE-MARK-MASQ
-A KUBE-SVC-ROH4UCJ7RVN2OSM4 -m comment --comment "default/productpage:http" -j KUBE-SEP-EXEAVA34QMIGNSLM
-A KUBE-SEP-EXEAVA34QMIGNSLM -s 10.98.3.30/32 -m comment --comment "default/productpage:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-EXEAVA34QMIGNSLM -p tcp -m comment --comment "default/productpage:http" -m tcp -j DNAT --to-destination 10.98.3.30:9080
COMMIT
# Completed on Tue Nov 23 09:48:28 2021
# Generated by iptables-save v1.8.4 on Tue Nov 23 09:48:28 2021
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:LIBVIRT_PRT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
-A POSTROUTING -j LIBVIRT_PRT
-A LIBVIRT_PRT -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
# Completed on Tue Nov 23 09:48:28 2021
I mean in the pod and/or VM that has the issue.
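As a concrete version of that check: when sidecar interception is active, the nat table should contain Istio chains such as ISTIO_REDIRECT / ISTIO_OUTPUT / ISTIO_INBOUND. The dump pasted above has none, which fits the theory that mTLS traffic is reaching the app directly. A minimal sketch (the excerpt string below is illustrative, not the full dump):

```shell
#!/usr/bin/env bash
# Succeeds if an iptables-save dump contains Istio interception chains.
has_istio_chains() {
  grep -q 'ISTIO_' <<< "$1"
}

# On the affected pod/VM you would feed it the real output:
#   sudo iptables-save -t nat | grep 'ISTIO_'
# Here we use an excerpt shaped like the dump above, which has no such chains.
dump=':PREROUTING ACCEPT [0:0]
:KUBE-SERVICES - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES'

if has_istio_chains "$dump"; then
  echo "sidecar interception active"
else
  echo "no Istio chains: traffic bypasses the sidecar"
fi
```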
I receive the same error when two pods are in the same mesh but one of them has the following annotations (so all of its traffic bypasses Istio).
Error:
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
Annotations on one of the pods:

```yaml
traffic.sidecar.istio.io/includeInboundPorts: ""
traffic.sidecar.istio.io/includeOutboundPorts: ""
```
To work around this, it is currently necessary to create a DestinationRule as described above; in my case I added one for a specific service:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: sbc
spec:
  host: sbc
  trafficPolicy:
    tls:
      mode: DISABLE
```
Is this the correct behavior?
I also hit this TLS error when configuring the annotation traffic.sidecar.istio.io/includeInboundPorts: "" to make inbound traffic bypass the sidecar.
This solution is useful and worked for me.
We had the same issue, and setting the tls block with mode DISABLE fixed it on our side.
I have a question for @lxv458 and @Noksa about your Istio configuration: are you in strict or permissive mode? On our side we are in permissive mode, so maybe it's a bug with permissive mode when the tls block is not set.
@Dudesons I didn't change this setting, and as far as I know it is permissive by default. So in my case it is also permissive mode.
This also affects the instructions for running Prometheus in a (mostly) strict mesh: https://istio.io/latest/docs/ops/integrations/prometheus/
Grafana and Prometheus are running with sidecars using the configurations above, and Grafana is unable to talk to Prometheus. Additionally, we're unable to route to it from a gateway using a VirtualService. Both scenarios get the SSL wrong-version error.
We are also getting the same error below when we enable the mTLS option in the mesh:
"268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER"
Currently the destination rule for each service is set to STRICT mode.
Is there any resolution other than disabling the TLS mode in the destination rule?
@Dudesons I also use the default settings, it should be permissive mode
I had a similar problem, and adding a PeerAuthentication solved it:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  generation: 4
  name: myapp
  namespace: mynamespace
spec:
  mtls:
    mode: PERMISSIVE
  selector:
    matchLabels:
      app: myapp
For the value of mtls.mode, both PERMISSIVE and STRICT work; UNSET does not.
Ref: https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls (Chinese: https://istio.io/latest/zh/docs/tasks/security/authentication/authn-policy/)
For the value of mtls.mode, both PERMISSIVE and STRICT work; UNSET does not.
That seems odd, PERMISSIVE mode is identical to unset
@howardjohn I think I know why. I found that my cluster (istio version: 1.8.4-r2) has a global PeerAuthentication with mode DISABLE:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: DISABLE
and the tls.mode set in the DestinationRule of myapp is ISTIO_MUTUAL. The two configurations are inconsistent, and I get the 'OPENSSL_internal:WRONG_VERSION_NUMBER' error.
DR:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp
  namespace: mynamespace
spec:
  host: myapp.mynamespace.svc.cluster.local
  subsets:
  - labels:
      version: v51777
    name: v51777
  - labels:
      version: 20220208091536-v26641
    name: 20220208091536-v26641
  trafficPolicy:
    connectionPool:
      http:
        idleTimeout: 15s
      tcp:
        maxConnections: 2048
    loadBalancer:
      simple: ROUND_ROBIN
    tls:
      mode: ISTIO_MUTUAL
Will creating a PeerAuthentication object with mTLS disabled at the mesh level solve the OPENSSL_internal:WRONG_VERSION_NUMBER problem?
@nataraj24 I think the key point is that the TLS config in the PeerAuthentication and the DestinationRule should not conflict.
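To make the "should not conflict" point concrete, here is a hypothetical checker (not an Istio API; only the mode names come from the PeerAuthentication and DestinationRule specs):

```python
def tls_config_conflicts(peer_auth_mode: str, dr_tls_mode: str) -> bool:
    """Hypothetical consistency check between a PeerAuthentication
    mtls.mode and a DestinationRule trafficPolicy.tls.mode.

    The conflicting pairs: the client originates mTLS (ISTIO_MUTUAL)
    while the server refuses TLS (DISABLE), or the client sends
    plaintext (DISABLE) while the server requires mTLS (STRICT)."""
    if dr_tls_mode == "ISTIO_MUTUAL" and peer_auth_mode == "DISABLE":
        return True  # client sends TLS, server expects plaintext
    if dr_tls_mode == "DISABLE" and peer_auth_mode == "STRICT":
        return True  # client sends plaintext, server requires mTLS
    return False

# The combination reported above: global DISABLE plus ISTIO_MUTUAL DR.
print(tls_config_conflicts("DISABLE", "ISTIO_MUTUAL"))     # True
print(tls_config_conflicts("PERMISSIVE", "ISTIO_MUTUAL"))  # False
```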
Will creating a PeerAuthentication object with mTLS disabled at the mesh level solve the OPENSSL_internal:WRONG_VERSION_NUMBER problem?
Did that solve the problem or not? I have the same problem. For now, please refer to this https://github.com/istio/istio/issues/35870#issuecomment-975022907.
I have the same issue here, following the docs from https://istio.io/latest/docs/setup/install/virtual-machine/#configure-the-virtual-machine.
In my case, the problem was that my test on the VM side used docker exposing a port in the default (bridge) network mode: sudo docker run --rm -it -p 8080:80 nginx.
If I switch to host network mode, it works normally (with traffic VM->MESH & MESH->VM): sudo docker run --rm -it --network host nginx.
I believe it was a problem with the iptables rules: the traffic was going directly to the nginx port instead of passing through the envoy proxy. With --network host, the traffic goes to Envoy first and then to the nginx port.
I can also confirm that if I DISABLE mTLS, it works with docker on the 'default' or the 'host' network. Below is my peerauthentication.yaml with mTLS DISABLED:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: test-istio-vm
  namespace: istio-vm
spec:
  mtls:
    mode: DISABLE
  selector:
    matchLabels:
      app: test-istio-vm
@sanwen Can you test it on master? 1.8.4 is not maintained.
@andrevcf We cannot figure out what's wrong from your info. Can you show your WorkloadEntry and ServiceEntry? And if you suspect the traffic is not intercepted, you can also run iptables-save.
@hzxuzhonghu, my workloadentry is pointing to a vm with nginx running inside docker
(inside the vm: sudo docker run --rm -it -p 80:80 nginx
)
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: test-istio-vm
spec:
  serviceAccount: istio-vm-sa
  address: 10.205.2.8
  labels:
    app: test-istio-vm
    instance-id: test-istio-vm
The Service on k8s, created just for testing purposes:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-istio-vm
  name: test-istio-vm
  namespace: istio-vm
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-istio-vm
  type: ClusterIP
If I call the service inside the mesh with curl http://test-istio-vm.istio-vm:80, it returns:
< HTTP/1.1 503 Service Unavailable
< content-length: 190
< content-type: text/plain
< date: Mon, 25 Apr 2022 01:35:51 GMT
< server: envoy
<
* Connection #0 to host test-istio-vm.istio-vm left intact
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
iptables-save WITHOUT --net host
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*raw
:PREROUTING ACCEPT [3976:7885478]
:OUTPUT ACCEPT [4389:1350592]
-A PREROUTING -d 127.0.0.53/32 -p udp -m udp --sport 53 -j CT --zone 1
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --uid-owner 997 -j CT --zone 2
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --gid-owner 997 -j CT --zone 2
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j CT --zone 2
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*mangle
:PREROUTING ACCEPT [67542:336741183]
:INPUT ACCEPT [67374:336720441]
:FORWARD ACCEPT [168:20742]
:OUTPUT ACCEPT [73603:136411188]
:POSTROUTING ACCEPT [61890:135720222]
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*filter
:INPUT ACCEPT [37:11475]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [51:15403]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [4:240]
:POSTROUTING ACCEPT [3:211]
:DOCKER - [0:0]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j RETURN
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j RETURN
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j REDIRECT --to-ports 15053
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
-A ISTIO_INBOUND -p tcp -m tcp --dport 15008 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -p tcp -m tcp ! --dport 53 -m owner --uid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.53/32 -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 15053
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:42:26 2022
*security
:INPUT ACCEPT [1675:2707628]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1608:558317]
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
# Completed on Mon Apr 25 01:42:26 2022
If I just add --net host to the docker run command, then it works:
sudo docker run --rm -it --net host nginx
iptables-save with --net host:
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*raw
:PREROUTING ACCEPT [3114:5319942]
:OUTPUT ACCEPT [3357:1043366]
-A PREROUTING -d 127.0.0.53/32 -p udp -m udp --sport 53 -j CT --zone 1
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --uid-owner 997 -j CT --zone 2
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j CT --zone 1
-A OUTPUT -p udp -m udp --sport 15053 -m owner --gid-owner 997 -j CT --zone 2
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j CT --zone 2
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*mangle
:PREROUTING ACCEPT [66680:334175647]
:INPUT ACCEPT [66512:334154905]
:FORWARD ACCEPT [168:20742]
:OUTPUT ACCEPT [72568:136104210]
:POSTROUTING ACCEPT [61055:135425244]
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*filter
:INPUT ACCEPT [440:72942]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [476:122758]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [84:5229]
:POSTROUTING ACCEPT [35:2413]
:DOCKER - [0:0]
:ISTIO_INBOUND - [0:0]
:ISTIO_IN_REDIRECT - [0:0]
:ISTIO_OUTPUT - [0:0]
:ISTIO_REDIRECT - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A OUTPUT -p udp -m udp --dport 53 -m owner --uid-owner 997 -j RETURN
-A OUTPUT -p udp -m udp --dport 53 -m owner --gid-owner 997 -j RETURN
-A OUTPUT -d 127.0.0.53/32 -p udp -m udp --dport 53 -j REDIRECT --to-ports 15053
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15008 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A ISTIO_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -p tcp -m tcp ! --dport 53 -m owner --uid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 997 -j RETURN
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -m owner --gid-owner 997 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -p tcp -m tcp ! --dport 53 -m owner ! --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 997 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.53/32 -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 15053
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
# Generated by iptables-save v1.8.4 on Mon Apr 25 01:39:17 2022
*security
:INPUT ACCEPT [813:142092]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [776:266107]
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
# Completed on Mon Apr 25 01:39:17 2022
Can you check the inbound cluster type? I suspect you are using an old Istio version where the inbound cluster type is not ORIGINAL_DST.
istioctl pc cluster xxxx | grep inbound
On the VM you should call the API by hand.
@hzxuzhonghu, inside a pod running in the mesh (productpage of bookinfo):
istioctl pc cluster xxxx | grep inbound
inbound 9080 - inbound ORIGINAL_DST
istioctl version
client version: 1.13.3
control plane version: 1.13.3
data plane version: 1.13.3 (10 proxies), 1.13.0 (2 proxies)
@hzxuzhonghu, this means that I must use --net host for docker to work with istio on a VM? That's fine by me, it works well!
If you want more information I can try to help, but I'm OK with adding --net host if it is a requirement.
The Envoy inbound cluster is ORIGINAL_DST type, so it will send the packet to 10.205.2.8:80. And with -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER, it will not be redirected to 172.xxx:80.
From all your info, I think --net host is a requirement.
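One way to read the two dumps above: in bridge mode the DOCKER chain holds a DNAT for port 80, and since PREROUTING jumps to DOCKER before ISTIO_INBOUND, the packet is rewritten to the container IP and never reaches the 15006 redirect. A rough, hypothetical checker over iptables-save output (not an official tool):

```python
def inbound_bypasses_istio(nat_rules: str, port: int) -> bool:
    """Heuristic over `iptables-save` *nat output: True if the DOCKER
    chain DNATs `port`, i.e. Docker claims the packet before Envoy's
    inbound redirect can see it."""
    return any(
        line.strip().startswith("-A DOCKER")
        and f"--dport {port} " in line
        and "-j DNAT" in line
        for line in nat_rules.splitlines()
    )

# Excerpts from the two *nat dumps in this thread.
bridge_nat = """
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
"""
host_nat = """
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A DOCKER -i docker0 -j RETURN
"""
print(inbound_bypasses_istio(bridge_nat, 80))  # True: Docker DNAT wins
print(inbound_bypasses_istio(host_nat, 80))    # False: falls through to ISTIO_INBOUND
```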
/stale
FWIW - I see the same TLS error/failure signature, but my setup is very different; documenting it here in case it helps others. I am trying an IPv6 scenario, curling from outside the cluster to inside it. I am mostly following the getting-started sequence, with some adjustments required for dual-stack and also accounting for https://github.com/istio/istio/pull/29076. When I define the istio-ingressgateway Service as single-stack IPv4, I can curl from outside just fine with an IPv4 load-balancer IP. When I define the Service as single-stack IPv6 (e.g. an IPv6 load-balancer IP), I can't curl from outside, and logs show the traffic is not hitting the sidecar but directly hitting the productpage container. When I curl from the ingressgateway istio-proxy container using the productpage pod's IPv6 address, it succeeds and the traffic goes through the sidecar. Since this is sufficiently different from the initial complaint, I will open a separate issue.
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2022-10-24. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.
Created by the issue and PR lifecycle manager.
This just happened to me as well, but in a slightly different scenario than described here. The issue was that the Pod running the web server didn't have the Istio sidecar injected, but a DestinationRule was specified for this workload.
Changing the DestinationRule to spec.trafficPolicy.tls.mode: DISABLE helped, but it was a workaround.
Specifying the label istio-injection: enabled on the Namespace, or sidecar.istio.io/inject: "true" in the Pod labels, might help.
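The label precedence involved can be sketched as a small decision function (simplified and hypothetical; the real injection webhook has more cases, and older Istio versions used an annotation instead of the pod label):

```python
def sidecar_injected(ns_labels: dict, pod_labels: dict) -> bool:
    """Simplified sketch of the injection decision: a pod-level
    sidecar.istio.io/inject label overrides the namespace-level
    istio-injection label."""
    pod = pod_labels.get("sidecar.istio.io/inject")
    if pod is not None:
        return pod == "true"
    return ns_labels.get("istio-injection") == "enabled"

print(sidecar_injected({"istio-injection": "enabled"}, {}))           # True
print(sidecar_injected({}, {"sidecar.istio.io/inject": "true"}))      # True
print(sidecar_injected({"istio-injection": "enabled"},
                       {"sidecar.istio.io/inject": "false"}))         # False
```

If the pod ends up without a sidecar while a DestinationRule still requests ISTIO_MUTUAL toward it, you get exactly the WRONG_VERSION_NUMBER symptom from this thread.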
Bug Description
Mesh -> VM:
the log of bash:
bash-5.1# curl helloworld.vm-test.svc.cluster.local:8500/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
bash-5.1#
Traffic in this direction failed.
VM -> Mesh: This is correct
Version
Additional Information
the YAML of the ServiceEntry (se) and WorkloadEntry (we):
I can access it by IP + port:
the config of vm: cluster.env
istio-token
mesh.yaml:
root-cert.pem:
the istio configmap: