apache / apisix

The Cloud-Native API Gateway
https://apisix.apache.org/blog/
Apache License 2.0

SSL_set_tlsext_host_name failed. Retrying #8314

Closed lxyqwer closed 10 months ago

lxyqwer commented 1 year ago

Description

```
2022/11/11 22:33:09 [info] 64#64: 218959 [lua] init.lua:130: handler(): uri: ["","apisix","admin","routes"], client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [info] 64#64: 218959 [lua] v3.lua:155: _request_uri(): v3 request uri: /kv/range, timeout: 30, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] v3.lua:213: _request_uri(): https://192.168.0.203:2279: SSL_set_tlsext_host_name failed. Retrying, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] v3.lua:213: _request_uri(): https://192.168.0.203:2279: SSL_set_tlsext_host_name failed. Retrying, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] health_check.lua:105: report_failure(): update endpoint: https://192.168.0.203:2279 to unhealthy, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] v3.lua:213: _request_uri(): https://192.168.0.203:2279: SSL_set_tlsext_host_name failed. Retrying, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] v3.lua:213: _request_uri(): https://192.168.0.207:2279: SSL_set_tlsext_host_name failed. Retrying, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] v3.lua:213: _request_uri(): https://192.168.0.207:2279: SSL_set_tlsext_host_name failed. Retrying, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] health_check.lua:105: report_failure(): update endpoint: https://192.168.0.207:2279 to unhealthy, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [warn] 64#64: 218959 [lua] v3.lua:213: _request_uri(): https://192.168.0.207:2279: SSL_set_tlsext_host_name failed. Retrying, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [error] 64#64: 218959 [lua] routes.lua:201: failed to get route[/routes] from etcd: has no healthy etcd endpoint available, client: 10.244.3.240, server: , request: "GET /apisix/admin/routes HTTP/1.1", host: "apisix-admin.apisix.svc.cluster.local:9180"
2022/11/11 22:33:09 [info] 65#65: 221199 [lua] config_etcd.lua:332: sync_data(): waitdir key: /apisix/proto prev_index: 224, context: ngx.timer
2022/11/11 22:33:09 [info] 65#65: 221199 [lua] config_etcd.lua:333: sync_data(): res: null, context: ngx.timer
2022/11/11 22:33:09 [error] 65#65: 221199 [lua] config_etcd.lua:568: no healthy etcd endpoint available, next retry after 2s, context: ngx.timer
2022/11/11 22:33:09 [info] 66#66: 195540 [lua] config_etcd.lua:332: sync_data(): waitdir key: /apisix/services prev_index: 224, context: ngx.timer
2022/11/11 22:33:09 [info] 66#66: 195540 [lua] config_etcd.lua:333: sync_data(): res: null, context: ngx.timer
2022/11/11 22:33:09 [error] 66#66: 195540 [lua] config_etcd.lua:568: no healthy etcd endpoint available, next retry after 32s, context: ngx.timer
2022/11/11 22:33:09 [info] 54#54: 221327 [lua] timers.lua:39: run timer[plugin#server-info], context: ngx.timer
2022/11/11 22:33:10 [info] 65#65: 221011 [lua] config_etcd.lua:332: sync_data(): waitdir key: /apisix/routes prev_index: 224, context: ngx.timer
2022/11/11 22:33:10 [info] 65#65: 221011 [lua] config_etcd.lua:333: sync_data(): res: null, context: ngx.timer
2022/11/11 22:33:10 [error] 65#65: 221011 [lua] config_etcd.lua:568: no healthy etcd endpoint available, next retry after 4s, context: ngx.timer
2022/11/11 22:33:10 [info] 65#65: 221056 [lua] config_etcd.lua:332: sync_data(): waitdir key: /apisix/consumers prev_index: 224, context: ngx.timer
2022/11/11 22:33:10 [info] 65#65: 221056 [lua] config_etcd.lua:333: sync_data(): res: null, context: ngx.timer
2022/11/11 22:33:10 [error] 65#65: 221056 [lua] config_etcd.lua:568: no healthy etcd endpoint available, next retry after 4s, context: ngx.timer
2022/11/11 22:33:10 [info] 54#54: 221442 [lua] timers.lua:39: run timer[plugin#server-info], context: ngx.timer
2022/11/11 22:33:10 [info] 65#65: 221096 [lua] config_etcd.lua:332: sync_data(): waitdir key: /apisix/plugin_configs prev_index: 224, context: ngx.timer
2022/11/11 22:33:10 [info] 65#65: 221096 [lua] config_etcd.lua:333: sync_data(): res: null, context: ngx.timer
2022/11/11 22:33:10 [error] 65#65: 221096 [lua] config_etcd.lua:568: no healthy etcd endpoint available, next retry after 4s, context: ngx.timer
```

Deployed via Helm. APISIX starts successfully, but the errors above appear. After adding a route in the APISIX dashboard, it does not take effect; it only takes effect after restarting APISIX. etcd reports as healthy.

Environment

tokers commented 1 year ago

How did you install apisix? Via helm?

lxyqwer commented 1 year ago

Yes. Last night I disabled etcd TLS and it recovered, but when I tried again this morning the problem reappeared; restarting APISIX fixed it again. Below is my values.yaml configuration:

```yaml
global:
  imagePullSecrets: []

apisix:
  enabled: true
  enableIPv6: false
  enableServerTokens: false
  setIDFromPodUID: false
  customLuaSharedDicts: []
  luaModuleHook:
    enabled: false
    luaPath: ""
    hookPoint: ""
    configMapRef:
      name: ""
      mounts:

nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name: ""

rbac:
  create: true

gateway:
  type: NodePort
  externalTrafficPolicy: Cluster
  externalIPs: []
  http:
    enabled: true
    servicePort: 80
    containerPort: 9080
  tls:
    enabled: false
    servicePort: 443
    containerPort: 9443
    existingCASecret: ""
    certCAFilename: ""
    http2:
      enabled: true
    sslProtocols: "TLSv1.2 TLSv1.3"
  stream:
    enabled: false
    only: false
    tcp: []
    udp: []
  ingress:
    enabled: false
    annotations: {}
    hosts:

admin:
  enabled: true
  type: NodePort
  externalIPs: []
  port: 9180
  servicePort: 9180
  cors: true
  credentials:
    admin: edd1c9f034335f136f87ad84b625c8fa
    viewer: 4054f7cf07e344346cd3f287985e76aa
  allow:
    ipList:

nginx:
  workerRlimitNofile: "20480"
  workerConnections: "10620"
  workerProcesses: auto
  enableCPUAffinity: true
  envs: []

plugins:

pluginAttrs: {}

extPlugin:
  enabled: false
  cmd: ["/path/to/apisix-plugin-runner/runner", "run"]

wasmPlugins:
  enabled: false
  plugins: []

customPlugins:
  enabled: false
  luaPath: "/opts/custom_plugins/?.lua"
  plugins:

updateStrategy: {}

extraVolumes: []

extraVolumeMounts: []

extraInitContainers: []

discovery:
  enabled: true
  registry:
    kubernetes:
      namespace_selector:
        not_equal: default
      service:
      client:
        token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

logs:
  enableAccessLog: true
  accessLog: "/dev/stdout"
  accessLogFormat: '$remote_addr - $remote_user [$time_local] $http_host \"$request\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\"'
  accessLogFormatEscape: default
  errorLog: "/dev/stderr"
  errorLogLevel: "info"

dns:
  resolvers:

initContainer:
  image: busybox
  tag: 1.28

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80

configurationSnippet:
  main: |

  httpStart: |

  httpEnd: |

  httpSrv: |

  httpAdmin: |

  stream: |

serviceMonitor:
  enabled: false
  namespace: ""
  name: ""
  interval: 15s
  path: /apisix/prometheus/metrics
  metricPrefix: apisix_
  containerPort: 9091
  labels: {}
  annotations: {}

etcd:
  enabled: false
  host:

dashboard:
  enabled: true

ingress-controller:
  enabled: false

vault:
  enabled: false
  host: ""
  timeout: 10
  token: ""
  prefix: ""
```

tokers commented 1 year ago

@lxyqwer This is a known issue, and it was fixed in the master branch. See https://github.com/apache/apisix-helm-chart/pull/391 for details.

As a workaround, you can set the `sni` field explicitly (the same as the etcd service name).
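A minimal sketch of that workaround in values.yaml, assuming the chart exposes the etcd TLS settings under `etcd.auth.tls` (the field names and the endpoint here are assumptions; check them against your chart version). The point is that the `sni` value must match a name covered by the etcd server certificate, rather than the raw IP of the endpoint:

```yaml
# Hedged sketch, not the confirmed fix: field names and the hostname below
# are assumptions to be verified against your apisix-helm-chart version.
etcd:
  enabled: false                                      # external etcd, as above
  host:
    - "https://etcd.apisix.svc.cluster.local:2279"    # hypothetical endpoint
  auth:
    tls:
      enabled: true
      # Explicit SNI sent during the TLS handshake; set it to the etcd
      # service name so it matches the server certificate.
      sni: "etcd.apisix.svc.cluster.local"
```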

github-actions[bot] commented 10 months ago

This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@apisix.apache.org list. Thank you for your contributions.

github-actions[bot] commented 10 months ago

This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.