karmada-io / karmada

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
https://karmada.io
Apache License 2.0

MCS: service in another cluster cannot be accessed via derived-serve #2460

Closed 631068264 closed 1 year ago

631068264 commented 1 year ago

What happened: Following https://karmada.io/docs/userguide/service/multi-cluster-service

The application is deployed to member1 and works normally:

kubectl --kubeconfig member1-config run tmp-shell --rm -i --tty --image submariner/nettest:0.12.2 -- /bin/bash

curl serve

'hello from cluster 1 (Node: server-605ae265-df2b-4e91-9efa-19069d84f2d0 Pod: serve-59994d98f6-9bzxj Address: 10.44.0.25)'

But it cannot be accessed from member2:

kubectl --kubeconfig member2-config run tmp-shell --rm -i --tty --image submariner/nettest:0.12.2 -- /bin/bash

curl derived-serve
curl: (7) Failed to connect to derived-serve port 80 after 1003 ms: Connection refused
kubectl --kubeconfig member2-config  get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
derived-serve   ClusterIP   10.47.135.147   <none>        80/TCP    16h
kubernetes      ClusterIP   10.47.0.1       <none>        443/TCP   33d
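A useful first check at this point (a diagnostic sketch using plain kubectl; resource names follow the manifests below) is whether the derived Service actually has any endpoints behind it:

```shell
# Does the derived Service have any backing endpoints?
# An empty "Endpoints:" field means the ClusterIP exists but nothing answers on it,
# which matches the "Connection refused" above.
kubectl --kubeconfig member2-config describe svc derived-serve

# List EndpointSlices in the namespace; with a working MCS sync there should be
# a slice populated with the pod IPs from member1.
kubectl --kubeconfig member2-config get endpointslices -n default
```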

What you expected to happen:

mcs.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serve
  template:
    metadata:
      labels:
        app: serve
    spec:
      containers:
      - name: serve
        image: xxxxx/library/serve:0a40de8
        args:
        - "--message='hello from cluster 1 (Node: {{env \"NODE_NAME\"}} Pod: {{env \"POD_NAME\"}} Address: {{addr}})'"
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
---      
apiVersion: v1
kind: Service
metadata:
  name: serve
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mcs-workload
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: serve
    - apiVersion: v1
      kind: Service
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1

---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-export-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceExport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1

---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-import-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceImport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member2

Deploy:

karmadactl --kubeconfig karmada-apiserver.config apply -f mcs.yaml
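After applying, it can help to confirm that each piece landed where expected (plain kubectl checks; this assumes the ServiceExport/ServiceImport CRDs are installed in the member clusters as the MCS docs require):

```shell
# The Deployment, Service, and ServiceExport should be propagated to member1
kubectl --kubeconfig member1-config get deploy/serve svc/serve
kubectl --kubeconfig member1-config get serviceexport serve

# The ServiceImport goes to member2, where Karmada creates the derived Service
kubectl --kubeconfig member2-config get serviceimport serve
kubectl --kubeconfig member2-config get svc derived-serve
```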

Submariner is already installed and has in fact passed the test at https://submariner.io/operations/usage/#2-export-services-across-clusters — deploy to member1, then:

m1=/root/karmada/member1-config

kubectl --kubeconfig ${m1} create deployment nginx --image=nginx
kubectl --kubeconfig ${m1} expose deployment nginx --port=80
subctl export service --kubeconfig ${m1} --namespace default nginx

curl from member2 works:

kubectl --kubeconfig member2-config run tmp-shell --rm -i --tty --image submariner/nettest:0.12.2 -- /bin/bash

bash-5.1# curl nginx.default.svc.clusterset.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;

Environment:
- Karmada version: 1.2.1
- kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version): version.Info{GitVersion:"v1.2.1", GitCommit:"de4972b74f848f78a58f9a0f4a4e85f243ba48f8", GitTreeState:"clean", BuildDate:"2022-07-14T09:33:33Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
- Others:

631068264 commented 1 year ago

Solved. There is a problem in what is written at https://karmada.io/docs/userguide/service/multi-cluster-service/

XiShanYongYe-Chang commented 1 year ago

What exactly is the problem? You can point it out or open a PR.

631068264 commented 1 year ago

See https://submariner.io/getting-started/architecture/service-discovery/ for how the Lighthouse Agent and Lighthouse DNS Server work.

XiShanYongYe-Chang commented 1 year ago

Both Submariner and Karmada implement the MCS API. What you are referring to is the Submariner implementation; it does not conflict with Karmada's implementation.

631068264 commented 1 year ago

In any case, I couldn't get through it by following the official docs — either that, or Karmada has a bug.

631068264 commented 1 year ago

@XiShanYongYe-Chang So why doesn't it work?

XiShanYongYe-Chang commented 1 year ago

What version are the member clusters? The Karmada MCS feature currently collects EndpointSlice resources at version v1. You can check on cluster member2 whether an EndpointSlice has been synced over from member1.

631068264 commented 1 year ago

The host and member clusters are all v1.19.5+k3s2:

kubectl --kubeconfig member2-config  get endpointSlice -A
NAMESPACE             NAME                                          ADDRESSTYPE   PORTS        ENDPOINTS               AGE
default               nginx-member1                                 IPv4          80           10.44.0.27              5d
default               serve-member1                                 IPv4          8080         10.44.0.29              25h

XiShanYongYe-Chang commented 1 year ago

It looks like the sync did not succeed. If it had synced successfully, there would be EndpointSlices with an imported- prefix, for example: [screenshot]

Please confirm whether the EndpointSlice is v1 (Kubernetes clusters serve the v1 EndpointSlice starting from 1.21).

Related link: https://github.com/karmada-io/karmada/pull/1107#issuecomment-997159415
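One way to check which EndpointSlice versions a member cluster serves (standard kubectl; no Karmada-specific commands involved):

```shell
# List the discovery API group versions the cluster serves.
# A 1.19 cluster typically shows only discovery.k8s.io/v1beta1;
# 1.21+ also serves discovery.k8s.io/v1.
kubectl --kubeconfig member2-config api-versions | grep discovery.k8s.io

# Or inspect the apiVersion of a specific EndpointSlice directly:
kubectl --kubeconfig member2-config get endpointslice serve-member1 -o jsonpath='{.apiVersion}'
```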

631068264 commented 1 year ago

It is not:

addressType: IPv4
apiVersion: discovery.k8s.io/v1beta1
endpoints:
- addresses:
  - 10.44.0.27
  conditions:
    ready: true
  hostname: nginx-6799fc88d8-b7xmw
  topology:
    kubernetes.io/hostname: server-605ae265-df2b-4e91-9efa-19069d84f2d0
kind: EndpointSlice
metadata:

XiShanYongYe-Chang commented 1 year ago

Please try again with a Kubernetes cluster at 1.21 or above.

631068264 commented 1 year ago

So this feature requires 1.21+ (where EndpointSlice is v1)? The main thing is that I saw 1.19 listed as supported... fine, I'll try upgrading.

XiShanYongYe-Chang commented 1 year ago

Yes, the documentation needs to note this.

631068264 commented 1 year ago
[screenshot]

It works now; upgrading the member clusters was all that was needed.

XiShanYongYe-Chang commented 1 year ago

OK, thanks for sharing.

631068264 commented 1 year ago

One more question: why not use a Service with the same name instead of adding the derived prefix? Can this prefix be changed?

XiShanYongYe-Chang commented 1 year ago

The prefix is there to distinguish the derived Service from the original one. It is currently hardcoded and has not been made configurable.
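The convention is just a fixed string prefix on the original Service name, so consumers in the importing cluster address derived-&lt;name&gt; (a sketch of the naming rule described above; the prefix itself is hardcoded in Karmada):

```shell
# The derived Service name is the hardcoded "derived-" prefix plus the
# original Service name.
svc=serve
derived="derived-${svc}"
echo "${derived}"                             # -> derived-serve
# Full in-cluster DNS name in the importing cluster (default namespace):
echo "${derived}.default.svc.cluster.local"   # -> derived-serve.default.svc.cluster.local
```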

631068264 commented 1 year ago

Can I file a feature request?

XiShanYongYe-Chang commented 1 year ago

For the same-name requirement there is already an issue: #2384. A configurable Service prefix could be done; I'd like to understand your use case first. Also, if same-name Services are achieved, a configurable prefix may no longer be needed.

631068264 commented 1 year ago

If same-name Services are possible, then indeed the prefix wouldn't be needed.

631068264 commented 1 year ago

For https://karmada.io/docs/userguide/service/multi-cluster-ingress the host cluster must also be 1.21+, otherwise multi-cluster-ingress-nginx fails to start with errors. The cluster version requirement really needs to be documented.

[screenshot of the error]

XiShanYongYe-Chang commented 1 year ago

Yes, the MCI feature currently depends on the MCS feature.

13567436138 commented 5 months ago

I generated v1 EndpointSlices, so why is it still not reachable?

Angelica-Sinensis commented 1 month ago

Yes, the MCI feature currently depends on the MCS feature.

If Istio, rather than Submariner, is used to connect the multi-cluster container networks, do the cluster CIDRs still need to be non-overlapping? I saw that karmada-mcs-mci says Submariner's Global IP mode is not yet supported, and the official doc "Working with Istio on non-flat network" does not mention this requirement either, so I'm not sure whether this approach is feasible. I have tried it, but I'm not sure whether my Istio installation is wrong or there is a CIDR conflict: neither the official bookinfo example nor the multi-cluster service discovery example works.

XiShanYongYe-Chang commented 1 month ago

I have no experience using Istio to connect multi-cluster container networks.

But I'm not sure whether my Istio installation is wrong or there is a CIDR conflict

Could you first rule out the CIDR conflict factor, for example by setting different CIDRs for the different clusters?
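Since the clusters in this thread run k3s, non-overlapping CIDRs can be set when starting each server (--cluster-cidr and --service-cidr are real k3s flags; the exact ranges below are illustrative):

```shell
# member1: k3s default-style ranges
k3s server --cluster-cidr=10.42.0.0/16 --service-cidr=10.43.0.0/16

# member2: shifted ranges so the pod and service CIDRs
# do not overlap with member1's
k3s server --cluster-cidr=10.44.0.0/16 --service-cidr=10.45.0.0/16
```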

Angelica-Sinensis commented 1 month ago

I have no experience using Istio to connect multi-cluster container networks.

But I'm not sure whether my Istio installation is wrong or there is a CIDR conflict

Could you first rule out the CIDR conflict factor, for example by setting different CIDRs for the different clusters?

Thank you very much for your help. Following your suggestion, I first ruled out the CIDR factor, and I just got the bookinfo example application mentioned above working. My experiment may be rather crude, but in my setup two member clusters with identical CIDRs also work fine. Thanks again for your help.

XiShanYongYe-Chang commented 1 month ago

@Angelica-Sinensis Thanks for your feedback~