Closed lx308033262 closed 5 months ago
Yes. I will complete the documentation for this use case. You can follow the steps below to try it out. First, make sure the system property is enabled:
nacos.istio.mcp.server.enabled=true
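Besides the enable switch, the MCP server port and push interval can also be set in Nacos' conf/application.properties. The values below match the ones shown later in this thread (18848 is the default MCP port used in the example):

```properties
# Enable the built-in Istio MCP-over-XDS server
nacos.istio.mcp.server.enabled=true
# Port the MCP server listens on (referenced by configSources as xds://<nacos ip>:18848)
nacos.istio.mcp.server.port=18848
# Push interval in milliseconds
nacos.istio.mcp.push.interval=3000
```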
Then update Istio's mesh config: you need to add configSources to it. For example:
configSources:
- address: xds://<your nacos ip>:18848
- address: k8s://
You should restart Istio whenever you update the mesh config.
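On a standard Kubernetes install, the mesh config lives in the istio ConfigMap in istio-system. A minimal sketch of where configSources sits in that ConfigMap (field layout assumed from a default install; replace the placeholder address with your Nacos IP):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    configSources:
    # MCP-over-XDS source served by Nacos
    - address: xds://<your nacos ip>:18848
    # Keep watching in-cluster Istio CRDs and k8s services
    - address: k8s://
```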
thanks!
According to the implementation of istiod, when configSources is set explicitly, we must add the k8s source manually if we also want istiod to watch Istio CRDs or k8s services. For the k8s source, just add k8s://.
The relevant istiod code is the following.
// fs:///PATH will load local files. This replaces --configDir.
// example fs:///tmp/configroot
// PATH can be mounted from a config map or volume
File ConfigSourceAddressScheme = "fs"

// xds://ADDRESS - load XDS-over-MCP sources
// example xds://127.0.0.1:49133
XDS ConfigSourceAddressScheme = "xds"

// k8s:// - load in-cluster k8s controller
// example k8s://
Kubernetes ConfigSourceAddressScheme = "k8s"
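To make the scheme matching concrete, here is a small standalone Go sketch (illustrative only, not istiod's actual parsing code) that splits a configSources address into its scheme and remainder, following the constants above:

```go
package main

import (
	"fmt"
	"strings"
)

// ConfigSourceAddressScheme mirrors the istiod type shown above.
type ConfigSourceAddressScheme string

const (
	File       ConfigSourceAddressScheme = "fs"
	XDS        ConfigSourceAddressScheme = "xds"
	Kubernetes ConfigSourceAddressScheme = "k8s"
)

// parseAddress splits an address such as "xds://10.0.0.5:18848" into its
// scheme and the part after "://". An address with no scheme separator is
// returned unchanged with an empty scheme.
func parseAddress(addr string) (ConfigSourceAddressScheme, string) {
	parts := strings.SplitN(addr, "://", 2)
	if len(parts) != 2 {
		return "", addr
	}
	return ConfigSourceAddressScheme(parts[0]), parts[1]
}

func main() {
	for _, a := range []string{"xds://10.0.0.5:18848", "k8s://", "fs:///tmp/configroot"} {
		scheme, rest := parseAddress(a)
		fmt.Printf("%s -> scheme=%q rest=%q\n", a, scheme, rest)
	}
}
```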
I have already done this, but no ServiceEntries show up. Did I miss something?
[root@kubemaster deploy]# kubectl exec -t nacos-0 cat conf/application.properties -n test | grep istio
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
nacos.istio.mcp.server.enabled=true
nacos.istio.mcp.server.port=18848
nacos.istio.mcp.push.interval=3000
[root@kubemaster deploy]# kubectl describe pod nacos-0 -n test | grep -i image:
Image: nacos/nacos-server:latest
[root@kubemaster deploy]# kubectl get cm istio -n istio-system -o yaml
apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      proxyMetadata:
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        ISTIO_META_DNS_CAPTURE: "true"
    trustDomain: cluster.local
    configSources:
[root@kubemaster deploy]# kubectl get svc -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.96.173.118
[root@kubemaster deploy]# kubectl exec -it nacos-0 bash -n test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@nacos-0 nacos]# curl "127.0.0.1:8848/nacos/v1/ns/service/list?pageNo=1&pageSize=20" | python -m json.tool
{
    "count": 20,
    "doms": [
        "consumers:com.kuaikuai.kifs.company.api.gym.CoGymService::",
        "consumers:com.kuaikuai.kifs.company.api.device.CoDeviceLoginService::",
        "consumers:com.kuaikuai.kifs.company.api.company.CoCompanyService::",
        "consumers:com.kuaikuai.kifs.calculate.api.selfcheck.SelfcheckNetReportService::",
        "providers:com.kuaikuai.kifs.collect.api.ColSelfcheckNetRecordService::",
        "consumers:com.kuaikuai.kifs.live.api.LiveClassService::",
        "consumers:com.kuaikuai.kifs.base.api.selfcheck.SelfcheckConfigService::",
        "consumers:com.kuaikuai.kifs.collect.api.MonitorMetadataService::",
        "providers:com.kuaikuai.kifs.collect.api.MonitorDataRaspiService::",
        "consumers:com.kuaikuai.kifs.live.api.LiveUrlService::",
        "providers:com.kuaikuai.kifs.collect.api.MonitorMetadataService::",
        "consumers:com.kuaikuai.kifs.collect.api.MonitorDataRaspiService::",
        "consumers:com.kuaikuai.kifs.collect.api.GracefulShutdownTestService::",
        "providers:com.kuaikuai.kifs.collect.api.GracefulShutdownTestService::",
        "consumers:com.kuaikuai.kifs.calculate.api.selfcheck.SelfcheckDSReportService::",
        "consumers:com.kuaikuai.kifs.course.api.GmClassService::",
        "consumers:com.kuaikuai.kifs.company.api.device.CoDeviceService::",
        "providers:com.kuaikuai.kifs.collect.api.MonitorDataService::",
        "providers:com.kuaikuai.kifs.collect.api.ColMonitorTraceService::",
        "consumers:com.kuaikuai.kifs.base.api.sys.SysAppUpgradeService::"
    ]
}
Istio gets service entries from Nacos via MCP-over-XDS. The service entries are only stored in memory; they are not created in the k8s cluster. You can query service entries with the following command.
kubectl exec -it istiod -n istio-system -- curl localhost:15014/debug/configz | grep ServiceEntry
[root@kubemaster deploy]# istioctl proxy-config clusters kifs-gateway-selfcheck-deployment-7cd57b96df-nxhv8 -n test|grep mysql
mysql.default.svc.cluster.local 3306 - outbound EDS
mysql.test.svc.cluster.local 3306 - outbound EDS
[root@kubemaster deploy]# kubectl exec -it istiod-c585d8cdc-hvqx5 -n istio-system -- curl localhost:15014/debug/configz | grep ServiceEntry
[root@kubemaster deploy]# istioctl proxy-status
NAME                                                      CDS     LDS     EDS     RDS       ISTIOD                   VERSION
istio-egressgateway-6c4474966b-srrl4.istio-system         SYNCED  SYNCED  SYNCED  NOT SENT  istiod-c585d8cdc-hvqx5   1.10.6
istio-ingressgateway-ff6cfb6cb-jbdk6.istio-system         SYNCED  SYNCED  SYNCED  NOT SENT  istiod-c585d8cdc-hvqx5   1.10.6
kifs-collect-deployment-5bfd95844-cjghh.test              SYNCED  SYNCED  SYNCED  SYNCED    istiod-c585d8cdc-hvqx5   1.10.6
kifs-gateway-selfcheck-deployment-7cd57b96df-nxhv8.test   SYNCED  SYNCED  SYNCED  SYNCED    istiod-c585d8cdc-hvqx5   1.10.6
The k8s services seem to be OK? The Nacos log shows:
[root@nacos-0 logs]# tail -f istio-main.log -n50
metadata {
  name: "nacos/providers:com.kuaikuai.kifs.collect.api.MonitorDataRaspiService::.DEFAULT-GROUP.public"
  create_time { seconds: 1651720409 }
  annotations { key: "virtual" value: "1" }
}
body {
  type_url: "type.googleapis.com/istio.networking.v1alpha3.ServiceEntry"
  value: "..."   (binary-encoded ServiceEntry payload omitted)
}
resources {
  metadata {
    name: "nacos/providers:com.kuaikuai.kifs.collect.api.ColLivePoorNetworkService::.DEFAULT-GROUP.public"
    create_time { seconds: 1651720409 }
    annotations { key: "virtual" value: "1" }
  }
  body {
    type_url: "type.googleapis.com/istio.networking.v1alpha3.ServiceEntry"
    value: "..."   (binary-encoded ServiceEntry payload omitted)
  }
}
resources {
  metadata {
    name: "nacos/providers:com.kuaikuai.kifs.collect.api.ColMonitorTraceService::.DEFAULT-GROUP.public"
    create_time { seconds: 1651720409 }
    annotations { key: "virtual" value: "1" }
  }
  body {
    type_url: "type.googleapis.com/istio.networking.v1alpha3.ServiceEntry"
    value: "..."   (binary-encoded ServiceEntry payload omitted)
  }
}
nonce: "1651720469957"
I found the ServiceEntry, but why is its namespace always nacos? I don't know how to set it, or whether it comes from the Nacos namespace; my Nacos namespace is default, yet the ServiceEntry namespace is nacos. I found the buildServiceEntry code in Nacos, and there is no namespace setting.
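One note on why the namespace shows up as nacos: MCP resource names have the form "<namespace>/<name>", and the log above shows Nacos naming every pushed resource "nacos/<service>...", so the receiver treats "nacos" as the config namespace. A minimal Go sketch of that split (illustrative only, not the actual istiod or Nacos code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitResourceName splits an MCP resource name "<namespace>/<name>":
// everything before the first '/' becomes the config namespace. Since
// Nacos prefixes every pushed resource with "nacos/", the resulting
// ServiceEntry namespace is always "nacos".
func splitResourceName(full string) (namespace, name string) {
	i := strings.Index(full, "/")
	if i < 0 {
		return "", full
	}
	return full[:i], full[i+1:]
}

func main() {
	ns, name := splitResourceName("nacos/providers:com.kuaikuai.kifs.collect.api.MonitorDataRaspiService::.DEFAULT-GROUP.public")
	fmt.Println(ns) // prints "nacos"
	fmt.Println(name)
}
```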
Thanks for your feedback and contribution. This issue/pull request has had no recent activity for more than 180 days and will be closed if no further activity occurs within 7 days. We may have solved this issue in a newer version, so please upgrade to the newest version and retry. If there are still issues, or you want to contribute again, please create a new issue or pull request.
I want to use Istio with Nacos, but I can't find any instances.