Notailli opened this issue 1 year ago
cc @sjtuzbk
It looks like `higress.io/destination` is misconfigured. If the registry is ZooKeeper, the value should have a `.zookeeper` suffix.
@Notailli Could you wrap your configuration in a code block? The formatting is messy and misaligned.
```
higress.io/destination: providers:com.xkcoding.dubbo.common.service.HelloService:1.0.0:dev.zookeeper
```

Like this?
I'd suggest using the Higress console; you can see this service name in the service list there.
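For reference, a complete Ingress carrying that annotation might look like the sketch below. The interface name, version, and `dev` group are taken from the annotation above; the Ingress name is a placeholder, and the McpBridge backend follows the convention discussed later in this thread:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dubbo-demo          # placeholder name
  namespace: higress-system
  annotations:
    # the registry-type suffix (.zookeeper) is required for a ZooKeeper source
    higress.io/destination: providers:com.xkcoding.dubbo.common.service.HelloService:1.0.0:dev.zookeeper
spec:
  ingressClassName: higress
  rules:
  - http:
      paths:
      - path: /dubbo
        pathType: Prefix
        backend:
          resource:
            apiGroup: networking.higress.io
            kind: McpBridge
            name: default
```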
To implement HTTP-to-Dubbo protocol conversion, do all of these resources need to be configured: VirtualService, ServiceEntry, McpBridge, Ingress, and EnvoyFilter?
I created the VirtualService, ServiceEntry, McpBridge, Ingress, and EnvoyFilter resources in the `test` namespace, which is not the same namespace as the higress-gateway resources. Does that matter?
As I understand it, the VS/SE are not needed. The McpBridge and EnvoyFilter must be in the higress-gateway namespace, and the McpBridge's name must be `default`. I'd suggest managing the service source (i.e., this McpBridge) through the Higress console.
Do the Dubbo services and zk also need to be in the same namespace as the higress-gateway resources?
It doesn't matter where zk and the Dubbo services are deployed. The McpBridge is configured with the zk address, so it can discover zk and then discover the Dubbo service addresses from it.
```yaml
apiVersion: networking.higress.io/v1
kind: McpBridge
metadata:
  name: default
  namespace: higress-system
spec:
  registries:
  - domain: 10.96.215.94
    name: zookeeper
    port: 2181
    type: zookeeper
```
If I deploy zk in the `test` namespace and `domain` is set to the Service IP (also in `test`), can the McpBridge (in higress-system) discover zk across namespaces, or is extra configuration needed?
Yes, it can. Your zk doesn't even need to be deployed in Kubernetes; the IP in `domain` just has to be network-reachable.
```
# curl 10.107.58.2:80/dubbo/hello?name='d' -v
* About to connect() to 10.107.58.2 port 80 (#0)
*   Trying 10.107.58.2...
* Connected to 10.107.58.2 (10.107.58.2) port 80 (#0)
> GET /dubbo/hello?name=d HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.107.58.2
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Wed, 12 Apr 2023 07:16:19 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host 10.107.58.2 left intact
```
It now returns 404. Any suggestions on how to troubleshoot this?
The Higress console doesn't show the `higress.io/destination` information, so I can't confirm whether `higress.io/destination: providers:com.xkcoding.dubbo.common.service.HelloService:1.0.0:dev.zookeeper` is correct.
@Notailli Did you add the zk configuration as a service source in the console? If so, Higress is probably unable to reach your zk address.
`curl higress-gatewayip/dubbo/hello` returns 404.

dubbo 2.0 + nacos 1.3.2 in the `test` namespace
EnvoyFilter, Ingress, and McpBridge in the higress-system namespace
Higress version: v1.0.0-rc

```
higress-gateway   LoadBalancer   10.107.58.2      <pending>   80:30225/TCP,443:31010/TCP
dubbo-provider    ClusterIP      10.105.182.181   <none>      8087/TCP,20880/TCP
nacos             NodePort       10.99.160.181    <none>      8848:30008/TCP,9848:31077/TCP
```
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: http-dubbo-transcoder-test
  namespace: higress-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.http_dubbo_transcoder
        typed_config:
          '@type': type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.http_dubbo_transcoder.v3.HttpDubboTranscoder
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          route:
            name: test
    patch:
      operation: MERGE
      value:
        route:
          upgrade_configs:
          - connect_config:
              allow_post: true
            upgrade_type: CONNECT
        typed_per_filter_config:
          envoy.filters.http.http_dubbo_transcoder:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.http_dubbo_transcoder.v3.HttpDubboTranscoder
            value:
              request_validation_options:
                reject_unknown_method: true
                reject_unknown_query_parameters: true
              services_mapping:
              - group: dev
                method_mapping:
                - name: sayHello
                  parameter_mapping:
                  - extract_key: name
                    extract_key_spec: ALL_QUERY_PARAMETER
                    mapping_type: java.lang.String
                  passthrough_setting:
                    passthrough_all_headers: true
                  path_matcher:
                    match_http_method_spec: ALL_GET
                    match_pattern: /dubbo/hello
                name: com.xiaobai.api.service.SayHelloService
                version: 1.0.0
              url_unescape_spec: ALL_CHARACTERS_EXCEPT_RESERVED
  - applyTo: CLUSTER
    match:
      cluster:
        service: dubbo.static
      context: GATEWAY
    patch:
      operation: MERGE
      value:
        upstream_config:
          name: envoy.upstreams.http.dubbo_tcp
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.upstreams.http.dubbo_tcp.v3.DubboTcpConnectionPoolProto
```
```yaml
apiVersion: networking.higress.io/v1
kind: McpBridge
metadata:
  name: default
  namespace: higress-system
spec:
  registries:
  - domain: 10.99.160.181
    nacosGroups:
    - DEFAULT_GROUP
    name: nacos-service-resource
    port: 8848
    type: nacos
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    higress.io/destination: providers:com.xiaobai.api.service.SayHelloService:1.0.0:dev.DEFAULT-GROUP.public.nacos
  name: demo
  namespace: test
spec:
  ingressClassName: higress
  rules:
  - http:
      paths:
      - backend:
          resource:
            apiGroup: networking.higress.io
            kind: McpBridge
            name: default
        path: /dubbo
        pathType: Prefix
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: test
spec:
  clusterIP: 10.99.160.181
  clusterIPs:
  - 10.99.160.181
  ports:
  - name: tcp-8848
    port: 8848
    protocol: TCP
    targetPort: 8848
  - name: tcp-9848
    port: 9848
    protocol: TCP
    targetPort: 9848
  selector:
    app: nacos
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nacos
  namespace: higress-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nacos
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nacos
    spec:
      containers:
      - env:
        - name: MODE
          value: standalone
        image: nacos/nacos-server:1.3.2
        imagePullPolicy: IfNotPresent
        name: nacos-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: higress-gateway
      serviceAccountName: higress-gateway
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: dubbo-provider-nacos
    system/appName: dubbo-provider-nacos
  name: dubbo-provider-nacos
  namespace: test
spec:
  clusterIP: 10.105.182.185
  clusterIPs:
  - 10.105.182.185
  ports:
  - name: tcp-port-0
    port: 20880
    protocol: TCP
    targetPort: 20880
  selector:
    name: dubbo-provider-nacos
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: dubbo-provider-nacos
    name: dubbo-provider-nacos
    version: v1
  name: dubbo-provider-nacos
  namespace: test
spec:
  minReadySeconds: 10
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: dubbo-provider-nacos
      name: dubbo-provider-nacos
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cni.projectcalico.org/ipv4pools: '["172.31.0.0/16"]'
        sidecar.istio.io/inject: "false"
        system/container-registry-map: '{"dubbo-provider-nacos":"UPID-KGRrbyfedb64"}'
        system/registry: default
        v1.multus-cni.io/default-network: kube-system/calico@eth0
      creationTimestamp: null
      labels:
        app: dubbo-provider-nacos
        name: dubbo-provider-nacos
        version: v1
    spec:
      containers:
      - image: 192.168.90.184/system_containers/dubbo-provider:v1.0.0-010
        imagePullPolicy: IfNotPresent
        name: dubbo-provider
        ports:
        - containerPort: 20880
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: user-1-registrysecret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
```
```
[root@node171 ~]# curl 10.107.58.2:80/dubbo/hello?name='f' -v
* About to connect() to 10.107.58.2 port 80 (#0)
*   Trying 10.107.58.2...
* Connected to 10.107.58.2 (10.107.58.2) port 80 (#0)
> GET /dubbo/hello?name=f HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.107.58.2
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Thu, 13 Apr 2023 02:02:33 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host 10.107.58.2 left intact
```
Could you explain in more detail which parameters in the EnvoyFilter are the key ones to change? Plain HTTP access works, but HTTP-to-Dubbo conversion now returns a 502 Bad Gateway.
@johnlanni I see 0.7.1 included a fix for protocol conversion, but it doesn't seem to have taken effect.
```yaml
- applyTo: HTTP_ROUTE
  match:
    context: GATEWAY
    routeConfiguration:
      vhost:
        route:
          name: test
```

How should I set this `name` in the EnvoyFilter?
This `name` says which route the EnvoyFilter should be applied to, so you should fill in the route's name, and the route's name is the Ingress's name.
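Applied to this thread's setup (where the Ingress is named `demo`), that match block would look roughly like the sketch below:

```yaml
- applyTo: HTTP_ROUTE
  match:
    context: GATEWAY
    routeConfiguration:
      vhost:
        route:
          name: demo   # must equal the name of the Ingress the route comes from
```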
The response code is now 200, but there is no response body, and the Dubbo application throws an exception. What could be causing this?
```
curl 10.107.58.2/dubbo/hello?name=kk -v -H 'host: test.mcp.cn'
* About to connect() to 10.107.58.2 port 80 (#0)
*   Trying 10.107.58.2...
* Connected to 10.107.58.2 (10.107.58.2) port 80 (#0)
> GET /dubbo/hello?name=kk HTTP/1.1
> User-Agent: curl/7.29.0
> Accept: */*
> host: test.mcp.cn
>
< HTTP/1.1 200 OK
< req-cost-time: 0
< req-arrive-time: 1681398317130
< resp-start-time: 1681398317130
< x-envoy-upstream-service-time: 0
< date: Thu, 13 Apr 2023 15:05:16 GMT
< server: istio-envoy
< transfer-encoding: chunked
<
* transfer closed with outstanding read data remaining
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
```
```
2023-04-14 09:35:16.779  WARN 1 --- [ver worker #1-2] c.a.d.remoting.transport.AbstractServer  : [DUBBO] All clients has discontected from /172.31.67.164:20880. You can graceful shutdown now., dubbo version: 2.6.0, current host: 172.31.67.164
2023-04-14 09:35:16.780  INFO 1 --- [:20880-thread-7] c.a.d.rpc.protocol.dubbo.DubboProtocol   : [DUBBO] disconected from /172.31.67.161:54208,url:dubbo://172.31.67.164:20880/com.xiaobai.api.service.SayHelloService?anyhost=true&application=provider&bind.ip=172.31.67.164&bind.port=20880&channel.readonly.sent=true&codec=dubbo&dubbo=2.6.0&generic=false&group=dev&heartbeat=60000&interface=com.xiaobai.api.service.SayHelloService&methods=sayHello&pid=1&revision=0.0.1-SNAPSHOT&side=provider&timestamp=1681435335330&version=1.0.0, dubbo version: 2.6.0, current host: 172.31.67.164
2023-04-14 09:35:16.781  WARN 1 --- [:20880-thread-6] c.a.d.r.t.d.ChannelEventRunnable         : [DUBBO] ChannelEventRunnable handle RECEIVED operation error, channel is NettyChannel [channel=[id: 0x79b1659e, /172.31.67.161:54208 :> /172.31.67.164:20880]], message is Request [id=13, version=2.0.0, twoway=true, event=false, broken=false, data=RpcInvocation [methodName=$invoke, parameterTypes=[class java.lang.String, class [Ljava.lang.String;, class [Ljava.lang.Object;], arguments=[sayHello, [Ljava.lang.String;@7641b1d1, [Ljava.lang.Object;@727e3bc1], attachments={x-envoy-internal=true, x-request-id=8f75f258-8c5c-4e68-a499-32d354b57283, x-forwarded-proto=http, dubbo=2.7.1, x-forwarded-for=192.168.90.171, version=1.0.0, accept=*/*, path=com.xiaobai.api.service.SayHelloService, input=503, :method=GET, :scheme=http, :path=/dubbo/hello?name=kk, x-envoy-decorator-operation=dubbo-provider-nacos.test.svc.cluster.local:20880/dubbo/*, :authority=test.mcp.cn, group=dev, user-agent=curl/7.29.0}]], dubbo version: 2.6.0, current host: 172.31.67.164
com.alibaba.dubbo.remoting.RemotingException: Failed to send message Response [id=13, version=2.0.0, status=20, event=false, error=null, result=RpcResult [result=Hello kk, exception=null]] to /172.31.67.161:54208, cause: null
	at com.alibaba.dubbo.remoting.transport.netty.NettyChannel.send(NettyChannel.java:106) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.transport.AbstractPeer.send(AbstractPeer.java:52) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:169) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:50) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:79) ~[dubbo-2.6.0.jar!/:2.6.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_301]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_301]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_301]
Caused by: java.nio.channels.ClosedChannelException: null
	at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:643) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:370) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.Channels.write(Channels.java:632) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:70) ~[netty-3.2.5.Final.jar!/:na]
	at com.alibaba.dubbo.remoting.transport.netty.NettyHandler.writeRequested(NettyHandler.java:98) ~[dubbo-2.6.0.jar!/:2.6.0]
	at org.jboss.netty.channel.Channels.write(Channels.java:611) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.Channels.write(Channels.java:578) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:251) ~[netty-3.2.5.Final.jar!/:na]
	at com.alibaba.dubbo.remoting.transport.netty.NettyChannel.send(NettyChannel.java:96) ~[dubbo-2.6.0.jar!/:2.6.0]
	... 7 common frames omitted
```
```yaml
match:
  cluster:
    service: dubbo.static
  context: GATEWAY
patch:
  operation: MERGE
  value:
    upstream_config:
      name: envoy.upstreams.http.dubbo_tcp
      typed_config:
        '@type': type.googleapis.com/udpa.type.v1.TypedStruct
        type_url: type.googleapis.com/envoy.extensions.upstreams.http.dubbo_tcp.v3.DubboTcpConnectionPoolProto
```
The `service` here has to be replaced with your own, i.e., the full name of the service your route points to: `providers:com.xiaobai.api.service.SayHelloService:1.0.0:dev.DEFAULT-GROUP.public.nacos`
The config I posted earlier isn't the latest; I've already changed `service` to the full name of the service my route points to. Through HTTP-to-Dubbo I can now reach the interface, but the response throws an exception (see the logs above). From the logs it looks like the reply is being sent back on the wrong port; I only exposed ports 8889 and 20880 on the Service.

```
Failed to send message Response [id=13, version=2.0.0, status=20, event=false, error=null, result=RpcResult [result=Hello kk, exception=null]] to /172.31.67.161:54208
```
```yaml
- applyTo: CLUSTER
  match:
    cluster:
      service: dubbo-provider-nacos.test.svc.cluster.local:20880
    context: GATEWAY
```
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    higress.io/destination: dubbo-provider-nacos.test.svc.cluster.local:20880
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"higress.io/destination":"dubbo-provider-nacos.test.svc.cluster.local:20880"},"name":"demo","namespace":"higress-system"},"spec":{"ingressClassName":"higress","rules":[{"host":"test.mcp.cn","http":{"paths":[{"backend":{"resource":{"apiGroup":"networking.higress.io","kind":"McpBridge","name":"default"}},"path":"/dubbo","pathType":"Prefix"}]}}]}}
  creationTimestamp: "2023-04-13T14:58:41Z"
  generation: 1
  name: demo
  namespace: higress-system
  resourceVersion: "104344426"
  uid: b96bd950-0419-4c56-9b2f-68536ff7eb3e
spec:
  ingressClassName: higress
  rules:
  - host: test.mcp.cn
    http:
      paths:
      - backend:
          resource:
            apiGroup: networking.higress.io
            kind: McpBridge
            name: default
        path: /dubbo
        pathType: Prefix
status:
  loadBalancer: {}
```
Check the `version` and `group` fields in the EnvoyFilter's `services_mapping` entry and make sure they match what the provider actually registers. If you never set them, `group` should be `""` and `version` should be `"0.0.0"`.
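For the default case (a provider that registers without an explicit group or version), the mapping would look roughly like this sketch; the interface, method, and path names are just this thread's example:

```yaml
services_mapping:
- group: ""       # "" when the provider sets no group
  version: 0.0.0  # "0.0.0" when the provider sets no version
  name: com.xiaobai.api.service.SayHelloService
  method_mapping:
  - name: sayHello
    path_matcher:
      match_http_method_spec: ALL_GET
      match_pattern: /dubbo/hello
```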
The interface can be reached through HTTP-to-Dubbo; I can see the output of `System.out.println("----------------Hello " + name);` in the logs, but the response throws an exception.

```
providers:com.xiaobai.api.service.SayHelloService:1.0.0:dev
```
```java
@Component
@Service(version = "1.0.0", group = "dev")
public class SayHelloImpl implements SayHelloService {
    @Override
    public String sayHello(String name) {
        System.out.println("----------------Hello " + name);
        return "Hello " + name;
    }
}
```
```yaml
services_mapping:
- group: dev
  method_mapping:
  - name: sayHello
    parameter_mapping:
    - extract_key: name
      extract_key_spec: ALL_QUERY_PARAMETER
      mapping_type: java.lang.String
    passthrough_setting:
      passthrough_all_headers: true
    path_matcher:
      match_http_method_spec: ALL_GET
      match_pattern: /dubbo/hello
  name: com.xiaobai.api.service.SayHelloService
  version: 1.0.0
url_unescape_spec: ALL_CHARACTERS_EXCEPT_RESERVED
```
```
----------------Hello kk
2023-04-14 14:17:16.282  WARN 1 --- [ver worker #1-1] c.a.d.remoting.transport.AbstractServer  : [DUBBO] All clients has discontected from /172.31.67.164:20880. You can graceful shutdown now., dubbo version: 2.6.0, current host: 172.31.67.164
2023-04-14 14:17:16.284  INFO 1 --- [20880-thread-12] c.a.d.rpc.protocol.dubbo.DubboProtocol   : [DUBBO] disconected from /172.31.67.161:44626,url:dubbo://172.31.67.164:20880/com.xiaobai.api.service.SayHelloService?anyhost=true&application=provider&bind.ip=172.31.67.164&bind.port=20880&channel.readonly.sent=true&codec=dubbo&dubbo=2.6.0&generic=false&group=dev&heartbeat=60000&interface=com.xiaobai.api.service.SayHelloService&methods=sayHello&pid=1&revision=0.0.1-SNAPSHOT&side=provider&timestamp=1681435335330&version=1.0.0, dubbo version: 2.6.0, current host: 172.31.67.164
2023-04-14 14:17:16.284  WARN 1 --- [20880-thread-10] c.a.d.r.t.d.ChannelEventRunnable         : [DUBBO] ChannelEventRunnable handle RECEIVED operation error, channel is NettyChannel [channel=[id: 0x44a3d4b7, /172.31.67.161:44626 :> /172.31.67.164:20880]], message is Request [id=14, version=2.0.0, twoway=true, event=false, broken=false, data=RpcInvocation [methodName=$invoke, parameterTypes=[class java.lang.String, class [Ljava.lang.String;, class [Ljava.lang.Object;], arguments=[sayHello, [Ljava.lang.String;@2e7da1cd, [Ljava.lang.Object;@2a4230da], attachments={x-envoy-internal=true, x-request-id=118c53c4-661b-4eb4-a748-86a15f6096d8, x-forwarded-proto=http, dubbo=2.7.1, x-forwarded-for=192.168.90.171, version=1.0.0, accept=*/*, path=com.xiaobai.api.service.SayHelloService, input=503, :method=GET, :scheme=http, :path=/dubbo/hello?name=kk, x-envoy-decorator-operation=dubbo-provider-nacos.test.svc.cluster.local:20880/dubbo/*, :authority=test.mcp.cn, group=dev, user-agent=curl/7.29.0}]], dubbo version: 2.6.0, current host: 172.31.67.164
com.alibaba.dubbo.remoting.RemotingException: Failed to send message Response [id=14, version=2.0.0, status=20, event=false, error=null, result=RpcResult [result=Hello kk, exception=null]] to /172.31.67.161:44626, cause: null
	at com.alibaba.dubbo.remoting.transport.netty.NettyChannel.send(NettyChannel.java:106) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.transport.AbstractPeer.send(AbstractPeer.java:52) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:169) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:50) ~[dubbo-2.6.0.jar!/:2.6.0]
	at com.alibaba.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:79) ~[dubbo-2.6.0.jar!/:2.6.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_301]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_301]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_301]
Caused by: java.nio.channels.ClosedChannelException: null
	at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:643) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:370) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.Channels.write(Channels.java:632) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:70) ~[netty-3.2.5.Final.jar!/:na]
	at com.alibaba.dubbo.remoting.transport.netty.NettyHandler.writeRequested(NettyHandler.java:98) ~[dubbo-2.6.0.jar!/:2.6.0]
	at org.jboss.netty.channel.Channels.write(Channels.java:611) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.Channels.write(Channels.java:578) ~[netty-3.2.5.Final.jar!/:na]
	at org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:251) ~[netty-3.2.5.Final.jar!/:na]
	at com.alibaba.dubbo.remoting.transport.netty.NettyChannel.send(NettyChannel.java:96) ~[dubbo-2.6.0.jar!/:2.6.0]
```
I can't see the problem from this. Would you mind sharing the service code? I'll run it locally and take a look.
```yaml
- applyTo: CLUSTER
  match:
    cluster:
      service: dubbo-provider-nacos.test.svc.cluster.local:20880  # <---- try removing the port from this line
    context: GATEWAY
```
That works now. Thanks a lot!
String parameters work, but passing user-defined classes, List, and Map doesn't quite work.
```
# curl -X POST -d '{"name":"value1", "age":2}' 10.107.58.2/dubbo/person -v -H 'host: test.mcp.cn'
* About to connect() to 10.107.58.2 port 80 (#0)
*   Trying 10.107.58.2...
* Connected to 10.107.58.2 (10.107.58.2) port 80 (#0)
> POST /dubbo/person HTTP/1.1
> User-Agent: curl/7.29.0
> Accept: */*
> host: test.mcp.cn
> Content-Length: 26
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 26 out of 26 bytes
< HTTP/1.1 200 OK
< date: Fri, 14 Apr 2023 10:04:48 GMT
< server: istio-envoy
< transfer-encoding: chunked
<
* transfer closed with outstanding read data remaining
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
```
Could you share the contents of the EnvoyFilter?
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: http-dubbo-transcoder-test
  namespace: higress-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.http_dubbo_transcoder
        typed_config:
          '@type': type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.http_dubbo_transcoder.v3.HttpDubboTranscoder
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          route:
            name: demo
    patch:
      operation: MERGE
      value:
        route:
          upgrade_configs:
          - connect_config:
              allow_post: true
            upgrade_type: CONNECT
        typed_per_filter_config:
          envoy.filters.http.http_dubbo_transcoder:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.http_dubbo_transcoder.v3.HttpDubboTranscoder
            value:
              request_validation_options:
                reject_unknown_method: true
                reject_unknown_query_parameters: true
              services_mapping:
              - group: dev
                method_mapping:
                - name: sayHello
                  parameter_mapping:
                  - extract_key: name
                    extract_key_spec: ALL_QUERY_PARAMETER
                    mapping_type: java.lang.String
                  passthrough_setting:
                    passthrough_all_headers: true
                  path_matcher:
                    match_http_method_spec: ALL_GET
                    match_pattern: /dubbo/hello
                - name: echoList
                  parameter_mapping:
                  - extract_key: input
                    extract_key_spec: ALL_BODY
                    mapping_type: java.util.List
                  passthrough_setting:
                    passthrough_all_headers: true
                  path_matcher:
                    match_http_method_spec: ALL_POST
                    match_pattern: /dubbo/list
                - name: echoMap
                  parameter_mapping:
                  - extract_key: input
                    extract_key_spec: ALL_BODY
                    mapping_type: java.util.Map
                  passthrough_setting:
                    passthrough_all_headers: true
                  path_matcher:
                    match_http_method_spec: ALL_POST
                    match_pattern: /dubbo/map
                - name: echoPerson
                  parameter_mapping:
                  - extract_key: input
                    extract_key_spec: ALL_BODY
                    mapping_type: com.xiaobai.api.pojo.Person
                  passthrough_setting:
                    passthrough_all_headers: true
                  path_matcher:
                    match_http_method_spec: ALL_POST
                    match_pattern: /dubbo/person
                name: com.xiaobai.api.service.SayHelloService
                version: 1.0.0
              url_unescape_spec: ALL_CHARACTERS_EXCEPT_RESERVED
  - applyTo: CLUSTER
    match:
      cluster:
        service: dubbo-provider-nacos.test.svc.cluster.local
      context: GATEWAY
    patch:
      operation: MERGE
      value:
        upstream_config:
          name: envoy.upstreams.http.dubbo_tcp
          typed_config:
            '@type': type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.upstreams.http.dubbo_tcp.v3.DubboTcpConnectionPoolProto
```
An `ALL_BODY` mapping reads the field of the body JSON whose name is the `extract_key`, so judging from the request above, they won't match. If you need the whole body, leave `extract_key` empty. You'll need to rethink how to configure this.
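Concretely, based on this thread's `/dubbo/person` mapping, one option is the sketch below: leave `extract_key` empty so the whole JSON body maps to the `Person` parameter. (This is a hypothetical rewrite of the mapping above, not a verified config.)

```yaml
- name: echoPerson
  parameter_mapping:
  - extract_key: ""        # empty key = map the entire JSON body
    extract_key_spec: ALL_BODY
    mapping_type: com.xiaobai.api.pojo.Person
  passthrough_setting:
    passthrough_all_headers: true
  path_matcher:
    match_http_method_spec: ALL_POST
    match_pattern: /dubbo/person
```

The alternative is to keep `extract_key: input` and nest the payload under that key in the request body, e.g. `{"input": {"name":"value1", "age":2}}`.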
Basic configuration
McpBridge
Ingress
EnvoyFilter
ServiceEntry
VirtualService
Service addresses of the dubbo service, zk, and higress-gateway
Testing