apache / incubator-seata

:fire: Seata is an easy-to-use, high-performance, open source distributed transaction solution.
https://seata.apache.org/
Apache License 2.0

seata-client cannot register to seata-server (docker) — ...0304 register RM failed. #6915

Open Ghost-Unison opened 4 days ago

Ghost-Unison commented 4 days ago

A beginner asking for help! I am learning the Spring Cloud components and working through the Seata tutorial that uses Seata to control an order-placing transaction (order - account - inventory). My development environment is as follows: on my local machine's Docker (the Docker engine runs under WSL2 on Windows) I run nacos-server (2.4.2) and seata-server (2.1.0), and seata-server has already registered successfully with the Nacos registry. Below are my seata-server configuration file and a screenshot of the configuration published in the Nacos config center.

...
server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${log.home:${user.home}/logs/seata}
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

console:
  user:
    username: seata
    password: seata
seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      #cannot use localhost or 127.0.0.1
      server-addr: host.docker.internal:8848
      namespace:
      group: SEATA_GROUP
      username: nacos
      password: nacos
      context-path:
      data-id: seata-server.properties
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    nacos:
      application: seata-server
      #cannot use localhost or 127.0.0.1
      server-addr: host.docker.internal:8848
      group: SEATA_GROUP
      namespace:
      cluster: default
      username: nacos
      password: nacos
      context-path:
  store:
    # support: file, db, redis, raft
    mode: db
    db:
      datasource: druid
      db-type: mysql
      driver-class-name: com.mysql.jdbc.Driver
      url: jdbc:mysql://XXXXXX/seata-server?rewriteBatchedStatements=true&serverTimezone=Asia/Shanghai
      user: root
      password: XXXXXX
      min-conn: 10
      max-conn: 100
      global-table: global_table
      branch-table: branch_table
      lock-table: lock_table
      distributed-lock-table: distributed_lock
      query-limit: 1000
      max-wait: 5000
  #  server:
  #    service-port: 8091 #If not configured, the default is '${server.port} + 1000'
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.jpeg,/**/*.ico,/api/v1/auth/login,/version.json,/health,/error


Next is my configuration for seata-server in the Nacos config center. I mainly changed the database-related settings and set service.default.grouplist to my seata-server's address; all other settings are the defaults, unmodified.

#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
service.vgroupMapping.default_tx_group=default
#If you use a registry, you can ignore it
#service.default.grouplist=127.0.0.1:8091
service.default.grouplist=172.17.0.3:8091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100

#Transaction storage configuration, only for the server. The file, db, and redis configuration values are optional.
store.mode=file
store.lock.mode=file
store.session.mode=file
#Used for password encryption
store.publicKey=

#If `store.mode,store.lock.mode,store.session.mode` are not equal to `file`, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100

#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://XXXXXX/seata-server?useUnicode=true&rewriteBatchedStatements=true&serverTimezone=Asia/Shanghai
store.db.user=root
store.db.password=XXXXXX
store.db.minConn=10
store.db.maxConn=100
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=1000
store.db.lockTable=lock_table
store.db.maxWait=5000

#These configurations are required if the `store mode` is `redis`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `redis`, you can remove the configuration block.
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.sentinel.sentinelPassword=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

I then split the order, inventory-update, and account operations into three microservices (they run on my local machine for testing and were not deployed to Docker containers). Now I need to start them, register them with Nacos, and register their transactions with seata-server (I am not sure if that is the right way to put it). Here is the configuration file of my order service:

server:
  port: 8180
spring:
  application:
    name: seata-order-service
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848
        group: SEATA_GROUP
      username: nacos
      password: nacos
  datasource:
    url: jdbc:mysql://XXXXXX/seata-order?useUnicode=true&characterEncoding=utf-8&serverTimezone=Asia/Shanghai
    username: root
    password: XXXXXX
logging:
  level:
    io:
      seata: info
mybatis:
  mapperLocations: classpath:mapper/*.xml
seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: default_tx_group
  registry:
    type: nacos
    nacos:
      # should match the service name that seata-server actually registers
      application: seata-server
      server-addr: localhost:8848
      group: SEATA_GROUP
      username: nacos
      password: nacos
  config:
    type: nacos
    nacos:
      server-addr: localhost:8848
      group: SEATA_GROUP
      data-id: seata-server.properties
      username: nacos
      password: nacos

Right after I finished starting this order service, the error appeared. It seems to say that my Seata client cannot register with seata-server:

2024-10-11T13:29:15.284+08:00  INFO 20096 --- [seata-order-service] [           main] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:TMROLE,address:172.17.0.3:8091,msg:< RegisterTMRequest{version='2.0.0', applicationId='seata-order-service', transactionServiceGroup='default_tx_group', extraData='ak=null
digest=default_tx_group,192.168.31.93,1728624555283
timestamp=1728624555283
authVersion=V4
vgroup=default_tx_group
ip=192.168.31.93
'} >
2024-10-11T13:29:15.815+08:00  INFO 20096 --- [seata-order-service] [tor-localhost-8] com.alibaba.nacos.common.remote.client   : [7c8cb769-07cb-4fe0-b856-f033232e0429] Receive server push request, request = NotifySubscriberRequest, requestId = 7
2024-10-11T13:29:15.815+08:00  INFO 20096 --- [seata-order-service] [tor-localhost-8] com.alibaba.nacos.common.remote.client   : [7c8cb769-07cb-4fe0-b856-f033232e0429] Ack server push request, request = NotifySubscriberRequest, requestId = 7
2024-10-11T13:29:25.382+08:00 ERROR 20096 --- [seata-order-service] [           main] i.s.c.r.netty.NettyClientChannelManager  : 0304 register RM failed.

io.seata.common.exception.FrameworkException: can not connect to services-server.
    at io.seata.core.rpc.netty.NettyClientBootstrap.getNewChannel(NettyClientBootstrap.java:182) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.NettyPoolableFactory.makeObject(NettyPoolableFactory.java:58) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.NettyPoolableFactory.makeObject(NettyPoolableFactory.java:34) ~[seata-all-2.0.0.jar:2.0.0]
    at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220) ~[commons-pool-1.6.jar:1.6]
    at io.seata.core.rpc.netty.NettyClientChannelManager.doConnect(NettyClientChannelManager.java:266) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.NettyClientChannelManager.acquireChannel(NettyClientChannelManager.java:113) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.NettyClientChannelManager.reconnect(NettyClientChannelManager.java:176) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.NettyClientChannelManager.reconnect(NettyClientChannelManager.java:234) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.TmNettyRemotingClient.initConnection(TmNettyRemotingClient.java:288) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.core.rpc.netty.TmNettyRemotingClient.init(TmNettyRemotingClient.java:196) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.tm.TMClient.init(TMClient.java:47) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.spring.annotation.GlobalTransactionScanner.initClient(GlobalTransactionScanner.java:224) ~[seata-all-2.0.0.jar:2.0.0]
    at io.seata.spring.annotation.GlobalTransactionScanner.afterPropertiesSet(GlobalTransactionScanner.java:470) ~[seata-all-2.0.0.jar:2.0.0]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1820) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1769) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:599) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:521) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:325) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:323) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:204) ~[spring-beans-6.1.3.jar:6.1.3]
    at org.springframework.context.support.PostProcessorRegistrationDelegate.registerBeanPostProcessors(PostProcessorRegistrationDelegate.java:265) ~[spring-context-6.1.3.jar:6.1.3]
    at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:805) ~[spring-context-6.1.3.jar:6.1.3]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:608) ~[spring-context-6.1.3.jar:6.1.3]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146) ~[spring-boot-3.2.2.jar:3.2.2]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-3.2.2.jar:3.2.2]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:456) ~[spring-boot-3.2.2.jar:3.2.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:334) ~[spring-boot-3.2.2.jar:3.2.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1354) ~[spring-boot-3.2.2.jar:3.2.2]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-3.2.2.jar:3.2.2]
    at com.macro.cloud.SeataOrderServiceApplication.main(SeataOrderServiceApplication.java:15) ~[classes/:na]
Caused by: io.seata.common.exception.FrameworkException: connect failed, can not connect to services-server.
    at io.seata.core.rpc.netty.NettyClientBootstrap.getNewChannel(NettyClientBootstrap.java:177) ~[seata-all-2.0.0.jar:2.0.0]
    ... 30 common frames omitted

2024-10-11T13:29:25.385+08:00 ERROR 20096 --- [seata-order-service] [           main] i.s.c.r.netty.NettyClientChannelManager  : 0101 can not connect to [172.17.0.3:8091] cause:[can not register RM,err:can not connect to services-server.]
2024-10-11T13:29:25.385+08:00  INFO 20096 --- [seata-order-service] [           main] i.s.s.a.GlobalTransactionScanner         : Transaction Manager Client is initialized. applicationId[seata-order-service] txServiceGroup[default_tx_group]
2024-10-11T13:29:25.386+08:00  INFO 20096 --- [seata-order-service] [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient    : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x3f0ce4ab, L:null ! R:/172.17.0.3:8091]) will closed
2024-10-11T13:29:25.390+08:00  INFO 20096 --- [seata-order-service] [           main] c.a.n.client.config.impl.ClientWorker    : [fixed-localhost_8848] [subscribe] transport.enableRmClientBatchSendRequest+SEATA_GROUP
2024-10-11T13:29:25.390+08:00  INFO 20096 --- [seata-order-service] [           main] c.a.nacos.client.config.impl.CacheData   : [fixed-localhost_8848] [add-listener] ok, tenant=, dataId=transport.enableRmClientBatchSendRequest, group=SEATA_GROUP, cnt=1
2024-10-11T13:29:25.390+08:00  INFO 20096 --- [seata-order-service] [           main] c.a.nacos.client.config.impl.CacheData   : [fixed-localhost_8848] [add-listener] ok, tenant=, dataId=transport.enableRmClientBatchSendRequest, group=SEATA_GROUP, cnt=2
2024-10-11T13:29:25.527+08:00  INFO 20096 --- [seata-order-service] [           main] io.seata.rm.datasource.AsyncWorker       : Async Commit Buffer Limit: 10000
...

But I really cannot figure out which configuration step is wrong. I have checked the seata-server service name, group, and address. Why can the client still not connect to seata-server? After I configured the Nacos registry address in the Seata client's configuration file, shouldn't the client be able to fetch the seata-server address from the registry and register directly? Is this related to my deploying seata-server in Docker? Could it be that the seata-server address obtained from the Nacos registry cannot be reached directly by the Seata client running on my host machine?
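One quick way to confirm that suspicion (a sketch, not from the thread) is to test from the host whether the address published in Nacos is actually reachable. 172.17.0.3 is an address on Docker's internal bridge network, which is typically not routable from a Windows host, while the published port (`-p 8091:8091`) should be reachable via localhost:

```shell
# Minimal TCP reachability check (assumes bash with /dev/tcp support
# and the coreutils `timeout` command).
check_port() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device attempts a TCP connect to host:port
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# 172.17.0.3 is the bridge address seata-server registered in Nacos;
# localhost:8091 is the port published by `-p 8091:8091`.
check_port 172.17.0.3 8091   # likely "unreachable" from the Windows host
check_port localhost 8091    # should be "reachable" while the container runs
```

If the first check fails while the second succeeds, the address in the registry is the problem, not seata-server itself.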

funky-eyes commented 3 days ago

Check whether there is a network problem between the client and the IP 172.17.0.3.

slievrly commented 9 hours ago

The issue appears to be that the business side and seata-server do not belong to the same network segment.

Ghost-Unison commented 57 minutes ago

Check whether there is a network problem between the client and the IP 172.17.0.3.

The issue appears to be that the business side and seata-server do not belong to the same network segment.

Thanks for the replies. My order microservice (order-service) runs on my local machine (from IDEA), while Nacos and seata-server both run in Docker (WSL2), so I understand this is indeed a network problem: my host cannot reach 172.17.0.3, so my microservice cannot register with seata-server through that address. Is the seata-server address (172.17.0.3) that my order service obtains from Nacos assigned by Docker? How can I make my order service obtain a reachable seata-server address?

Below are the Docker commands I used to start Nacos and seata-server. docker run --name nacos -d -p 8848:8848 -p 9848:9848 --env MODE=standalone --env NACOS_AUTH_ENABLE=true -e ... -v ... nacos/nacos-server:latest

docker run --name seata-server -d -p 8091:8091 -p 7091:7091 -v ... apache/seata-server:2.1.0.jre17 Does this involve some Docker networking knowledge...

Or is the only option to deploy my order-service to Docker as well so it can reach seata-server? That would make testing rather inconvenient.
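You should not need to move the services into Docker. One common workaround (a sketch based on the environment variables documented for the seata-server Docker image, not something confirmed in this thread) is to tell seata-server which address to register in Nacos via `SEATA_IP`, so clients outside Docker receive a reachable address instead of the internal bridge IP:

```shell
# Hypothetical restart of seata-server that registers a host-reachable address.
# Replace 192.168.31.93 with your host's LAN IP (it appears as `ip=` in the
# RegisterTMRequest log above). SEATA_IP and SEATA_PORT are environment
# variables the seata-server image reads for exactly this cross-network case.
docker run --name seata-server -d \
  -p 8091:8091 -p 7091:7091 \
  -e SEATA_IP=192.168.31.93 \
  -e SEATA_PORT=8091 \
  -v ... \
  apache/seata-server:2.1.0.jre17
```

With this, the entry seata-server publishes in Nacos should be 192.168.31.93:8091, which both the host-side services and other containers (via the published port) can reach; `service.default.grouplist` in the Nacos properties would then need the same address, or can be left unused since the registry is in play.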