apache / incubator-seata

:fire: Seata is an easy-to-use, high-performance, open source distributed transaction solution.
https://seata.apache.org/
Apache License 2.0

seata server 2.0.0: UI login shows "The server successfully returned the requested data." and then nothing happens #6431

Closed. biabiubiu closed this issue 7 months ago

biabiubiu commented 8 months ago

Symptom: [screenshot 2024-03-17 173726] No obvious error log was seen on the server side. [image]

liuqiufeng commented 8 months ago

Please provide specific steps, parameters and environment information for reproduction. Also, can you expand the browser error message to see the detailed error message and response content?

biabiubiu commented 8 months ago

Detailed error information: remove the .txt extension from the attached k8s-master.har.txt and import it into Chrome. k8s-master.har.txt

Environment information:
OS: Linux version 3.10.0-957.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC))
k8s (1 master + 1 node):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:43:11Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}

Docker version: [image]

biabiubiu commented 8 months ago

Below is the YAML. Steps: kubectl apply -f xxx.yaml, nothing else.

apiVersion: v1
kind: Service
metadata:
  name: seata-server
  namespace: ggs-bs-master
  labels:
    app.kubernetes.io/name: seata-server
spec:
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 8091
      targetPort: 8091
      nodePort: 31091
    - name: http-ui
      protocol: TCP
      port: 7091
      targetPort: 7091
      nodePort: 30091
  selector:
    app.kubernetes.io/name: seata-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seata-server
  namespace: ggs-bs-master
  labels:
    app.kubernetes.io/name: seata-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: seata-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: seata-server
    spec:
      containers:
        - name: seata-server
          image: seataio/seata-server:2.0.0
          imagePullPolicy: IfNotPresent
          env:
            - name: SEATA_CONFIG_NAME
              value: file:/root/seata-config/registry
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
          volumeMounts:
            - name: seata-config
              mountPath: /seata-server/resources/application.yml
              subPath: application.yml
      volumes:
        - name: seata-config
          configMap:
            name: seata-server-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: seata-server-config
  namespace: ggs-bs-master
data:
  application.yml: |
    server:
      port: 7091
    spring:
      application:
        name: seata-server
    logging:
      config: classpath:logback-spring.xml
      file:
        path: ${user.home}/logs/seata
      extend:
        logstash-appender:
          destination: 127.0.0.1:4560
        kafka-appender:
          bootstrap-servers: 127.0.0.1:9092
          topic: logback_to_logstash
    console:
      user:
        username: seata
        password: xxxxx
    seata:
      config:
        # support: nacos 、 consul 、 apollo 、 zk  、 etcd3
        type: nacos
        nacos:
          server-addr: nacos-headless:8848
          namespace: seata
          group: xxx
          username: xxx
          password: xxx
          data-id: seata-server.yaml
      registry:
        # support: nacos 、 eureka 、 redis 、 zk  、 consul 、 etcd3 、 sofa
        type: nacos
        nacos:
          application: seata-server
          server-addr: nacos-headless:8848
          group: xxxx
          namespace: xxx
          cluster: default
          username: xxx
          password: xxxx
      security:
        secretKey: xxxxxxxxxxxxx
        tokenValidityInMilliseconds: 1800000
        ignore:
          urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.jpeg,/**/*.ico,/api/v1/auth/login,/metadata/v1/**

liuqiufeng commented 7 months ago

Can you provide a detailed error message from the browser console? Including server-side response information.

ptyin commented 7 months ago

I cannot reproduce the same error. I tested it using the following setup, and it works for me.

apiVersion: v1
kind: Service
metadata:
  name: seata-server
  namespace: default
  labels:
    app.kubernetes.io/name: seata-server
spec:
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 8091
      targetPort: 8091
      nodePort: 31091
    - name: http-ui
      protocol: TCP
      port: 7091
      targetPort: 7091
      nodePort: 30091
  selector:
    app.kubernetes.io/name: seata-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seata-server
  namespace: default
  labels:
    app.kubernetes.io/name: seata-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: seata-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: seata-server
    spec:
      containers:
        - name: seata-server
          image: seataio/seata-server:2.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
[image]

biabiubiu commented 7 months ago

Can you provide a detailed error message from the browser console? Including server-side response information.

Remove the .txt extension and import k8s-master.har.txt into Chrome. k8s-master.har.txt

biabiubiu commented 7 months ago

I cannot reproduce the same error. I tested it using the following setup, and it works for me.

I don't know why. I just executed kubectl delete -f xxx.yaml and kubectl apply -f xxx.yaml, and the problem is still there.

ptyin commented 7 months ago

I don't know why. I just executed kubectl delete -f xxx.yaml and kubectl apply -f xxx.yaml, and the problem is still there.

It is actually kind of weird. I have been discussing this problem with @liuqiufeng. We checked the frontend login logic here:

https://github.com/apache/incubator-seata/blob/9e78cddad0a6f674ea25985d6f09b604f4ffe0a3/console/src/main/resources/static/console-fe/src/utils/request.ts#L59-L73

It prompts an error if and only if the login response status code is not 200 or body.data.code is not "200". However, I viewed your .har log and found that the status code and business code are both as expected:

[screenshots from the .har: status code and business code]

Are you sure you have not provided the wrong log by mistake?
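
For reference, a minimal TypeScript sketch of the check described above (not the actual request.ts code; the endpoint and username/password parameter shape are assumptions based on the security.ignore.urls list in the config above):

// Minimal sketch of the login-failure check described above; the real logic lives in
// console/src/main/resources/static/console-fe/src/utils/request.ts and may differ in detail.
import axios, { AxiosResponse } from 'axios';

interface ApiResult {
  code: string;       // business code; the UI expects "200" on success
  message?: string;
  data?: unknown;
}

async function checkLogin(username: string, password: string): Promise<boolean> {
  // Assumption: the console login endpoint is /api/v1/auth/login with username/password params.
  const response: AxiosResponse<ApiResult> = await axios.post(
    '/api/v1/auth/login',
    null,
    { params: { username, password } },
  );
  // The UI treats the login as failed if and only if one of these two checks fails.
  if (response.status !== 200 || response.data.code !== '200') {
    console.error('login failed:', response.status, response.data.code, response.data.message);
    return false;
  }
  return true;
}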

ptyin commented 7 months ago

Did you configure any middleware or proxy like nginx that may redirect your request (causing it to first respond with 307 or 308 instead of 200)?
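
One rough way to check for such a redirect, run from the browser devtools console on the UI page, is sketched below. It assumes the login endpoint /api/v1/auth/login from the security.ignore.urls list above; the username/password query parameters are illustrative only:

// Rough redirect check for the login request; run in the browser devtools console.
// Endpoint taken from security.ignore.urls above; the parameter shape is an assumption.
const resp = await fetch('/api/v1/auth/login?username=seata&password=seata', { method: 'POST' });
console.log(resp.status, resp.redirected, resp.url); // redirected === true points to a proxy/redirect on the path
console.log(await resp.json());                      // the UI also expects a body whose code field is "200"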

biabiubiu commented 7 months ago

Did you configure any middleware or proxy like nginx that may redirect your request (causing it to first respond with 307 or 308 instead of 200)?

No. ingress-nginx is installed in my k8s cluster, but it is not used. I only use the node IP (inside k8s) + nodePort (of the seata Service in k8s) to make the request.

biabiubiu commented 7 months ago

Did you configure any middleware or proxy like nginx that may redirect your request (causing it to first respond with 307 or 308 instead of 200)?

Just now I re-exposed the nodePort (deleted the NodePort and created it again) using KubeSphere, and it works normally. So strange. Maybe it is a k8s problem...

ptyin commented 7 months ago

Did you configure any middleware or proxy like nginx that may redirect your request (causing it to first respond with 307 or 308 instead of 200)?

Just now I re-exposed the nodePort (deleted the NodePort and created it again) using KubeSphere, and it works normally. So strange. Maybe it is a k8s problem...

Maybe. It is more likely a problem with the network setup. I found that your request goes through a local HTTP proxy (192.168.150.128:80). Maybe you can try switching the proxy off to test whether the error still occurs?

biabiubiu commented 7 months ago

Are you sure you have not provided the wrong log by mistake?

This log is right. I have checked it many times.

biabiubiu commented 7 months ago

Did you configure any middleware or proxy like nginx that may redirect your request (causing it to first respond with 307 or 308 instead of 200)?

Just now I re-exposed the nodePort (deleted the NodePort and created it again) using KubeSphere, and it works normally. So strange. Maybe it is a k8s problem...

Maybe. It is more likely a problem with the network setup. I found that your request goes through a local HTTP proxy (192.168.150.128:80). Maybe you can try switching the proxy off to test whether the error still occurs?

That is SwitchHost. I use it to map a domain; requesting 192.168.150.128:30091 directly gives the same problem.

biabiubiu commented 7 months ago

Thanks, everyone. I think this may not be a Seata problem.