rancher / rancher


Failed to expose pod port #31088

Closed · liangminhua closed this issue 2 years ago

liangminhua commented 3 years ago

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible): create a deployment and expose its container port through the Rancher UI

Result: the pod enters CrashLoopBackOff

Other details that may be helpful:

Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
    field.cattle.io/creatorId: user-66rjn
    field.cattle.io/publicEndpoints: "null"
  creationTimestamp: "2021-01-29T16:05:21Z"
  generation: 6
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: deployment-default-kroki
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/creatorId: {}
          f:field.cattle.io/publicEndpoints: {}
        f:labels:
          .: {}
          f:cattle.io/creator: {}
          f:workload.user.cattle.io/workloadselector: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:workload.user.cattle.io/workloadselector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:annotations:
              .: {}
              f:cattle.io/timestamp: {}
              f:field.cattle.io/ports: {}
            f:labels:
              .: {}
              f:workload.user.cattle.io/workloadselector: {}
          f:spec:
            f:containers:
              k:{"name":"kroki"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":8000,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:name: {}
                    f:protocol: {}
                f:resources: {}
                f:securityContext:
                  .: {}
                  f:allowPrivilegeEscalation: {}
                  f:capabilities: {}
                  f:privileged: {}
                  f:readOnlyRootFilesystem: {}
                  f:runAsNonRoot: {}
                f:stdin: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:tty: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: rancher
    operation: Update
    time: "2021-02-02T01:21:58Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:replicas: {}
        f:unavailableReplicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-02-02T01:27:59Z"
  name: kroki
  namespace: default
  resourceVersion: "30150456"
  selfLink: /apis/apps/v1/namespaces/default/deployments/kroki
  uid: d78a9faa-b229-44cc-93fc-b4db5d39a7ff
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-default-kroki
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2021-02-02T01:21:58Z"
        field.cattle.io/ports: '[[{"containerPort":8000,"dnsName":"kroki","hostPort":0,"kind":"ClusterIP","name":"8000tcp2","protocol":"TCP"}]]'
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: deployment-default-kroki
    spec:
      containers:
      - image: yuzutech/kroki
        imagePullPolicy: Always
        name: kroki
        ports:
        - containerPort: 8000
          name: 8000tcp2
          protocol: TCP
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2021-01-29T16:05:21Z"
    lastUpdateTime: "2021-02-02T01:22:00Z"
    message: ReplicaSet "kroki-774fdcdb98" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2021-02-02T01:27:59Z"
    lastUpdateTime: "2021-02-02T01:27:59Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  observedGeneration: 6
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
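
The field.cattle.io/ports annotation above asks Rancher for a ClusterIP service (dnsName kroki) on port 8000. If the resulting Service is named kroki, Kubernetes injects Docker-link-style variables such as KROKI_SERVICE_HOST, KROKI_SERVICE_PORT and KROKI_PORT=tcp://<cluster-ip>:8000 into the pod environment; whether these collide with the kroki image's own KROKI_-prefixed configuration is an assumption worth checking, not something confirmed in this report. A minimal, hypothetical diagnostic that could be compiled and run inside the container to list those variables:

import java.util.Map;

// Prints every environment variable with the KROKI_ prefix so the Kubernetes
// service-link variables can be compared against the application's own
// configuration keys. Hypothetical helper, not part of the kroki image.
public class ServiceLinkEnvDump {
    public static void main(String[] args) {
        for (Map.Entry<String, String> e : System.getenv().entrySet()) {
            if (e.getKey().startsWith("KROKI_")) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}

(From a shell inside the container, running env would show the same variables.)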

Pod logs

java.lang.ClassCastException: class java.lang.String cannot be cast to class java.lang.Number (java.lang.String and java.lang.Number are in module java.base of loader 'bootstrap') 
    at io.vertx.core.json.JsonObject.getInteger(JsonObject.java:365) 
    at io.kroki.server.Server.start(Server.java:128) 
    at io.kroki.server.Server.lambda$start$1(Server.java:53) 
    at io.vertx.config.impl.ConfigRetrieverImpl.lambda$getConfig$2(ConfigRetrieverImpl.java:182) 
    at io.vertx.config.impl.ConfigRetrieverImpl.lambda$compute$9(ConfigRetrieverImpl.java:296) 
    at io.vertx.core.impl.FutureImpl.dispatch(FutureImpl.java:105) 
    at io.vertx.core.impl.FutureImpl.onComplete(FutureImpl.java:83) 
    at io.vertx.core.impl.CompositeFutureImpl.onComplete(CompositeFutureImpl.java:131) 
    at io.vertx.core.impl.CompositeFutureImpl.onComplete(CompositeFutureImpl.java:25) 
    at io.vertx.core.Future.setHandler(Future.java:126) 
    at io.vertx.core.CompositeFuture.setHandler(CompositeFuture.java:184) 
    at io.vertx.config.impl.ConfigRetrieverImpl.compute(ConfigRetrieverImpl.java:275) 
    at io.vertx.config.impl.ConfigRetrieverImpl.getConfig(ConfigRetrieverImpl.java:175) 
    at io.kroki.server.Server.start(Server.java:49) 
    at io.vertx.core.impl.DeploymentManager.lambda$doDeploy$9(DeploymentManager.java:556) 
    at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366) 
    at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38) 
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) 
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) 
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) 
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
    at java.base/java.lang.Thread.run(Unknown Source) 
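
The trace ends in io.vertx.core.json.JsonObject.getInteger, which casts the stored value to Number. If the configuration assembled by the ConfigRetriever picks up KROKI_PORT as a String rather than a number (for example the tcp://<cluster-ip>:8000 value of a Kubernetes service-link variable, which is an assumption here rather than something confirmed in the issue), that call fails with exactly this exception. A minimal sketch, assuming only vertx-core on the classpath:

import io.vertx.core.json.JsonObject;

public class PortCastRepro {
    public static void main(String[] args) {
        // Assumed shape of the problematic configuration: KROKI_PORT stored as
        // a String in the service-link format instead of a numeric port.
        JsonObject config = new JsonObject()
            .put("KROKI_PORT", "tcp://10.43.0.1:8000");

        // getInteger casts the stored value to Number, so a String value fails
        // with the same ClassCastException shown in the pod logs above.
        Integer port = config.getInteger("KROKI_PORT");
        System.out.println(port);
    }
}

If a service-link collision is indeed the cause, setting enableServiceLinks: false in the pod spec, or naming the workload so the generated Service does not produce KROKI_-prefixed variables, should avoid it.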

Environment information

rancher/rancher 2.5.5

Cluster information

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"archive", BuildDate:"2020-11-25T13:19:56Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.6", GitCommit:"fbf646b339dc52336b55d8ec85c181981b86331a", GitTreeState:"clean", BuildDate:"2020-12-18T12:01:36Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Client: Docker Engine - Community
 Version:           19.03.14
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        5eb3275d40
 Built:             Tue Dec  1 19:20:42 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.14
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       5eb3275d40
  Built:            Tue Dec  1 19:19:17 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

stale[bot] commented 3 years ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.