openimsdk / helm-charts

helm charts repository for openim
https://openimsdk.github.io/helm-charts/
Apache License 2.0
14 stars · 10 forks

build(deps): Bump actions/upload-pages-artifact from 2 to 3 #70

Closed · dependabot[bot] closed this PR 6 months ago

dependabot[bot] commented 6 months ago

Bumps actions/upload-pages-artifact from 2 to 3.

Release notes

Sourced from actions/upload-pages-artifact's releases.

v3.0.0

Changelog

To deploy a GitHub Pages site which has been uploaded with this version of actions/upload-pages-artifact, you must also use actions/deploy-pages@v4 or newer.

See details of all code changes since previous release.
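The version pairing called out above can be illustrated with a minimal Pages deploy job (a sketch with hypothetical job and path names, not this repository's actual workflow): because the artifact format changed in v3, `actions/upload-pages-artifact@v3` only works together with `actions/deploy-pages@v4` or newer.

```yaml
# Hypothetical minimal GitHub Pages job; `path` points at the built site.
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      pages: write      # required to deploy to Pages
      id-token: write   # required for OIDC-based deployment
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public   # hypothetical site output directory
      - id: deployment
        uses: actions/deploy-pages@v4   # must be v4+ with upload v3
```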

Commits
  • 0252fc4 Merge pull request #81 from actions/artifacts-next
  • 2a5c144 Use actions/download-artifact@v4 in test
  • 7e3f6bb Merge pull request #80 from robherley/patch-1
  • 257e666 Use v4 upload-artifact tag
  • 0313a19 Merge pull request #78 from konradpabjan/main
  • 1228e65 Update action.yml
  • eb31309 Update artifact names in tests
  • 241a975 Correct artifact name during download
  • ef95519 Unique artifact name per job
  • ecdd3ed Switch to using download@v4-beta
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
sweep-ai[bot] commented 6 months ago

Apply Sweep Rules to your PR?

kubbot commented 6 months ago

Kubernetes Templates in openim Namespace

openim templates get ./charts/openim-server -f k8s-open-im-server-config.yaml -f config-imserver.yaml

```yaml
--- # Source: openim-api/templates/app-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: openim-cm data: config.yaml: |+ api: listenIP: 0.0.0.0 openImApiPort: - 80 callback: afterSendGroupMsg: enable: false timeout: 5 afterSendSingleMsg: enable: false timeout: 5 beforeAddFriend: enable: false failedContinue: true timeout: 5 beforeCreateGroup: enable: false failedContinue: true timeout: 5 beforeMemberJoinGroup: enable: false failedContinue: true timeout: 5 beforeSendGroupMsg: enable: false failedContinue: true timeout: 5 beforeSendSingleMsg: enable: false failedContinue: true timeout: 5 beforeSetGroupMemberInfo: enable: false failedContinue: true timeout: 5 msgModify: enable: false failedContinue: true timeout: 5 offlinePush: enable: false failedContinue: true timeout: 5 onlinePush: enable: false failedContinue: true timeout: 5 setMessageReactionExtensions: enable: false failedContinue: true timeout: 5 superGroupOnlinePush: enable: false failedContinue: true timeout: 5 url: null userKickOff: enable: false timeout: 5 userOffline: enable: false timeout: 5 userOnline: enable: false timeout: 5 chatPersistenceMysql: true chatRecordsClearTime: 0 2 * * 3 envs: discovery: k8s groupMessageHasReadReceiptEnable: true iosPush: badgeCount: true production: false pushSound: xxx kafka: addr: - im-kafka:9092 consumerGroupID: msgToMongo: mongo msgToMySql: mysql msgToPush: push msgToRedis: redis latestMsgToRedis: topic: latestMsgToRedis msgToPush: topic: msgToPush offlineMsgToMongo: topic: offlineMsgToMongoMysql password: proot username: root log: isJson: false isStdout: true remainLogLevel: 6 remainRotationCount: 2 rotationTime: 24 storageLocation: ../logs/ withStack: false longConnSvr: openImMessageGatewayPort: - 88 openImWsPort: - 80 websocketMaxConnNum: 100000 websocketMaxMsgLen: 4096 websocketTimeout: 10 manager: nickname: - system1 - system2 - system3 userID: -
openIM123456 - openIM654321 - openIMAdmin messageVerify: friendVerify: false mongo: address: - im-mongodb:27017 database: openIM_v3 maxPoolSize: 100 password: openIM123 uri: "" username: root msgCacheTimeout: 86400 msgDestructTime: 0 2 * * * multiLoginPolicy: 1 mysql: address: - im-mysql:3306 database: openIM_v3 logLevel: 4 maxIdleConn: 100 maxLifeTime: 60 maxOpenConn: 1000 password: openIM123 slowThreshold: 500 username: root object: apiURL: https://openim1.server.top/api cos: bucketURL: https://temp-1252357374.cos.ap-chengdu.myqcloud.com secretID: "" secretKey: "" sessionToken: "" enable: minio minio: accessKeyID: root bucket: openim endpoint: http://im-minio:9000 secretAccessKey: openIM123 sessionToken: "" signEndpoint: https://openim1.server.top/im-minio-api oss: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 bucketURL: https://demo-9999999.oss-cn-chengdu.aliyuncs.com endpoint: https://oss-cn-chengdu.aliyuncs.com sessionToken: "" prometheus: apiPrometheusPort: - 90 authPrometheusPort: - 90 conversationPrometheusPort: - 90 enable: false friendPrometheusPort: - 90 grafanaUrl: https://openim2.server.top/ groupPrometheusPort: - 90 messageGatewayPrometheusPort: - 90 messagePrometheusPort: - 90 messageTransferPrometheusPort: - 90 - 90 - 90 - 90 pushPrometheusPort: - 90 rtcPrometheusPort: - 90 thirdPrometheusPort: - 90 userPrometheusPort: - 90 push: enable: getui fcm: serviceAccount: x.json geTui: appKey: "" channelID: "" channelName: "" intent: "" masterSecret: "" pushUrl: https://restapi.getui.com/v2/$appId jpns: appKey: null masterSecret: null pushIntent: null pushUrl: null redis: address: - im-redis-master:6379 password: openIM123 username: "" retainChatRecords: 365 rpc: listenIP: 0.0.0.0 registerIP: "" rpcPort: openImAuthPort: - 80 openImConversationPort: - 80 openImFriendPort: - 80 openImGroupPort: - 80 openImMessageGatewayPort: - 88 openImMessagePort: - 80 openImPushPort: - 80 openImThirdPort: - 80 openImUserPort: - 80 rpcRegisterName: openImAuthName: 
openimserver-openim-rpc-auth:80 openImConversationName: openimserver-openim-rpc-conversation:80 openImFriendName: openimserver-openim-rpc-friend:80 openImGroupName: openimserver-openim-rpc-group:80 openImMessageGatewayName: openimserver-openim-msggateway:88 openImMsgName: openimserver-openim-rpc-msg:80 openImPushName: openimserver-openim-push:80 openImThirdName: openimserver-openim-rpc-third:80 openImUserName: openimserver-openim-rpc-user:80 secret: openIM123 singleMessageHasReadReceiptEnable: true tokenPolicy: expire: 90 zookeeper: address: - 172.28.0.1:12181 password: "" schema: openim username: "" notification.yaml: |+ --- # Source: openim-api/charts/openim-msggateway-proxy/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name --- # Source: 
openim-api/charts/openim-msggateway/templates/serviceheadless.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway-headless labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name clusterIP: None --- # Source: openim-api/charts/openim-msgtransfer/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msgtransfer labels: helm.sh/chart: openim-msgtransfer-0.1.0 app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-push/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-push labels: helm.sh/chart: openim-push-0.1.0 app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-auth/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-auth labels: helm.sh/chart: openim-rpc-auth-0.1.0 app.kubernetes.io/name: 
openim-rpc-auth app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-conversation/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-conversation labels: helm.sh/chart: openim-rpc-conversation-0.1.0 app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-friend/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-friend labels: helm.sh/chart: openim-rpc-friend-0.1.0 app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-group/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-group labels: helm.sh/chart: openim-rpc-group-0.1.0 app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 
targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-msg/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-msg labels: helm.sh/chart: openim-rpc-msg-0.1.0 app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-third/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-third labels: helm.sh/chart: openim-rpc-third-0.1.0 app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-user/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-user labels: helm.sh/chart: openim-rpc-user-0.1.0 app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name --- # Source: openim-api/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-api labels: 
helm.sh/chart: openim-api-0.1.3 app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway-proxy/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msggateway-proxy securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway-proxy:v3.5.0" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP - name: rpc containerPort: 88 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-msgtransfer/templates/deployment.yaml 
apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-msgtransfer labels: helm.sh/chart: openim-msgtransfer-0.1.0 app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msgtransfer securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msgtransfer:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-push/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-push labels: helm.sh/chart: openim-push-0.1.0 app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: 
openim-push securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-push:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-auth/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-auth labels: helm.sh/chart: openim-rpc-auth-0.1.0 app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-auth securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-auth:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: 
config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-conversation/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-conversation labels: helm.sh/chart: openim-rpc-conversation-0.1.0 app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-conversation securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-conversation:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-friend/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-friend labels: helm.sh/chart: openim-rpc-friend-0.1.0 app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: 
Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-friend securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-friend:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-group/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-group labels: helm.sh/chart: openim-rpc-group-0.1.0 app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-group securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-group:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: 
# httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-msg/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-msg labels: helm.sh/chart: openim-rpc-msg-0.1.0 app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-msg securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-msg:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-third/templates/deployment.yaml apiVersion: apps/v1 kind: 
Deployment metadata: name: release-name-openim-rpc-third labels: helm.sh/chart: openim-rpc-third-0.1.0 app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-third securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-third:release-v3.5" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-user/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-user labels: helm.sh/chart: openim-rpc-user-0.1.0 app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-user 
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-user:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.3
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-api
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-api:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msggateway/templates/deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-openim-msggateway
  labels:
    helm.sh/chart: openim-msggateway-0.1.0
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: release-name-openim-msggateway-headless
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-msggateway
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-msggateway
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-msggateway
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: rpc
              containerPort: 88
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msggateway-proxy/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-openim-msggateway-proxy
  labels:
    helm.sh/chart: openim-msggateway-proxy-0.1.0
    app.kubernetes.io/name: openim-msggateway-proxy
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /msg_gateway(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-openim-msggateway-proxy
                port:
                  number: 80
---
# Source: openim-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.3
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-openim-api
                port:
                  number: 80
```
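The two Ingress objects above rely on ingress-nginx's regex rewriting: with `use-regex: "true"`, `rewrite-target: /$2` replaces the request path with the second capture group before proxying, so the path prefix (`/api`, `/msg_gateway`) is stripped off. A rough bash sketch of the same capture (bash ERE is close to, though not identical to, the PCRE flavour nginx uses):

```shell
#!/usr/bin/env bash
# Sketch of what rewrite-target: /$2 does to a request path for the
# ingress rule path /api(/|$)(.*). Only the second capture group
# reaches the backend service.
rewrite() {
  local path="$1"
  if [[ "$path" =~ ^/api(/|$)(.*) ]]; then
    echo "/${BASH_REMATCH[2]}"     # $2 = everything after the prefix
  else
    echo "$path"                   # no match: path passes through
  fi
}

rewrite /api/user/find   # -> /user/find
rewrite /api             # -> /
rewrite /healthz         # -> /healthz  (different prefix, untouched)
```

The `(/|$)` alternation is what lets a bare `/api` (no trailing slash) still match and rewrite to `/`.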
`openim templates get ./charts/openim-chat -f k8s-chat-server-config.yaml -f config-chatserver.yaml`

```yaml
---
# Source: admin-api/templates/app-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: imchat-cm
data:
  config.yaml: |+
    adminApi:
      listenIP: null
      openImAdminApiPort:
        - 80
    adminList:
      - adminID: admin1
        imAdmin: openIM123456
        nickname: chat1
      - adminID: admin2
        imAdmin: openIM654321
        nickname: chat2
      - adminID: admin3
        imAdmin: openIMAdmin
        nickname: chat3
    chatApi:
      listenIP: null
      openImChatApiPort:
        - 80
    envs:
      discovery: k8s
    log:
      isJson: false
      isStdout: true
      remainLogLevel: 6
      remainRotationCount: 2
      rotationTime: 24
      storageLocation: ../logs/
      withStack: false
    mysql:
      address:
        - im-mysql:3306
      database: openim_enterprise
      logLevel: 4
      maxIdleConn: 100
      maxLifeTime: 60
      maxOpenConn: 1000
      password: openIM123
      slowThreshold: 500
      username: root
    openIMUrl: http://openimserver-openim-api
    redis:
      address:
        - im-redis-master:6379
      password: openIM123
      username: ""
    rpc:
      listenIP: null
      registerIP: null
    rpcPort:
      openImAdminPort:
        - 80
      openImChatPort:
        - 80
    rpcRegisterName:
      openImAdminName: openimchat-admin-rpc:80
      openImChatName: openimchat-chat-rpc:80
    secret: openIM123
    tokenPolicy:
      expire: 86400
    verifyCode:
      ali:
        accessKeyId: ""
        accessKeySecret: ""
        endpoint: dysmsapi.aliyuncs.com
        signName: ""
        verificationCodeTemplateCode: ""
      len: 6
      maxCount: 10
      superCode: "666666"
      uintTime: 86400
      use: ""
      validCount: 5
      validTime: 300
    zookeeper:
      password: ""
      schema: openim
      username: ""
      zkAddr:
        - 127.0.0.1:12181
---
# Source: admin-api/charts/admin-rpc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-admin-rpc
  labels:
    helm.sh/chart: admin-rpc-0.1.0
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/chat-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/chat-rpc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-chat-rpc
  labels:
    helm.sh/chart: chat-rpc-0.1.0
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.3
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/admin-rpc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-admin-rpc
  labels:
    helm.sh/chart: admin-rpc-0.1.0
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin-rpc
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin-rpc
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: admin-rpc
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-admin:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chat-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: chat-api
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-chat:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-rpc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-chat-rpc
  labels:
    helm.sh/chart: chat-rpc-0.1.0
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chat-rpc
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat-rpc
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: chat-rpc
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-chat:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.3
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: admin-api
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-admin:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /chat(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-chat-api
                port:
                  number: 80
---
# Source: admin-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.3
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /complete_admin(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-admin-api
                port:
                  number: 80
```
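With a rendered dump this long, a quick client-side sanity check before applying anything is to count how many objects of each kind the chart produced. A minimal sketch (the inlined stand-in manifest replaces the real `openim templates get ...` or `helm template ...` output you would normally pipe in):

```shell
#!/usr/bin/env bash
# Count Kubernetes object kinds in a multi-document manifest stream
# without touching a cluster. The heredoc below is a tiny stand-in;
# in practice, substitute the rendered chart output.
manifests=$(cat <<'EOF'
kind: ConfigMap
---
kind: Service
---
kind: Service
---
kind: Deployment
EOF
)

# One line per object, grouped and counted per kind.
echo "$manifests" | grep '^kind:' | sort | uniq -c
```

For the real charts above you would expect one ConfigMap, four Services, four Deployments, and two Ingresses from the chat render.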
sweep-ai[bot] commented 6 months ago
Sweep: Fixing PR. Track the progress here.

I'm currently fixing this PR to address the following:

[Sweep GHA Fix] The GitHub Actions run failed with the following error logs:

```
The command:

Run thollander/actions-comment-pull-request@v2

yielded the following error:

##[error]Parameter token or opts.auth is required

Here are the logs:
```
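The failing step is `thollander/actions-comment-pull-request@v2` running without credentials. As a hedged sketch (not taken from this repo's actual workflow), the action reads its token from a `GITHUB_TOKEN` input, so passing the workflow token explicitly should clear the `Parameter token or opts.auth is required` error:

```yaml
# Hypothetical workflow step, not this repo's file: pass the workflow
# token to the comment action so octokit has auth to call the API.
- uses: thollander/actions-comment-pull-request@v2
  with:
    message: |
      Helm templates rendered successfully.
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Note that `secrets.GITHUB_TOKEN` is not exposed to workflows triggered from forked PRs, which is a common reason this particular error appears only on dependabot or fork branches.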

> [!CAUTION]
> An error has occurred: Cmd('git') failed due to: exit code(128)
> cmdline: `git clone -v --branch=dependabot/github_actions/actions/upload-pages-artifact-3 -- https://*****:*****@github.com/openimsdk/helm-charts.git /tmp/cache/repos/openimsdk/helm-charts/base/dependabot--github_actions--actions--upload-pages-artifact-3`
> stderr: `fatal: could not create work tree dir '/tmp/cache/repos/openimsdk/helm-charts/base/dependabot--github_actions--actions--upload-pages-artifact-3': No space left on device`
> (tracking ID: 3f130fea0b)
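The clone failed because the bot runner's disk filled up, not because of anything in this PR. A minimal sketch of checking and reclaiming space; the `/tmp/cache` path comes from the error message, and the throwaway directory below is a stand-in for it:

```shell
#!/usr/bin/env bash
# Measure and reclaim space in a cache directory. In the real failure
# the culprit is /tmp/cache/repos/...; here a temporary directory
# stands in so the sketch is self-contained.
cache_dir=$(mktemp -d)                       # stand-in for /tmp/cache/repos
dd if=/dev/zero of="$cache_dir/blob" bs=1024 count=64 2>/dev/null

du -sk "$cache_dir" | awk '{print $1}'       # KiB currently used
rm -rf "$cache_dir"                          # reclaim the space
```

On the real runner the equivalent would be `df -h /tmp` to confirm the full filesystem, then pruning stale clones under the cache path.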

sweep-ai[bot] commented 6 months ago
Sweep: Fixing PR. Track the progress here.

I'm currently fixing this PR to address the following:

[Sweep GHA Fix] The GitHub Actions run failed with the following error logs:

```
The command:

Run thollander/actions-comment-pull-request@v2

yielded the following error:

##[error]Parameter token or opts.auth is required

##[group]Run # Install MySQL
# Install MySQL
helm install my-release oci://registry-1.docker.io/bitnamicharts/mysql -f infra/mysql-config.yaml -n openim --create-namespace

# Install Kafka
helm install im-kafka infra/kafka -f infra/kafka-config.yaml -n openim --create-namespace

# Install MinIO
helm install im-minio infra/minio -f infra/minio-config.yaml -n openim --create-namespace

# Install MongoDB
helm install im-mongodb infra/mongodb -f infra/mongodb-config.yaml -n openim --create-namespace

# Install Redis
helm install im-redis infra/redis -f infra/redis-config.yaml -n openim --create-namespace
shell: /usr/bin/bash -e {0}
##[endgroup]
Pulled: registry-1.docker.io/bitnamicharts/mysql:9.16.1
Digest: sha256:3eb456c10d7829936c38c2f11d6b3c408015c64d5d960213e8cf375fdd570465
NAME: my-release
LAST DEPLOYED: Sun Jan 7 19:15:46 2024
NAMESPACE: openim
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.16.1
APP VERSION: 8.0.35

** Please be patient while the chart is being deployed **

Tip: Watch the deployment status using the command:

  kubectl get pods -w --namespace openim

Services:

  echo Primary: my-release-mysql.openim.svc.cluster.local:3306

Execute the following to get the administrator credentials:

  echo Username: root
  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace openim my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)

To connect to your database:

  1. Run a pod that you can use as a client:

     kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image m.daocloud.io/docker.io/bitnami/mysql:8.0.35-debian-11-r2 --namespace openim --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash

  2. To connect to primary service (read/write):

     mysql -h my-release-mysql.openim.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"

Error: INSTALLATION FAILED: failed post-install: timed out waiting for the condition
##[error]Process completed with exit code 1.
##[group]Run sudo kubectl cluster-info
sudo kubectl cluster-info
sudo kubectl get pods -n kube-system
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
shell: /usr/bin/bash -e {0}
##[endgroup]
E0107 19:14:49.691529 8979 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0107 19:14:49.691941 8979 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0107 19:14:49.693393 8979 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0107 19:14:49.693748 8979 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0107 19:14:49.695196 8979 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
##[error]Process completed with exit code 1.

Here are the logs:
```
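`failed post-install: timed out waiting for the condition` means helm's readiness wait expired (its default `--timeout` is 5m, so passing a larger value such as `--timeout 10m` to `helm install` is the usual first fix, assuming the pods are merely slow rather than stuck). The same wait can be expressed as a plain poll loop; the file-based condition below is a stand-in for a real readiness check such as `kubectl get pods`:

```shell
#!/usr/bin/env bash
# Generic poll-until-ready loop with a deadline, mirroring helm's --wait
# behaviour. wait_until DEADLINE_SECONDS CMD ARGS... retries CMD once per
# second and fails with helm's wording when the deadline passes.
wait_until() {
  local deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    if (( $(date +%s) >= deadline )); then
      echo "timed out waiting for the condition" >&2
      return 1
    fi
    sleep 1
  done
}

marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &       # condition becomes true after ~1s
wait_until 10 test -f "$marker" && echo "condition met"
rm -f "$marker"
```

The second failure in the log (`localhost:8080 ... connection refused`) is separate: kubectl fell back to its default server because no kubeconfig was in scope for that step.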

> [!CAUTION]
> An error has occurred: Cmd('git') failed due to: exit code(128)
> cmdline: `git clone -v --branch=dependabot/github_actions/actions/upload-pages-artifact-3 -- https://*****:*****@github.com/openimsdk/helm-charts.git /tmp/cache/repos/openimsdk/helm-charts/base/dependabot--github_actions--actions--upload-pages-artifact-3`
> stderr: `fatal: could not create work tree dir '/tmp/cache/repos/openimsdk/helm-charts/base/dependabot--github_actions--actions--upload-pages-artifact-3': No space left on device`
> (tracking ID: c52f7fd6ca)

dependabot[bot] commented 6 months ago

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired `update_types` to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.