openimsdk / helm-charts

Helm charts repository for OpenIM
https://openimsdk.github.io/helm-charts/
Apache License 2.0

build(deps): Bump tj-actions/changed-files from 41.0.1 to 44.0.0 #95

Closed. dependabot[bot] closed this 3 months ago.

dependabot[bot] commented 3 months ago

Bumps tj-actions/changed-files from 41.0.1 to 44.0.0.

Release notes

Sourced from tj-actions/changed-files's releases.

v44.0.0

🔥🔥 BREAKING CHANGE 🔥🔥

Overview

We've made a significant update to how pull requests (PRs) from forked repositories are processed. This improvement not only streamlines the handling of such PRs but also fixes a previously identified issue.

Before the Change

Previously, when you created a pull request from a forked repository, any files changed in the target branch after the PR creation would erroneously appear as part of the PR's changed files. This made it difficult to distinguish between the actual changes introduced by the PR and subsequent changes made directly to the target branch.

What Has Changed

With this update, a pull request from a fork will now only include the files that were explicitly changed in the fork. This ensures that the list of changed files in a PR accurately reflects the contributions from the fork, without being muddled by unrelated changes to the target branch.
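
For context, the sketch below shows how a workflow typically consumes this action once the bump lands; the job layout, step names, and output handling are illustrative and are not taken from this repository's actual workflow files.

```yaml
# Illustrative workflow using tj-actions/changed-files after the v44 bump.
name: changed-files-example
on:
  pull_request:

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v44   # this PR bumps the pin from 41.0.1 to 44.0.0

      - name: List changed files
        run: |
          # With v44, PRs from forks only report files actually changed in the fork.
          for file in ${{ steps.changed-files.outputs.all_changed_files }}; do
            echo "changed: $file"
          done
```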


What's Changed

New Contributors

Full Changelog: https://github.com/tj-actions/changed-files/compare/v43.0.1...v44.0.0

v43.0.1

What's Changed

... (truncated)

Changelog

Sourced from tj-actions/changed-files's changelog.

Changelog

44.0.0 - (2024-03-27)

🐛 Bug Fixes

  • Ensure the fork remote doesn't exist before creating it (#2012) (4bbd49b) - (Tonye Jack)
  • Update previous sha for forks (#2011) (f0e7702) - (Tonye Jack)
  • Update to add the fork remote (#2010) (6354e6c) - (Tonye Jack)
  • Check for setting remote urls for forks (#2009) (1176164) - (Tonye Jack)
  • Bug with PRs from forks returning incorrect set of changed files (#2007) (4ff7936) - (Tonye Jack)

➖ Remove

🔄 Update

  • Updated README.md (#2016) Co-authored-by: jackton1 17484350+jackton1@users.noreply.github.com (2d756ea) - (tj-actions[bot])
  • Update README.md (2d21bbb) - (Tonye Jack)
  • Updated README.md (#2013) Co-authored-by: jackton1 17484350+jackton1@users.noreply.github.com (2ca8dc4) - (tj-actions[bot])

  • Update README.md (4621617) - (tonyejack1)
  • Update README.md (c6557ed) - (Tonye Jack)
  • Update README.md (0713a11) - (Tonye Jack)

⚙️ Miscellaneous Tasks

  • Update description of outputs removing asterisks (#2015) (ce497c3) - (tonyejack1)
  • Update description of other_deleted_files output (#2008) (ee096d6) - (tonyejack1)
  • deps: Update typescript-eslint monorepo to v7.4.0 (0647424) - (renovate[bot])
  • deps: Lock file maintenance (efe5e6c) - (renovate[bot])

⬆️ Upgrades

  • Upgraded to v43.0.1 (#2004) Co-authored-by: jackton1 17484350+jackton1@users.noreply.github.com (01e9662) - (tj-actions[bot])

43.0.1 - (2024-03-20)

🐛 Bug Fixes

  • Remove warning when detecting the local git repository when using GitHub's REST API (#2002) (077b23f) - (Tonye Jack)

📦 Bumps

... (truncated)

Commits


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
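
If maintainers would rather handle bumps like this one declaratively instead of through PR comments, the same ignore behavior can also be expressed in the repository's Dependabot configuration. A minimal sketch, assuming a standard `.github/dependabot.yml` managing GitHub Actions updates; the schedule and the ignore rule are illustrative, not this repository's actual settings:

```yaml
# .github/dependabot.yml (illustrative sketch, not the repository's actual file)
version: 2
updates:
  - package-ecosystem: "github-actions"   # keep workflow actions up to date
    directory: "/"                        # workflows live under .github/workflows
    schedule:
      interval: "weekly"
    ignore:
      # Example: hold back future major bumps of this action until reviewed manually,
      # equivalent to commenting "@dependabot ignore this major version" on each PR.
      - dependency-name: "tj-actions/changed-files"
        update-types: ["version-update:semver-major"]
```
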
kubbot commented 3 months ago

Kubernetes Templates in openim Namespace

openim templates get ./charts/openim-server -f k8s-open-im-server-config.yaml -f config-imserver.yaml ```markdown --- # Source: openim-api/templates/app-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: openim-cm data: config.yaml: |+ api: listenIP: 0.0.0.0 openImApiPort: - 80 callback: addBlackBefore: enable: false failedContinue: true timeout: 5 addFriendAfter: enable: false failedContinue: true timeout: 5 addFriendAgreeBefore: enable: false failedContinue: true timeout: 5 afterCreateGroup: enable: false failedContinue: true timeout: 5 afterGroupMsgRead: enable: false failedContinue: true timeout: 5 afterGroupMsgRevoke: enable: false failedContinue: true timeout: 5 afterJoinGroup: enable: false failedContinue: true timeout: 5 afterSendGroupMsg: enable: false failedContinue: true timeout: 5 afterSendSingleMsg: enable: false failedContinue: true timeout: 5 afterSetFriendRemark: enable: false failedContinue: true timeout: 5 afterSetGroupMemberInfo: enable: false failedContinue: true timeout: 5 afterUpdateUserInfoEx: enable: false failedContinue: true timeout: 5 afterUserRegister: enable: false failedContinue: true timeout: 5 beforeAddFriend: enable: false failedContinue: true timeout: 5 beforeCreateGroup: enable: false failedContinue: true timeout: 5 beforeInviteUserToGroup: enable: false failedContinue: true timeout: 5 beforeMemberJoinGroup: enable: false failedContinue: true timeout: 5 beforeSendGroupMsg: enable: false failedContinue: true timeout: 5 beforeSendSingleMsg: enable: false failedContinue: true timeout: 5 beforeSetFriendRemark: enable: false failedContinue: true timeout: 5 beforeSetGroupMemberInfo: enable: false failedContinue: true timeout: 5 beforeUpdateUserInfo: enable: false failedContinue: true timeout: 5 beforeUpdateUserInfoEx: enable: false failedContinue: true timeout: 5 beforeUserRegister: enable: false failedContinue: true timeout: 5 deleteFriendAfter: enable: false failedContinue: true timeout: 5 dismissGroup: enable: false failedContinue: true timeout: 5 groupMsgRead: enable: false failedContinue: true timeout: 5 importFriendsAfter: enable: false failedContinue: true timeout: 5 importFriendsBefore: enable: false failedContinue: true timeout: 5 joinGroup: enable: false failedContinue: true timeout: 5 joinGroupAfter: enable: false failedContinue: true timeout: 5 killGroupMember: enable: false failedContinue: true timeout: 5 msgModify: enable: false failedContinue: true timeout: 5 offlinePush: enable: false failedContinue: true timeout: 5 onlinePush: enable: false failedContinue: true timeout: 5 quitGroup: enable: false failedContinue: true timeout: 5 removeBlackAfter: enable: false failedContinue: true timeout: 5 revokeMsgAfter: enable: false failedContinue: true timeout: 5 setGroupInfoAfter: enable: false failedContinue: true timeout: 5 setGroupInfoBefore: enable: false failedContinue: true timeout: 5 setMessageReactionExtensions: enable: false failedContinue: true timeout: 5 singleMsgRead: enable: false failedContinue: true timeout: 5 superGroupOnlinePush: enable: false failedContinue: true timeout: 5 transferGroupOwner: enable: false failedContinue: true timeout: 5 updateUserInfo: enable: false failedContinue: true timeout: 5 url: null userKickOff: enable: false failedContinue: true timeout: 5 userOffline: enable: false failedContinue: true timeout: 5 userOnline: enable: false failedContinue: true timeout: 5 chatPersistenceMysql: true chatRecordsClearTime: 0 2 * * 3 envs: discovery: k8s groupMessageHasReadReceiptEnable: true im-admin: nickname: - imAdmin 
userID: - imAdmin iosPush: badgeCount: true production: false pushSound: xxx kafka: addr: - im-kafka:9092 consumerGroupID: msgToMongo: mongo msgToMySql: mysql msgToPush: push msgToRedis: redis latestMsgToRedis: topic: latestMsgToRedis msgToPush: topic: msgToPush offlineMsgToMongo: topic: offlineMsgToMongoMysql password: proot username: root log: isJson: false isStdout: true remainLogLevel: 6 remainRotationCount: 2 rotationTime: 24 storageLocation: ../logs/ withStack: false longConnSvr: openImMessageGatewayPort: - 88 openImWsPort: - 80 websocketMaxConnNum: 100000 websocketMaxMsgLen: 4096 websocketTimeout: 10 manager: nickname: null userID: null messageVerify: friendVerify: false mongo: address: - im-mongodb:27017 database: openim_v3 maxPoolSize: 100 password: openIM123 uri: "" username: openIM msgCacheTimeout: 86400 msgDestructTime: 0 2 * * * multiLoginPolicy: 1 object: apiURL: https://openim1.server.top/api aws: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 endpoint: '''''' publicRead: false region: us-east-1 cos: bucketURL: https://temp-1252357374.cos.ap-chengdu.myqcloud.com publicRead: false secretID: "" secretKey: "" sessionToken: "" enable: minio kodo: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 bucketURL: http://your.domain.com endpoint: http://s3.cn-east-1.qiniucs.com publicRead: false sessionToken: "" minio: accessKeyID: root bucket: openim endpoint: http://im-minio:9000 publicRead: false secretAccessKey: openIM123 sessionToken: "" signEndpoint: https://openim1.server.top/im-minio-api oss: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 bucketURL: https://demo-9999999.oss-cn-chengdu.aliyuncs.com endpoint: https://oss-cn-chengdu.aliyuncs.com publicRead: false sessionToken: "" prometheus: apiPrometheusPort: - 90 authPrometheusPort: - 90 conversationPrometheusPort: - 90 enable: true friendPrometheusPort: - 90 grafanaUrl: https://openim2.server.top/ groupPrometheusPort: - 90 messageGatewayPrometheusPort: - 90 messagePrometheusPort: - 90 messageTransferPrometheusPort: - 90 - 90 - 90 - 90 pushPrometheusPort: - 90 rtcPrometheusPort: - 90 thirdPrometheusPort: - 90 userPrometheusPort: - 90 push: enable: getui fcm: serviceAccount: x.json geTui: appKey: "" channelID: "" channelName: "" intent: "" masterSecret: "" pushUrl: https://restapi.getui.com/v2/$appId jpns: appKey: null masterSecret: null pushIntent: null pushUrl: null redis: address: - im-redis-master:6379 password: openIM123 username: "" retainChatRecords: 365 rpc: listenIP: 0.0.0.0 registerIP: "" rpcPort: openImAuthPort: - 80 openImConversationPort: - 80 openImFriendPort: - 80 openImGroupPort: - 80 openImMessageGatewayPort: - 88 openImMessagePort: - 80 openImPushPort: - 80 openImThirdPort: - 80 openImUserPort: - 80 rpcRegisterName: openImAuthName: openimserver-openim-rpc-auth:80 openImConversationName: openimserver-openim-rpc-conversation:80 openImFriendName: openimserver-openim-rpc-friend:80 openImGroupName: openimserver-openim-rpc-group:80 openImMessageGatewayName: openimserver-openim-msggateway:88 openImMsgName: openimserver-openim-rpc-msg:80 openImPushName: openimserver-openim-push:80 openImThirdName: openimserver-openim-rpc-third:80 openImUserName: openimserver-openim-rpc-user:80 secret: openIM123 singleMessageHasReadReceiptEnable: true tokenPolicy: expire: 90 zookeeper: address: - 172.28.0.1:12181 password: "" schema: openim username: "" notification.yaml: |+ --- # Source: openim-api/charts/openim-msggateway-proxy/templates/service.yaml apiVersion: v1 kind: Service metadata: name: 
release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway/templates/serviceheadless.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway-headless labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name clusterIP: None --- # Source: openim-api/charts/openim-msgtransfer/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msgtransfer labels: helm.sh/chart: openim-msgtransfer-0.1.0 app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-push/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-push labels: helm.sh/chart: openim-push-0.1.0 app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-auth/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-auth labels: helm.sh/chart: openim-rpc-auth-0.1.0 app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name --- # 
Source: openim-api/charts/openim-rpc-conversation/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-conversation labels: helm.sh/chart: openim-rpc-conversation-0.1.0 app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-friend/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-friend labels: helm.sh/chart: openim-rpc-friend-0.1.0 app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-group/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-group labels: helm.sh/chart: openim-rpc-group-0.1.0 app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-msg/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-msg labels: helm.sh/chart: openim-rpc-msg-0.1.0 app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-third/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-third labels: helm.sh/chart: openim-rpc-third-0.1.0 app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-user/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-user labels: helm.sh/chart: openim-rpc-user-0.1.0 app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name --- # Source: openim-api/templates/service.yaml apiVersion: 
v1 kind: Service metadata: name: release-name-openim-api labels: helm.sh/chart: openim-api-0.1.17 app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "3.6.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway-proxy/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msggateway-proxy securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway-proxy:v3.5.0" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP - name: rpc containerPort: 88 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-msgtransfer/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-msgtransfer labels: helm.sh/chart: openim-msgtransfer-0.1.0 app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msgtransfer securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msgtransfer:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-push/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: 
release-name-openim-push labels: helm.sh/chart: openim-push-0.1.0 app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-push securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-push:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-auth/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-auth labels: helm.sh/chart: openim-rpc-auth-0.1.0 app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-auth securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-auth:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-conversation/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-conversation labels: helm.sh/chart: openim-rpc-conversation-0.1.0 app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-conversation securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-conversation:release-v3.6" imagePullPolicy: Always ports: - name: http 
containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-friend/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-friend labels: helm.sh/chart: openim-rpc-friend-0.1.0 app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-friend securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-friend:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-group/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-group labels: helm.sh/chart: openim-rpc-group-0.1.0 app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-group securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-group:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-msg/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment 
metadata: name: release-name-openim-rpc-msg labels: helm.sh/chart: openim-rpc-msg-0.1.0 app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-msg securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-msg:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-third/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-third labels: helm.sh/chart: openim-rpc-third-0.1.0 app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-third securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-third:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-user/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-user labels: helm.sh/chart: openim-rpc-user-0.1.0 app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-user securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-user:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 
protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-api labels: helm.sh/chart: openim-api-0.1.17 app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "3.6.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-api securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-api:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-msggateway/templates/deployment.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: release-name-openim-msggateway labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: serviceName: release-name-openim-msggateway-headless replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msggateway securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP - name: rpc containerPort: 88 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-msggateway-proxy/templates/ingress.yaml apiVersion: 
networking.k8s.io/v1 kind: Ingress metadata: name: release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: "true" spec: ingressClassName: nginx tls: - hosts: - "openim1.server.top" secretName: webapitls rules: - host: "openim1.server.top" http: paths: - path: /msg_gateway(/|$)(.*) pathType: ImplementationSpecific backend: service: name: release-name-openim-msggateway-proxy port: number: 80 --- # Source: openim-api/templates/ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: release-name-openim-api labels: helm.sh/chart: openim-api-0.1.17 app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "3.6.0" app.kubernetes.io/managed-by: Helm annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: "true" spec: ingressClassName: nginx tls: - hosts: - "openim1.server.top" secretName: webapitls rules: - host: "openim1.server.top" http: paths: - path: /api(/|$)(.*) pathType: ImplementationSpecific backend: service: name: release-name-openim-api port: number: 80 ```
openim templates get ./charts/openim-chat -f k8s-chat-server-config.yaml -f config-chatserver.yaml ```markdown --- # Source: admin-api/templates/app-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: imchat-cm data: config.yaml: |+ adminApi: listenIP: null openImAdminApiPort: - 80 adminList: - adminID: null imAdmin: null nickname: null - adminID: null imAdmin: null nickname: null - adminID: null imAdmin: null nickname: null chatAdmin: - adminID: chatAdmin imAdmin: imAdmin nickname: chatAdmin chatApi: listenIP: null openImChatApiPort: - 80 envs: discovery: k8s liveKit: key: "" liveKitUrl: wss://im-livekiturl:7880 secret: "" log: isJson: false isStdout: true remainLogLevel: 6 remainRotationCount: 2 rotationTime: 24 storageLocation: ../logs/ withStack: false mongo: address: - im-mongodb:27017 database: openim_v3 maxPoolSize: 100 password: openIM123 uri: "" username: openIM openIMUrl: http://openimserver-openim-api redis: address: - im-redis-master:6379 password: openIM123 username: null rpc: listenIP: null registerIP: null rpcPort: openImAdminPort: - 80 openImChatPort: - 80 rpcRegisterName: openImAdminName: openimchat-admin-rpc:80 openImChatName: openimchat-chat-rpc:80 secret: openIM123 tokenPolicy: expire: 86400 verifyCode: ali: accessKeyId: "" accessKeySecret: "" endpoint: dysmsapi.aliyuncs.com signName: "" verificationCodeTemplateCode: "" len: 6 mail: senderAuthorizationCode: "" senderMail: "" smtpAddr: smtp.qq.com smtpPort: 465 title: "" maxCount: 10 superCode: "666666" uintTime: 86400 use: "" validCount: 5 validTime: 300 zookeeper: password: "" schema: openim username: "" zkAddr: - 127.0.0.1:12181 --- # Source: admin-api/charts/admin-rpc/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-admin-rpc labels: helm.sh/chart: admin-rpc-0.1.0 app.kubernetes.io/name: admin-rpc app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http selector: app.kubernetes.io/name: admin-rpc app.kubernetes.io/instance: release-name --- # Source: admin-api/charts/chat-api/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-chat-api labels: helm.sh/chart: chat-api-0.1.0 app.kubernetes.io/name: chat-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http selector: app.kubernetes.io/name: chat-api app.kubernetes.io/instance: release-name --- # Source: admin-api/charts/chat-rpc/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-chat-rpc labels: helm.sh/chart: chat-rpc-0.1.0 app.kubernetes.io/name: chat-rpc app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http selector: app.kubernetes.io/name: chat-rpc app.kubernetes.io/instance: release-name --- # Source: admin-api/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-admin-api labels: helm.sh/chart: admin-api-0.1.17 app.kubernetes.io/name: admin-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.6.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http selector: app.kubernetes.io/name: admin-api app.kubernetes.io/instance: release-name --- # Source: 
admin-api/charts/admin-rpc/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-admin-rpc labels: helm.sh/chart: admin-rpc-0.1.0 app.kubernetes.io/name: admin-rpc app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: admin-rpc app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: admin-rpc app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: admin-rpc securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-admin:release-v1.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http resources: {} volumeMounts: - mountPath: /openim/openim-chat/config/config.yaml name: config subPath: config.yaml volumes: - name: config configMap: name: imchat-cm --- # Source: admin-api/charts/chat-api/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-chat-api labels: helm.sh/chart: chat-api-0.1.0 app.kubernetes.io/name: chat-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: chat-api app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: chat-api app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: chat-api securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-chat:release-v1.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http resources: {} volumeMounts: - mountPath: /openim/openim-chat/config/config.yaml name: config subPath: config.yaml volumes: - name: config configMap: name: imchat-cm --- # Source: admin-api/charts/chat-rpc/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-chat-rpc labels: helm.sh/chart: chat-rpc-0.1.0 app.kubernetes.io/name: chat-rpc app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: chat-rpc app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: chat-rpc app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: chat-rpc securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-chat:release-v1.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http resources: {} volumeMounts: - mountPath: /openim/openim-chat/config/config.yaml name: config subPath: config.yaml volumes: - name: config configMap: name: imchat-cm --- # Source: admin-api/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-admin-api labels: helm.sh/chart: admin-api-0.1.17 app.kubernetes.io/name: admin-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.6.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: 
app.kubernetes.io/name: admin-api app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: admin-api app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: admin-api securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-admin:release-v1.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http resources: {} volumeMounts: - mountPath: /openim/openim-chat/config/config.yaml name: config subPath: config.yaml volumes: - name: config configMap: name: imchat-cm --- # Source: admin-api/charts/chat-api/templates/ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: release-name-chat-api labels: helm.sh/chart: chat-api-0.1.0 app.kubernetes.io/name: chat-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: "true" spec: ingressClassName: nginx tls: - hosts: - "openim1.server.top" secretName: webapitls rules: - host: "openim1.server.top" http: paths: - path: /chat(/|$)(.*) pathType: ImplementationSpecific backend: service: name: release-name-chat-api port: number: 80 --- # Source: admin-api/templates/ingress.yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: release-name-admin-api labels: helm.sh/chart: admin-api-0.1.17 app.kubernetes.io/name: admin-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.6.0" app.kubernetes.io/managed-by: Helm annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: "true" spec: ingressClassName: nginx tls: - hosts: - "openim1.server.top" secretName: webapitls rules: - host: "openim1.server.top" http: paths: - path: /complete_admin(/|$)(.*) pathType: ImplementationSpecific backend: service: name: release-name-admin-api port: number: 80 ```
dependabot[bot] commented 3 months ago

Superseded by #96.