openimsdk / helm-charts

Helm charts repository for OpenIM
https://openimsdk.github.io/helm-charts/
Apache License 2.0
14 stars, 10 forks

build(deps): Bump tj-actions/changed-files from 41.0.1 to 44.4.0 #100

Closed. dependabot[bot] closed this pull request 1 month ago.

dependabot[bot] commented 2 months ago

Bumps tj-actions/changed-files from 41.0.1 to 44.4.0.

Release notes

Sourced from tj-actions/changed-files's releases.

v44.4.0

What's Changed

Full Changelog: https://github.com/tj-actions/changed-files/compare/v44.3.0...v44.4.0

v44.3.0

What's Changed

Full Changelog: https://github.com/tj-actions/changed-files/compare/v44.2.0...v44.3.0

v44.2.0

What's Changed

Full Changelog: https://github.com/tj-actions/changed-files/compare/v44.1.0...v44.2.0

... (truncated)
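In practice, the bump is a one-line change wherever the action is referenced. A minimal workflow sketch (the workflow, job, and step names are illustrative, not taken from this repository):

```yaml
# .github/workflows/changed-files.yml (illustrative)
name: Detect changed files
on: [pull_request]

jobs:
  changed-files:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Before this PR: tj-actions/changed-files@v41.0.1
      - name: Get changed files
        id: changed
        uses: tj-actions/changed-files@v44.4.0

      # all_changed_files is a documented output of the action
      - name: List changed files
        run: |
          for f in ${{ steps.changed.outputs.all_changed_files }}; do
            echo "changed: $f"
          done
```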

Changelog

Sourced from tj-actions/changed-files's changelog.

Changelog

44.4.0 - (2024-05-08)

🚀 Features

  • Reduce the default fetch_depth from 50 to 25 and increase fetch_missing_history_max_retries (#2060) (44ce9f4) - (Tonye Jack)
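The new default can still be overridden per workflow step; a hedged sketch of the relevant input (the value shown is an example, not a recommendation):

```yaml
# Illustrative step, not from this repository's workflows.
- name: Get changed files
  uses: tj-actions/changed-files@v44.4.0
  with:
    # v44.4.0 lowers the default fetch_depth from 50 to 25;
    # set it explicitly if the previous depth is needed.
    fetch_depth: 50
```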

🐛 Bug Fixes

  • deps: Update dependency @octokit/rest to v20.1.1 (396e5a5) - (renovate[bot])
  • deps: Update dependency yaml to v2.4.2 (1c5b7dc) - (renovate[bot])

➕ Add

  • Added missing changes and modified dist assets. (c393672) - (GitHub Action)
  • Added missing changes and modified dist assets. (15fa7fb) - (GitHub Action)

🔄 Update

  • Updated README.md (#2068) (Co-authored-by: renovate[bot]) (0c82494) - (tj-actions[bot])
  • Updated README.md (#2061) (Co-authored-by: jackton1 17484350+jackton1@users.noreply.github.com) (cee950d) - (tj-actions[bot])
  • Updated README.md (#2059) (Co-authored-by: jackton1 17484350+jackton1@users.noreply.github.com) (7b65c37) - (tj-actions[bot])
  • Update action.yml (532b66a) - (Tonye Jack)
  • Updated README.md (#2057) (Co-authored-by: jackton1 17484350+jackton1@users.noreply.github.com) (461ea4f) - (tj-actions[bot])

⚙️ Miscellaneous Tasks

  • deps: Update dependency @types/node to v20.12.11 (a29e8b5) - (renovate[bot])
  • deps: Update codacy/codacy-analysis-cli-action action to v4.4.1 (5a12705) - (renovate[bot])
  • deps: Update dependency @types/node to v20.12.10 (5819343) - (renovate[bot])
  • deps: Update dependency @types/node to v20.12.9 (5587afb) - (renovate[bot])
  • deps: Lock file maintenance (0f039f3) - (renovate[bot])
  • deps: Update tj-actions/verify-changed-files action to v20 (#2079) (6d4230d) - (renovate[bot])
  • deps: Update dependency @types/lodash to v4.17.1 (1711887) - (renovate[bot])
  • deps: Update dependency eslint-plugin-jest to v28.5.0 (47a2d62) - (renovate[bot])
  • deps: Update dependency eslint-plugin-jest to v28.4.0 (c73b12c) - (renovate[bot])
  • deps-dev: Bump @types/node from 20.12.7 to 20.12.8 (#2074) (41ce994) - (dependabot[bot])
  • deps: Lock file maintenance (192e174) - (renovate[bot])
  • deps: Update typescript-eslint monorepo to v7.8.0 (5e85e31) - (renovate[bot])

... (truncated)

Commits
  • a29e8b5 chore(deps): update dependency @types/node to v20.12.11
  • 5a12705 chore(deps): update codacy/codacy-analysis-cli-action action to v4.4.1
  • 5819343 chore(deps): update dependency @types/node to v20.12.10
  • 5587afb chore(deps): update dependency @types/node to v20.12.9
  • 0f039f3 chore(deps): lock file maintenance
  • 6d4230d chore(deps): update tj-actions/verify-changed-files action to v20 (#2079)
  • 1711887 chore(deps): update dependency @types/lodash to v4.17.1
  • c393672 Added missing changes and modified dist assets.
  • 396e5a5 fix(deps): update dependency @octokit/rest to v20.1.1
  • 47a2d62 chore(deps): update dependency eslint-plugin-jest to v28.5.0
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
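PRs like this one are generated from a Dependabot configuration for the `github-actions` ecosystem; a minimal sketch (the weekly schedule is an assumption, not necessarily what this repository uses):

```yaml
# .github/dependabot.yml (illustrative)
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```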


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
kubbot commented 2 months ago

Kubernetes Templates in openim Namespace

openim templates get ./charts/openim-server -f k8s-open-im-server-config.yaml -f config-imserver.yaml

--- # Source: openim-api/templates/app-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: openim-cm data: config.yaml: |+ api: listenIP: 0.0.0.0 openImApiPort: - 80 callback: addBlackBefore: enable: false failedContinue: true timeout: 5 addFriendAfter: enable: false failedContinue: true timeout: 5 addFriendAgreeBefore: enable: false failedContinue: true timeout: 5 afterCreateGroup: enable: false failedContinue: true timeout: 5 afterGroupMsgRead: enable: false failedContinue: true timeout: 5 afterGroupMsgRevoke: enable: false failedContinue: true timeout: 5 afterJoinGroup: enable: false failedContinue: true timeout: 5 afterSendGroupMsg: enable: false failedContinue: true timeout: 5 afterSendSingleMsg: enable: false failedContinue: true timeout: 5 afterSetFriendRemark: enable: false failedContinue: true timeout: 5 afterSetGroupMemberInfo: enable: false failedContinue: true timeout: 5 afterUpdateUserInfoEx: enable: false failedContinue: true timeout: 5 afterUserRegister: enable: false failedContinue: true timeout: 5 beforeAddFriend: enable: false failedContinue: true timeout: 5 beforeCreateGroup: enable: false failedContinue: true timeout: 5 beforeInviteUserToGroup: enable: false failedContinue: true timeout: 5 beforeMemberJoinGroup: enable: false failedContinue: true timeout: 5 beforeSendGroupMsg: enable: false failedContinue: true timeout: 5 beforeSendSingleMsg: enable: false failedContinue: true timeout: 5 beforeSetFriendRemark: enable: false failedContinue: true timeout: 5 beforeSetGroupMemberInfo: enable: false failedContinue: true timeout: 5 beforeUpdateUserInfo: enable: false failedContinue: true timeout: 5 beforeUpdateUserInfoEx: enable: false failedContinue: true timeout: 5 beforeUserRegister: enable: false failedContinue: true timeout: 5 deleteFriendAfter: enable: false failedContinue: true timeout: 5 dismissGroup: enable: false 
failedContinue: true timeout: 5 groupMsgRead: enable: false failedContinue: true timeout: 5 importFriendsAfter: enable: false failedContinue: true timeout: 5 importFriendsBefore: enable: false failedContinue: true timeout: 5 joinGroup: enable: false failedContinue: true timeout: 5 joinGroupAfter: enable: false failedContinue: true timeout: 5 killGroupMember: enable: false failedContinue: true timeout: 5 msgModify: enable: false failedContinue: true timeout: 5 offlinePush: enable: false failedContinue: true timeout: 5 onlinePush: enable: false failedContinue: true timeout: 5 quitGroup: enable: false failedContinue: true timeout: 5 removeBlackAfter: enable: false failedContinue: true timeout: 5 revokeMsgAfter: enable: false failedContinue: true timeout: 5 setGroupInfoAfter: enable: false failedContinue: true timeout: 5 setGroupInfoBefore: enable: false failedContinue: true timeout: 5 setMessageReactionExtensions: enable: false failedContinue: true timeout: 5 singleMsgRead: enable: false failedContinue: true timeout: 5 superGroupOnlinePush: enable: false failedContinue: true timeout: 5 transferGroupOwner: enable: false failedContinue: true timeout: 5 updateUserInfo: enable: false failedContinue: true timeout: 5 url: null userKickOff: enable: false failedContinue: true timeout: 5 userOffline: enable: false failedContinue: true timeout: 5 userOnline: enable: false failedContinue: true timeout: 5 chatPersistenceMysql: true chatRecordsClearTime: 0 2 * * 3 envs: discovery: k8s groupMessageHasReadReceiptEnable: true im-admin: nickname: - imAdmin userID: - imAdmin iosPush: badgeCount: true production: false pushSound: xxx kafka: addr: - im-kafka:9092 consumerGroupID: msgToMongo: mongo msgToMySql: mysql msgToPush: push msgToRedis: redis latestMsgToRedis: topic: latestMsgToRedis msgToPush: topic: msgToPush offlineMsgToMongo: topic: offlineMsgToMongoMysql password: proot username: root log: isJson: false isStdout: true remainLogLevel: 6 remainRotationCount: 2 rotationTime: 24 
storageLocation: ../logs/ withStack: false longConnSvr: openImMessageGatewayPort: - 88 openImWsPort: - 80 websocketMaxConnNum: 100000 websocketMaxMsgLen: 4096 websocketTimeout: 10 manager: nickname: null userID: null messageVerify: friendVerify: false mongo: address: - im-mongodb:27017 database: openim_v3 maxPoolSize: 100 password: openIM123 uri: "" username: openIM msgCacheTimeout: 86400 msgDestructTime: 0 2 * * * multiLoginPolicy: 1 object: apiURL: https://openim1.server.top/api aws: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 endpoint: '''''' publicRead: false region: us-east-1 cos: bucketURL: https://temp-1252357374.cos.ap-chengdu.myqcloud.com publicRead: false secretID: "" secretKey: "" sessionToken: "" enable: minio kodo: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 bucketURL: http://your.domain.com endpoint: http://s3.cn-east-1.qiniucs.com publicRead: false sessionToken: "" minio: accessKeyID: root bucket: openim endpoint: http://im-minio:9000 publicRead: false secretAccessKey: openIM123 sessionToken: "" signEndpoint: https://openim1.server.top/im-minio-api oss: accessKeyID: "" accessKeySecret: "" bucket: demo-9999999 bucketURL: https://demo-9999999.oss-cn-chengdu.aliyuncs.com endpoint: https://oss-cn-chengdu.aliyuncs.com publicRead: false sessionToken: "" prometheus: apiPrometheusPort: - 90 authPrometheusPort: - 90 conversationPrometheusPort: - 90 enable: true friendPrometheusPort: - 90 grafanaUrl: https://openim2.server.top/ groupPrometheusPort: - 90 messageGatewayPrometheusPort: - 90 messagePrometheusPort: - 90 messageTransferPrometheusPort: - 90 - 90 - 90 - 90 pushPrometheusPort: - 90 rtcPrometheusPort: - 90 thirdPrometheusPort: - 90 userPrometheusPort: - 90 push: enable: getui fcm: serviceAccount: x.json geTui: appKey: "" channelID: "" channelName: "" intent: "" masterSecret: "" pushUrl: https://restapi.getui.com/v2/$appId jpns: appKey: null masterSecret: null pushIntent: null pushUrl: null redis: address: - im-redis-master:6379 
password: openIM123 username: "" retainChatRecords: 365 rpc: listenIP: 0.0.0.0 registerIP: "" rpcPort: openImAuthPort: - 80 openImConversationPort: - 80 openImFriendPort: - 80 openImGroupPort: - 80 openImMessageGatewayPort: - 88 openImMessagePort: - 80 openImPushPort: - 80 openImThirdPort: - 80 openImUserPort: - 80 rpcRegisterName: openImAuthName: openimserver-openim-rpc-auth:80 openImConversationName: openimserver-openim-rpc-conversation:80 openImFriendName: openimserver-openim-rpc-friend:80 openImGroupName: openimserver-openim-rpc-group:80 openImMessageGatewayName: openimserver-openim-msggateway:88 openImMsgName: openimserver-openim-rpc-msg:80 openImPushName: openimserver-openim-push:80 openImThirdName: openimserver-openim-rpc-third:80 openImUserName: openimserver-openim-rpc-user:80 secret: openIM123 singleMessageHasReadReceiptEnable: true tokenPolicy: expire: 90 zookeeper: address: - 172.28.0.1:12181 password: "" schema: openim username: "" notification.yaml: |+ --- # Source: openim-api/charts/openim-msggateway-proxy/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm 
spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway/templates/serviceheadless.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msggateway-headless labels: helm.sh/chart: openim-msggateway-0.1.0 app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: ports: - port: 80 targetPort: http protocol: TCP name: http - port: 88 targetPort: rpc protocol: TCP name: rpc - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msggateway app.kubernetes.io/instance: release-name clusterIP: None --- # Source: openim-api/charts/openim-msgtransfer/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-msgtransfer labels: helm.sh/chart: openim-msgtransfer-0.1.0 app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-push/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-push labels: helm.sh/chart: openim-push-0.1.0 app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port 
selector: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-auth/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-auth labels: helm.sh/chart: openim-rpc-auth-0.1.0 app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-conversation/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-conversation labels: helm.sh/chart: openim-rpc-conversation-0.1.0 app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-friend/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-friend labels: helm.sh/chart: openim-rpc-friend-0.1.0 app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-group/templates/service.yaml apiVersion: v1 kind: Service metadata: name: 
release-name-openim-rpc-group labels: helm.sh/chart: openim-rpc-group-0.1.0 app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-msg/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-msg labels: helm.sh/chart: openim-rpc-msg-0.1.0 app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-msg app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-third/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-third labels: helm.sh/chart: openim-rpc-third-0.1.0 app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-third app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-rpc-user/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-rpc-user labels: helm.sh/chart: openim-rpc-user-0.1.0 app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 
targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-rpc-user app.kubernetes.io/instance: release-name --- # Source: openim-api/templates/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-openim-api labels: helm.sh/chart: openim-api-0.1.17 app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name app.kubernetes.io/version: "3.6.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - port: 80 targetPort: http protocol: TCP name: http - port: 90 targetPort: 90 protocol: TCP name: metrics-port selector: app.kubernetes.io/name: openim-api app.kubernetes.io/instance: release-name --- # Source: openim-api/charts/openim-msggateway-proxy/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-msggateway-proxy labels: helm.sh/chart: openim-msggateway-proxy-0.1.0 app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msggateway-proxy app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msggateway-proxy securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway-proxy:v3.5.0" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP - name: rpc containerPort: 88 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: 
- mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-msgtransfer/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-msgtransfer labels: helm.sh/chart: openim-msgtransfer-0.1.0 app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-msgtransfer app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-msgtransfer securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msgtransfer:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-push/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-push labels: helm.sh/chart: openim-push-0.1.0 app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" 
app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-push app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-push securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-push:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-auth/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-auth labels: helm.sh/chart: openim-rpc-auth-0.1.0 app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-auth app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-auth securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-auth:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # 
httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-conversation/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-conversation labels: helm.sh/chart: openim-rpc-conversation-0.1.0 app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-conversation app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-conversation securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-conversation:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: 
openim-api/charts/openim-rpc-friend/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-friend labels: helm.sh/chart: openim-rpc-friend-0.1.0 app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-friend app.kubernetes.io/instance: release-name spec: serviceAccountName: default securityContext: {} containers: - name: openim-rpc-friend securityContext: {} image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-friend:release-v3.6" imagePullPolicy: Always ports: - name: http containerPort: 80 protocol: TCP #livenessProbe: # httpGet: # path: / # port: http #readinessProbe: # httpGet: # path: / # port: http env: - name: MY_MSGGATEWAY_REPLICACOUNT value: "1" - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: {} volumeMounts: - mountPath: /openim/openim-server/config/config.yaml name: config subPath: config.yaml - mountPath: /openim/openim-server/config/notification.yaml name: config subPath: notification.yaml volumes: - name: config configMap: name: openim-cm --- # Source: openim-api/charts/openim-rpc-group/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-openim-rpc-group labels: helm.sh/chart: openim-rpc-group-0.1.0 app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name app.kubernetes.io/version: "1.16.0" app.kubernetes.io/managed-by: Helm spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: release-name template: metadata: labels: app.kubernetes.io/name: openim-rpc-group app.kubernetes.io/instance: 
release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-rpc-group
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-group:release-v3.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-msg/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-msg
  labels:
    helm.sh/chart: openim-rpc-msg-0.1.0
    app.kubernetes.io/name: openim-rpc-msg
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-msg
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-msg
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-rpc-msg
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-msg:release-v3.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-third/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-third
  labels:
    helm.sh/chart: openim-rpc-third-0.1.0
    app.kubernetes.io/name: openim-rpc-third
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-third
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-third
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-rpc-third
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-third:release-v3.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-user/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-user
  labels:
    helm.sh/chart: openim-rpc-user-0.1.0
    app.kubernetes.io/name: openim-rpc-user
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-user
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-user
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-rpc-user
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-user:release-v3.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.17
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "3.6.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-api
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-api:release-v3.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msggateway/templates/deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-openim-msggateway
  labels:
    helm.sh/chart: openim-msggateway-0.1.0
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: release-name-openim-msggateway-headless
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-msggateway
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-msggateway
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: openim-msggateway
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway:release-v3.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: rpc
              containerPort: 88
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msggateway-proxy/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-openim-msggateway-proxy
  labels:
    helm.sh/chart: openim-msggateway-proxy-0.1.0
    app.kubernetes.io/name: openim-msggateway-proxy
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /msg_gateway(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-openim-msggateway-proxy
                port:
                  number: 80
---
# Source: openim-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.17
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "3.6.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-openim-api
                port:
                  number: 80
```
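Note that `openim-msggateway` is rendered as a StatefulSet bound to the headless service `release-name-openim-msggateway-headless`, so each replica gets a stable, per-pod DNS name. A minimal sketch of the addresses this implies, assuming the conventional `cluster.local` cluster domain (the helper function itself is illustrative, not part of the chart):

```python
def gateway_addrs(replicas: int, namespace: str = "default") -> list:
    # StatefulSet and headless-service names taken from the rendered
    # manifest above; "cluster.local" is an assumed default cluster domain.
    sts = "release-name-openim-msggateway"
    headless = sts + "-headless"
    return [
        "{}-{}.{}.{}.svc.cluster.local".format(sts, i, headless, namespace)
        for i in range(replicas)
    ]

print(gateway_addrs(2))
```

This stable naming is presumably why the gateway is a StatefulSet rather than a Deployment: with `MY_MSGGATEWAY_REPLICACOUNT` set, each replica can be addressed individually by ordinal.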
openim templates get ./charts/openim-chat -f k8s-chat-server-config.yaml -f config-chatserver.yaml

```yaml
---
# Source: admin-api/templates/app-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: imchat-cm
data:
  config.yaml: |+
    adminApi:
      listenIP: null
      openImAdminApiPort:
        - 80
    adminList:
      - adminID: null
        imAdmin: null
        nickname: null
      - adminID: null
        imAdmin: null
        nickname: null
      - adminID: null
        imAdmin: null
        nickname: null
    chatAdmin:
      - adminID: chatAdmin
        imAdmin: imAdmin
        nickname: chatAdmin
    chatApi:
      listenIP: null
      openImChatApiPort:
        - 80
    envs:
      discovery: k8s
    liveKit:
      key: ""
      liveKitUrl: wss://im-livekiturl:7880
      secret: ""
    log:
      isJson: false
      isStdout: true
      remainLogLevel: 6
      remainRotationCount: 2
      rotationTime: 24
      storageLocation: ../logs/
      withStack: false
    mongo:
      address:
        - im-mongodb:27017
      database: openim_v3
      maxPoolSize: 100
      password: openIM123
      uri: ""
      username: openIM
    openIMUrl: http://openimserver-openim-api
    redis:
      address:
        - im-redis-master:6379
      password: openIM123
      username: null
    rpc:
      listenIP: null
      registerIP: null
    rpcPort:
      openImAdminPort:
        - 80
      openImChatPort:
        - 80
    rpcRegisterName:
      openImAdminName: openimchat-admin-rpc:80
      openImChatName: openimchat-chat-rpc:80
    secret: openIM123
    tokenPolicy:
      expire: 86400
    verifyCode:
      ali:
        accessKeyId: ""
        accessKeySecret: ""
        endpoint: dysmsapi.aliyuncs.com
        signName: ""
        verificationCodeTemplateCode: ""
      len: 6
      mail:
        senderAuthorizationCode: ""
        senderMail: ""
        smtpAddr: smtp.qq.com
        smtpPort: 465
        title: ""
      maxCount: 10
      superCode: "666666"
      uintTime: 86400
      use: ""
      validCount: 5
      validTime: 300
    zookeeper:
      password: ""
      schema: openim
      username: ""
      zkAddr:
        - 127.0.0.1:12181
---
# Source: admin-api/charts/admin-rpc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-admin-rpc
  labels:
    helm.sh/chart: admin-rpc-0.1.0
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/chat-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/chat-rpc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-chat-rpc
  labels:
    helm.sh/chart: chat-rpc-0.1.0
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.17
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.6.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/admin-rpc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-admin-rpc
  labels:
    helm.sh/chart: admin-rpc-0.1.0
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin-rpc
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin-rpc
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: admin-rpc
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-admin:release-v1.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chat-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: chat-api
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-chat:release-v1.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-rpc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-chat-rpc
  labels:
    helm.sh/chart: chat-rpc-0.1.0
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chat-rpc
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat-rpc
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: chat-rpc
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-chat:release-v1.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.17
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.6.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext: {}
      containers:
        - name: admin-api
          securityContext: {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-admin:release-v1.6"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources: {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /chat(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-chat-api
                port:
                  number: 80
---
# Source: admin-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.17
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.6.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /complete_admin(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-admin-api
                port:
                  number: 80
```
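All of the Ingress resources in these outputs use the same prefix-stripping pattern: `use-regex: "true"` with a path like `/chat(/|$)(.*)` and `rewrite-target: /$2`, so the upstream service sees the URL without the routing prefix. A rough illustration of that mapping, using Python's `re` as a stand-in for nginx's PCRE engine (for this simple pattern the behavior matches):

```python
import re

# Regex from the chat-api Ingress above; nginx rewrites the path to "/$2",
# i.e. the second capture group, stripping the "/chat" prefix.
pattern = re.compile(r"/chat(/|$)(.*)")

def rewrite(path):
    # Mimic rewrite-target /$2: non-matching paths pass through unchanged.
    m = pattern.match(path)
    return "/" + m.group(2) if m else path

print(rewrite("/chat/account/login"))  # -> /account/login
print(rewrite("/chat"))                # -> /
```

The same logic applies to the `/api`, `/msg_gateway`, and `/complete_admin` paths in the other Ingress manifests, each forwarding to its respective backend service on port 80.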
dependabot[bot] commented 1 month ago

Superseded by #101.