openimsdk / openim-docker

openim-docker configuration for deploying OpenIM. Provides a build solution for a stable distribution, as well as a Docker Compose deployment strategy.
https://openim.io
Apache License 2.0
36 stars · 46 forks

Bug: dependency failed to start: container openim-server is unhealthy #73

Closed happy2wh7 closed 4 months ago

happy2wh7 commented 7 months ago

What happened?

Followed the official documentation: https://docs.openim.io/zh-Hans/guides/gettingStarted/dockerCompose

git clone https://github.com/openimsdk/openim-docker openim-docker && cd openim-docker && make init
export OPENIM_IP="66.42.42.118"
docker compose up -d

Then it fails with: `dependency failed to start: container openim-server is unhealthy`. The openim-server container log shows:

Checking components Round 5...
Starting Mongo failed, server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: 172.28.0.1:37017, Type: Unknown, Last error: dial tcp 172.28.0.1:37017: i/o timeout }, ] };ths addr is:172.28.0.1:37017
Checking components Round 6...
Starting Mongo failed, server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: 172.28.0.1:37017, Type: Unknown, Last error: dial tcp 172.28.0.1:37017: i/o timeout }, ] };ths addr is:172.28.0.1:37017

What did you expect to happen?

The container should start normally.

How can we reproduce it (as minimally and precisely as possible)?

docker compose up -d

Anything else we need to know?

No response

version

root@vultr:~# docker --version
Docker version 24.0.7, build afdd53b

Cloud provider

vultr.com

OS version

PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Install tools

kubbot commented 7 months ago

Hello! Thank you for filing an issue.

If this is a bug report, please include relevant logs to help us debug the problem.

Join slack 🤖 to connect and communicate with our developers.

cubxxw commented 7 months ago

Check mongo status:

docker ps
skiffer-git commented 7 months ago

It seems that MongoDB has not started; can you please confirm?

happy2wh7 commented 7 months ago

Check mongo status:

docker ps
root@vultr:~/openim-docker# docker ps
CONTAINER ID   IMAGE                                                    COMMAND                  CREATED          STATUS                      PORTS                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        NAMES
c7fae10e21d0   ghcr.io/openimsdk/openim-server:release-v3.5             "/openim/openim-serv…"   10 minutes ago   Up 10 minutes (unhealthy)   0.0.0.0:10001-10002->10001-10002/tcp, :::10001-10002->10001-10002/tcp, 0.0.0.0:20100->20100/tcp, :::20100->20100/tcp, 0.0.0.0:20110->20110/tcp, :::20110->20110/tcp, 0.0.0.0:20120->20120/tcp, :::20120->20120/tcp, 0.0.0.0:20130->20130/tcp, :::20130->20130/tcp, 0.0.0.0:20140->20140/tcp, :::20140->20140/tcp, 0.0.0.0:20150->20150/tcp, :::20150->20150/tcp, 0.0.0.0:20160->20160/tcp, :::20160->20160/tcp, 0.0.0.0:20170->20170/tcp, :::20170->20170/tcp, 0.0.0.0:20230->20230/tcp, :::20230->20230/tcp, 0.0.0.0:21300-21301->21300-21301/tcp, :::21300-21301->21300-21301/tcp, 0.0.0.0:21400-21403->21400-21403/tcp, :::21400-21403->21400-21403/tcp   openim-server
5cc3d6720d4d   bitnami/node-exporter:1.7.0                              "/opt/bitnami/node-e…"   10 minutes ago   Up 10 minutes               0.0.0.0:19100->9100/tcp, :::19100->9100/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  node-exporter
009c96ec2f81   ghcr.io/openimsdk/openim-admin:toc-base-open-docker.35   "/docker-entrypoint.…"   10 minutes ago   Up 10 minutes               0.0.0.0:11002->80/tcp, :::11002->80/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      openim-admin
50dd28dd3726   bitnami/zookeeper:3.8                                    "/opt/bitnami/script…"   10 minutes ago   Up 10 minutes               2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:12181->2181/tcp, :::12181->2181/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    zookeeper
d52e8aec0c25   redis:7.0.0                                              "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes               0.0.0.0:16379->6379/tcp, :::16379->6379/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  redis
f396e86a7919   ghcr.io/openimsdk/openim-web:latest                      "bash -c 'openim-web…"   10 minutes ago   Up 10 minutes               0.0.0.0:11001->11001/tcp, :::11001->11001/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                openim-web
9ba17c6e15b4   grafana/grafana:10.2.2                                   "/run.sh"                10 minutes ago   Up 10 minutes               0.0.0.0:13000->3000/tcp, :::13000->3000/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  grafana
acf97bc08eb3   mysql:5.7                                                "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes               33060/tcp, 0.0.0.0:13306->3306/tcp, :::13306->3306/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       mysql
73255abf24b9   minio/minio:latest                                       "/usr/bin/docker-ent…"   10 minutes ago   Up 10 minutes               0.0.0.0:9090->9090/tcp, :::9090->9090/tcp, 0.0.0.0:10005->9000/tcp, :::10005->9000/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       minio
afe9d74950c2   mongo:6.0.2                                              "docker-entrypoint.s…"   10 minutes ago   Up 10 minutes               0.0.0.0:37017->27017/tcp, :::37017->27017/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                mongo
12edf8bd06e4   bitnami/kafka:3.5.1                                      "/opt/bitnami/script…"   10 minutes ago   Up 10 minutes               0.0.0.0:19092->9092/tcp, :::19092->9092/tcp                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  kafka
happy2wh7 commented 7 months ago

It seems that MongoDB has not started; can you please confirm?

Latest MongoDB log:

{"t":{"$date":"2024-01-03T14:46:16.727+08:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"},"storage":{"wiredTiger":{"engineConfig":{"cacheSizeGB":1}}}}}}
{"t":{"$date":"2024-01-03T14:46:16.728+08:00"},"s":"W",  "c":"STORAGE",  "id":22271,   "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"/data/db/mongod.lock"}}
{"t":{"$date":"2024-01-03T14:46:16.728+08:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2024-01-03T14:46:16.728+08:00"},"s":"W",  "c":"STORAGE",  "id":22302,   "ctx":"initandlisten","msg":"Recovering data from the last clean checkpoint."}
{"t":{"$date":"2024-01-03T14:46:16.728+08:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2024-01-03T14:46:16.728+08:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=1024M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],"}}
{"t":{"$date":"2024-01-03T14:46:17.791+08:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":1063}}
{"t":{"$date":"2024-01-03T14:46:17.791+08:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2024-01-03T14:46:17.797+08:00"},"s":"W",  "c":"CONTROL",  "id":5123300, "ctx":"initandlisten","msg":"vm.max_map_count is too low","attr":{"currentValue":65530,"recommendedMinimum":1677720,"maxConns":838860},"tags":["startupWarnings"]}
{"t":{"$date":"2024-01-03T14:46:17.799+08:00"},"s":"I",  "c":"NETWORK",  "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":17,"maxWireVersion":17},"outgoing":{"minWireVersion":17,"maxWireVersion":17},"isInternalClient":true}}}
{"t":{"$date":"2024-01-03T14:46:17.799+08:00"},"s":"I",  "c":"REPL",     "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"6.0","context":"startup"}}
{"t":{"$date":"2024-01-03T14:46:17.799+08:00"},"s":"I",  "c":"STORAGE",  "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
{"t":{"$date":"2024-01-03T14:46:17.801+08:00"},"s":"I",  "c":"CONTROL",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2024-01-03T14:46:17.801+08:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2024-01-03T14:46:17.804+08:00"},"s":"I",  "c":"REPL",     "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
{"t":{"$date":"2024-01-03T14:46:17.805+08:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2024-01-03T14:46:17.807+08:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2024-01-03T14:46:17.807+08:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2024-01-03T14:46:17.807+08:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
happy2wh7 commented 7 months ago
root@vultr:~/openim-docker# ss -lntp|grep 37017
LISTEN 0      4096         0.0.0.0:37017      0.0.0.0:*    users:(("docker-proxy",pid=9976,fd=4))
LISTEN 0      4096            [::]:37017         [::]:*    users:(("docker-proxy",pid=9983,fd=4))
cubxxw commented 7 months ago
docker logs c7fae10e21d0

Or check the openim-server logs:

cat openim-server/logs/*
cat openim-server/_output/logs/*
happy2wh7 commented 7 months ago

On the host, entering a container to connect to MongoDB:

root@vultr:~/openim-docker# docker run --network host -ti bitnami/mongodb /bin/bash

I have no name!@vultr:/$ mongosh --username root --password openIM123 127.0.0.1:37017
...
...
...
test> show dbs
admin   100.00 KiB
config   12.00 KiB
local    72.00 KiB
happy2wh7 commented 7 months ago
docker logs c7fae10e21d0

Or check the openim-server logs:

cat openim-server/logs/*
cat openim-server/_output/logs/*
root@vultr:~/openim-docker/openim-server# ls
config  logs  _output
root@vultr:~/openim-docker/openim-server# ls logs
root@vultr:~/openim-docker/openim-server# ls _output/
logs
root@vultr:~/openim-docker/openim-server# cat _output/logs/openim_20240103.log

# Use Docker to start all openim service
Generating /openim/openim-server/config/config.yaml as it does not exist.
⌚  Working with template file: /openim/openim-server/deployments/templates/openim.yaml to generate /openim/openim-server/config/config.yaml...
Generating /openim/openim-server/config/alertmanager.yml as it does not exist.
⌚  Working with template file: /openim/openim-server/deployments/templates/alertmanager.yml to generate /openim/openim-server/config/alertmanager.yml...
Generating /openim/openim-server/config/prometheus.yml as it does not exist.
⌚  Working with template file: /openim/openim-server/deployments/templates/prometheus.yml to generate /openim/openim-server/config/prometheus.yml...
Generating /openim/openim-server/.env as it does not exist.
⌚  Working with template file: /openim/openim-server/deployments/templates/env-template.yaml to generate /openim/openim-server/.env...
Generating /openim/openim-server/config/notification.yaml as it does not exist.
📋 Copying /openim/openim-server/deployments/templates/notification.yaml to /openim/openim-server/config/notification.yaml...
Generating /openim/openim-server/config/email.tmpl as it does not exist.
📋 Copying /openim/openim-server/deployments/templates/email.tmpl to /openim/openim-server/config/email.tmpl...
Generating /openim/openim-server/config/instance-down-rules.yml as it does not exist.
📋 Copying /openim/openim-server/deployments/templates/instance-down-rules.yml to /openim/openim-server/config/instance-down-rules.yml...
[success 0103 14:46:18] ==>  Configuration and example files operation complete!

# Begin to start all openim service scripts

## Pre Starting OpenIM services
Preparing to start OpenIM Tools...
Starting ncpu...
Starting PATH: /openim/openim-server/_output/bin/tools/linux/amd64/ncpu...
Starting ncpu...
Starting component...
Starting PATH: /openim/openim-server/_output/bin/tools/linux/amd64/component...
Starting component...

# Begin to check all openim service

## Check all dependent service ports

## Check OpenIM service name
!!! [0103 14:47:10] Call tree:
!!! [0103 14:47:10]  1: /openim/openim-server/scripts/install/openim-msgtransfer.sh:145 openim::msgtransfer::check(...)
!!! [0103 14:47:10]  2: /openim/openim-server/scripts/check-all.sh:83 source(...)
!!! Error in /openim/openim-server/scripts/install/openim-msgtransfer.sh:64
  Error in /openim/openim-server/scripts/install/openim-msgtransfer.sh:64. 'PIDS=$(pgrep -f "${OPENIM_OUTPUT_HOSTBIN}/openim-msgtransfer")' exited with status 1
Call stack:
  1: /openim/openim-server/scripts/install/openim-msgtransfer.sh:64 openim::msgtransfer::check(...)
  2: /openim/openim-server/scripts/install/openim-msgtransfer.sh:145 source(...)
  3: /openim/openim-server/scripts/check-all.sh:83 main(...)
Exiting with status 1

# Begin to check all openim service

## Check all dependent service ports

## Check OpenIM service name
!!! [0103 14:48:11] Call tree:
!!! [0103 14:48:11]  1: /openim/openim-server/scripts/install/openim-msgtransfer.sh:145 openim::msgtransfer::check(...)
!!! [0103 14:48:11]  2: /openim/openim-server/scripts/check-all.sh:83 source(...)
!!! Error in /openim/openim-server/scripts/install/openim-msgtransfer.sh:64
  Error in /openim/openim-server/scripts/install/openim-msgtransfer.sh:64. 'PIDS=$(pgrep -f "${OPENIM_OUTPUT_HOSTBIN}/openim-msgtransfer")' exited with status 1
Call stack:
  1: /openim/openim-server/scripts/install/openim-msgtransfer.sh:64 openim::msgtransfer::check(...)
  2: /openim/openim-server/scripts/install/openim-msgtransfer.sh:145 source(...)
  3: /openim/openim-server/scripts/check-all.sh:83 main(...)
Exiting with status 1

# Begin to check all openim service

## Check all dependent service ports

## Check OpenIM service name
!!! [0103 14:49:11] Call tree:
!!! [0103 14:49:11]  1: /openim/openim-server/scripts/install/openim-msgtransfer.sh:145 openim::msgtransfer::check(...)
!!! [0103 14:49:11]  2: /openim/openim-server/scripts/check-all.sh:83 source(...)
!!! Error in /openim/openim-server/scripts/install/openim-msgtransfer.sh:64
  Error in /openim/openim-server/scripts/install/openim-msgtransfer.sh:64. 'PIDS=$(pgrep -f "${OPENIM_OUTPUT_HOSTBIN}/openim-msgtransfer")' exited with status 1
Call stack:
  1: /openim/openim-server/scripts/install/openim-msgtransfer.sh:64 openim::msgtransfer::check(...)
  2: /openim/openim-server/scripts/install/openim-msgtransfer.sh:145 source(...)
  3: /openim/openim-server/scripts/check-all.sh:83 main(...)
Exiting with status 1
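The repeated failure above is a `pgrep` lookup: `pgrep` exits with status 1 when no process matches, so the check script treats a missing openim-msgtransfer process as fatal, which is consistent with the MongoDB connection failure earlier in the same log. A minimal sketch of that kind of check (`check_running` is a hypothetical helper, not the actual OpenIM script function):

```shell
# pgrep -f matches against the full command line and returns
# exit status 1 when no process matches the pattern.
check_running() {
  pids=$(pgrep -f "$1") || return 1  # status 1 => process not found
  echo "$pids"
}

if ! check_running "openim-msgtransfer"; then
  echo "openim-msgtransfer is not running" >&2
fi
```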
skiffer-git commented 7 months ago

docker network inspect openim-docker_openim-server

cubxxw commented 7 months ago

It seems that your issue is related to the Docker Compose startup process, specifically concerning the health check of the openim-server. You have used the following configuration:

depends_on:
  openim-server:
    condition: service_healthy

However, Docker Compose is starting subsequent containers before the openim-server has successfully passed its health checks. This premature startup can lead to configuration files not being properly mapped.
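For context, the `service_healthy` condition only takes effect if the gating service defines a healthcheck. A hedged sketch of what such a pair looks like in a compose file (the probe command, port, and timings below are illustrative, not the actual openim-docker values):

```yaml
services:
  openim-server:
    healthcheck:
      test: ["CMD", "sh", "-c", "nc -z 127.0.0.1 10002"]  # illustrative probe
      interval: 10s
      timeout: 5s
      retries: 6
      start_period: 30s

  openim-chat:
    depends_on:
      openim-server:
        condition: service_healthy
```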

To resolve this, please follow these steps:

  1. Bring Down the Current Services: First, you need to bring down the running services. You can do this by executing:

    docker compose down
  2. Clean Up Configuration and Files: Use make clean to remove all existing configurations and files. This ensures a fresh start:

    make clean
  3. Modify the Docker Compose Configuration: Next, edit your docker-compose.yaml file and temporarily comment out every service that depends on openim-server, starting with openim-chat.

  4. Start Only the OpenIM Server: Now, bring up only the openim-server by executing:

    docker compose up -d
  5. Monitor the OpenIM Server Logs: Monitor the logs of openim-server to check for successful initialization. Use the following command:

    docker compose logs -f openim-server

    Wait until you confirm that openim-server is healthy and running as expected.

  6. Uncomment and Restart Services: Once openim-server is up and running without issues, uncomment the previously commented services in your docker-compose.yaml file.

  7. Restart All Services: Execute the following command to bring up all the services:

    docker compose up -d
  8. Verify Logs of OpenIM Server and OpenIM Chat: Finally, check the logs of both openim-server and openim-chat to ensure they are operating correctly. You can view the logs using the docker compose logs command.
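The waiting step in the procedure above can be automated with a small polling helper (a sketch; `wait_healthy` is a hypothetical helper, and the `docker inspect` format string assumes the container defines a Docker healthcheck):

```shell
# Poll a command until it succeeds or a timeout elapses.
wait_healthy() {
  _cmd=$1
  _timeout=${2:-120}  # seconds
  _waited=0
  until sh -c "$_cmd"; do
    _waited=$((_waited + 2))
    [ "$_waited" -ge "$_timeout" ] && return 1
    sleep 2
  done
}

# Example (hypothetical): wait for openim-server to report healthy.
# wait_healthy "docker inspect -f '{{.State.Health.Status}}' openim-server | grep -q healthy" 300
```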

happy2wh7 commented 7 months ago

It seems that your issue is related to the Docker Compose startup process, specifically concerning the health check of the openim-server. You have used the following configuration:

... ... ...


8. Verify Logs of OpenIM Server and OpenIM Chat: Finally, check the logs of both openim-server and openim-chat to ensure they are operating correctly. You can view the logs using the docker compose logs command.

The problem persists.

happy2wh7 commented 7 months ago

docker network inspect openim-docker_openim-server

[
    {
        "Name": "openim-docker_openim-server",
        "Id": "2d7e078ec507d968da419084e6023d4a4d203879e0a637030de23b9dad5a65a8",
        "Created": "2024-01-03T13:06:29.165763021Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.28.0.0/16",
                    "Gateway": "172.28.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "01797a6b7633edff74bcc8455b51aed3947c4a4100fae354e8bd8e374de2306f": {
                "Name": "openim-web",
                "EndpointID": "ee41a90e7fa1589051c689c868115fe6e16bab26671bca33dd38d2860035a6fa",
                "MacAddress": "02:42:ac:1c:00:07",
                "IPv4Address": "172.28.0.7/16",
                "IPv6Address": ""
            },
            "0532eaddd5ad5c9e61897e26767a7558a6880bc994382b9b8d4e79542f7e2761": {
                "Name": "grafana",
                "EndpointID": "0c2a4f82bd6c31bc6d04408fb887b764407ba91a21436ce4f8b9b64ce6ef9ed4",
                "MacAddress": "02:42:ac:1c:00:0b",
                "IPv4Address": "172.28.0.11/16",
                "IPv6Address": ""
            },
            "1312469773dd3906daa07ddc217c6aeb1de43098192c43416fe54bc2b575de9c": {
                "Name": "openim-admin",
                "EndpointID": "6eb41fb90d5e759eb7cda9e417bf365f6995912ec09e46fa2c532d81c9abd37b",
                "MacAddress": "02:42:ac:1c:00:0d",
                "IPv4Address": "172.28.0.13/16",
                "IPv6Address": ""
            },
            "24de7f94b5ed8f89ac9465885c8c8eaa0f9900ed8ea846abb829c217c8562166": {
                "Name": "mongo",
                "EndpointID": "8c665bc5156677e3d0d2be7b643183486fcf8c49addb0634a317f3cba1a32ec4",
                "MacAddress": "02:42:ac:1c:00:02",
                "IPv4Address": "172.28.0.2/16",
                "IPv6Address": ""
            },
            "2b30a32b3685b032174ff41f5da4c1d50a5dcfac177893f859d147312adab4b5": {
                "Name": "node-exporter",
                "EndpointID": "4d3d3c00c80ff02943ea17d066bfaca9a85f48fa7710f2019182bc14e5535626",
                "MacAddress": "02:42:ac:1c:00:0c",
                "IPv4Address": "172.28.0.12/16",
                "IPv6Address": ""
            },
            "5a8fc3b4a111550aec579804f9224ed764c354fe36197829159ed94e3f1448ec": {
                "Name": "mysql",
                "EndpointID": "23193a0fc40ae05f546ab947cb1a4f4d40b7dbd7f45744a31f63bd66eb8ed8d5",
                "MacAddress": "02:42:ac:1c:00:0f",
                "IPv4Address": "172.28.0.15/16",
                "IPv6Address": ""
            },
            "6dbe736d52e946c2c3185cfd704c707e6e04d7afc05f4c0b9e0e3ba8bb48a56a": {
                "Name": "redis",
                "EndpointID": "f8744b4185697dc28d1814b43968053d2a33c4057aac4373c505b1618b1d4d15",
                "MacAddress": "02:42:ac:1c:00:03",
                "IPv4Address": "172.28.0.3/16",
                "IPv6Address": ""
            },
            "8061a672dad0ec9584e1123cdb6aebfb9ab38d7ce80acb43df777f4530a70069": {
                "Name": "zookeeper",
                "EndpointID": "1fc81d026f44d894c32b23e1135e7ffed07e8dc8331637ba4f54adb34e614dbc",
                "MacAddress": "02:42:ac:1c:00:05",
                "IPv4Address": "172.28.0.5/16",
                "IPv6Address": ""
            },
            "923fef32af4ddf6711da2aef0bd63c2c4ee29d5d927bd5b1dfb60c04e22281a0": {
                "Name": "minio",
                "EndpointID": "5097b50a0829219ebb5d8bde8909161f6dec9a3870e1e4eff1b9c93570166c21",
                "MacAddress": "02:42:ac:1c:00:06",
                "IPv4Address": "172.28.0.6/16",
                "IPv6Address": ""
            },
            "a6815c8624af54ac1ac51ad5738a80afdfa925c0b10d8f5636d64d71a9e48f0f": {
                "Name": "openim-server",
                "EndpointID": "f0dffde9643bafa213d5454d4d9ed280084d13a08d4426988dd259c160ceb531",
                "MacAddress": "02:42:ac:1c:00:08",
                "IPv4Address": "172.28.0.8/16",
                "IPv6Address": ""
            },
            "c534bc8c4c4bb3568affe2abad2a9ae757902d947f899d7d2c0ac00eac36ba39": {
                "Name": "kafka",
                "EndpointID": "344f91cf2de2afc928ff9705e1eeb19a23de7f0643ca08f010e23cdc13afd998",
                "MacAddress": "02:42:ac:1c:00:04",
                "IPv4Address": "172.28.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "openim-server",
            "com.docker.compose.project": "openim-docker",
            "com.docker.compose.version": "2.21.0"
        }
    }
]
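The JSON above confirms that 172.28.0.1, the address the server keeps dialing, is the bridge gateway of this compose network. A small pipeline can pull that value out of the `docker network inspect` output for a quick sanity check (`gateway_of` is a hypothetical helper; with jq available, `jq -r '.[0].IPAM.Config[0].Gateway'` does the same):

```shell
# Extract the bridge gateway address from `docker network inspect` JSON.
gateway_of() {
  grep -o '"Gateway": *"[^"]*"' | sed 's/.*"\(.*\)"/\1/'
}

# Usage (hypothetical):
#   docker network inspect openim-docker_openim-server | gateway_of
```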
skiffer-git commented 7 months ago

It might be due to the firewall. You can enter the IM container and then use nc -zv 172.28.0.1 37017

skiffer-git commented 7 months ago

root@134e1ea9f82b:/openim/openim-server# nc -zv 172.28.0.1 37017
Connection to 172.28.0.1 37017 port [tcp/*] succeeded!

happy2wh7 commented 7 months ago

It might be due to the firewall. You can enter the IM container and then use nc -zv 172.28.0.1 37017

=========================================

On the host, entering a container to connect to MongoDB:

root@vultr:~/openim-docker# docker run --network host -ti bitnami/mongodb /bin/bash

I have no name!@vultr:/$ mongosh --username root --password openIM123 127.0.0.1:37017
...
...
...
test> show dbs
admin   100.00 KiB
config   12.00 KiB
local    72.00 KiB
cubxxw commented 7 months ago

Hello,

Could you please confirm if the command mongosh --username root --password openIM123 127.0.0.1:37017 successfully connects? If so, I'm curious to know if the command mongosh --username root --password openIM123 172.28.0.1:37017 would also establish a connection. Thank you!

skiffer-git commented 7 months ago

root@134e1ea9f82b:/openim/openim-server# nc -zv 172.28.0.1 37017
Connection to 172.28.0.1 37017 port [tcp/*] succeeded!

happy2wh7 commented 7 months ago

Hello,

Could you please confirm if the command mongosh --username root --password openIM123 127.0.0.1:37017 successfully connects? If so, I'm curious to know if the command mongosh --username root --password openIM123 172.28.0.1:37017 would also establish a connection. Thank you!

The `show dbs` command executed successfully.

skiffer-git commented 7 months ago

Instead of using the address 127.0.0.1:37017, try using the address 172.28.0.1:37017.

happy2wh7 commented 7 months ago

nc -zv 172.28.0.1 37017

# nc -zv 172.28.0.1 37017
Connection to 172.28.0.1 37017 port [tcp/*] succeeded!
kubbot commented 5 months ago

This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days.

u3breeze commented 4 months ago

@happy2wh7 When logging into the system via SSH and viewing the System Information, check whether multiple interfaces carry the address 172.28.0.1. If there are, delete the stale ones with the command `sudo ip link delete br-xxxx type bridge`. This may be necessary to resolve the issue.
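A sketch for spotting such duplicates (Docker names its bridges `br-<network-id-prefix>`; `find_gateways` is a hypothetical helper):

```shell
# Print every interface that carries the 172.28.0.1 gateway address,
# given the output of `ip -o addr show` on stdin.
find_gateways() {
  awk '/ 172\.28\.0\.1\// {print $2}'
}

# Usage (hypothetical):
#   ip -o addr show | find_gateways
#   # more than one line printed => duplicate bridges; remove the stale ones:
#   # sudo ip link delete br-xxxx type bridge   (br-xxxx as in the comment above)
```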

skiffer-git commented 4 months ago

The Docker deployment plan has been fully upgraded. Please refer to the README for the new scheme. The new scheme is simpler to use and supports Linux, Windows, and Mac.