Closed: kitty7c6 closed this issue 2 years ago.
Looks like you already have some parts of Mainflux running:
mainflux-auth-redis is up-to-date
mainflux-auth-db is up-to-date
mainflux-users-db is up-to-date
mainflux-keto-db is up-to-date
mainflux-es-redis is up-to-date
mainflux-things-db is up-to-date
mainflux-jaeger is up-to-date
mainflux-broker is up-to-date
mainflux-vernemq is up-to-date
mainflux-keto is up-to-date
mainflux-keto-migrate is up-to-date
mainflux-auth is up-to-date
mainflux-things is up-to-date
mainflux-users is up-to-date
mainflux-mqtt is up-to-date
mainflux-coap is up-to-date
mainflux-http is up-to-date
mainflux-nginx is up-to-date
I'd suggest that you first clean your Mainflux containers, network, and volumes and start fresh.
Messages such as "trying to connect to postgres/broker" followed by a failure
are fine, because you do not control the order in which the services start up (some take longer than others). Eventually, after a couple (<10) of attempts, all services should be able to connect. You can check this by running docker logs mainflux-<svc-name>.
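For example, a minimal sketch of scanning a service's output for those transient broker errors (the sample line is copied from the coap adapter log further down in this thread; in a live deployment the input would come from docker logs mainflux-coap instead):

```shell
# Count transient broker-connect errors in a service's log output.
# The sample line stands in for real "docker logs mainflux-coap" output.
sample_log='{"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:35.635718491Z"}'
count=$(printf '%s\n' "$sample_log" | grep -c 'Failed to connect to message broker')
echo "broker-connect errors: $count"
```

If that count stops growing and a later line shows a successful connection, the service recovered on its own and no action is needed.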
OK, I cleaned everything with make pv=true cleandocker and then waited for all of the containers to initialize.
I see mainflux-broker | [1] 2022/08/24 11:36:15.229234 [INF] Server is ready,
but I still get the same error: mainflux-coap | {"level":"error","message":"Failed to connect to message broker:
Does this mean the container has not initialized yet?
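The retry behavior described above (a couple of attempts until the broker is reachable) can be sketched as a bounded wait-until-ready loop. Note that check_broker here is a hypothetical stand-in for a real reachability probe; an actual probe might be something like nc -z broker 4222 run on the docker_mainflux-base-net compose network, which is an assumption, not a command from this thread:

```shell
# Retry a broker reachability probe a bounded number of times, the way the
# adapters keep retrying their NATS dial during startup.
check_broker() {
  # Hypothetical stand-in: replace "$@" with a real probe such as
  #   docker run --rm --network docker_mainflux-base-net alpine nc -z broker 4222
  "$@"
}

retry() {
  attempts=$1; shift
  i=1
  until check_broker "$@"; do
    [ "$i" -ge "$attempts" ] && return 1   # give up after N failed attempts
    i=$((i + 1))
    sleep 1                                # back off before the next try
  done
}

retry 10 true && echo "broker reachable"
```

If the probe never succeeds within the attempt budget, that points at a real networking or configuration problem rather than slow startup.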
napster@salmon:~/mainflux$ sudo make run
sed -i "s,file: brokers/.*.yml,file: brokers/nats.yml," docker/docker-compose.yml
sed -i "s,MF_BROKER_URL: .*,MF_BROKER_URL: $\{MF_NATS_URL\}," docker/docker-compose.yml
docker-compose -f docker/docker-compose.yml up
Creating network "docker_mainflux-base-net" with driver "bridge"
Creating volume "docker_mainflux-auth-db-volume" with default driver
Creating volume "docker_mainflux-users-db-volume" with default driver
Creating volume "docker_mainflux-things-db-volume" with default driver
Creating volume "docker_mainflux-keto-db-volume" with default driver
Creating volume "docker_mainflux-auth-redis-volume" with default driver
Creating volume "docker_mainflux-es-redis-volume" with default driver
Creating volume "docker_mainflux-mqtt-broker-volume" with default driver
Pulling keto-db (postgres:13.3-alpine)...
13.3-alpine: Pulling from library/postgres
29291e31a76a: Pull complete
c7f8a1ea71cb: Pull complete
64d8912b293d: Pull complete
0d265a24fb71: Pull complete
06559c1681e8: Pull complete
ed849f5f685e: Pull complete
3a646df07e94: Pull complete
1e40d492b730: Pull complete
Digest: sha256:e98a69a836391fe94d889a6ccfbb21257b93f47b2794da114a82ef23e342342f
Status: Downloaded newer image for postgres:13.3-alpine
Pulling keto-migrate (oryd/keto:v0.6.0-alpha.3)...
v0.6.0-alpha.3: Pulling from oryd/keto
540db60ca938: Pull complete
585b58b29e2a: Pull complete
b4ca58e58e44: Pull complete
47ffba45bb23: Pull complete
Digest: sha256:56dceb7743f7e339a433be252e514b4bca5f166da1de5db401bcc9e653962262
Status: Downloaded newer image for oryd/keto:v0.6.0-alpha.3
Pulling broker (nats:2.2.4-alpine)...
2.2.4-alpine: Pulling from library/nats
540db60ca938: Already exists
6217a0b59da6: Pull complete
7631199270cb: Pull complete
448706151ceb: Pull complete
Digest: sha256:170d97969e727db1daf870639952e97cc847901f39fb8b8bb6af3f4668777f36
Status: Downloaded newer image for nats:2.2.4-alpine
Pulling auth (mainflux/auth:latest)...
latest: Pulling from mainflux/auth
4380b665b78f: Pull complete
87c340d46aa0: Pull complete
Digest: sha256:8ce8079596f31071bb6dbe6d332965e1d392091dc6225de7e30da88612b93ac6
Status: Downloaded newer image for mainflux/auth:latest
Pulling users (mainflux/users:latest)...
latest: Pulling from mainflux/users
25f7f8e5138c: Pull complete
2b20587dd1b3: Pull complete
Digest: sha256:20b722463754956762a1b3992f42072e25a5fc273d34af035b9eef8ba252da9a
Status: Downloaded newer image for mainflux/users:latest
Pulling auth-redis (redis:6.2.2-alpine)...
6.2.2-alpine: Pulling from library/redis
540db60ca938: Already exists
29712d301e8c: Pull complete
8173c12df40f: Pull complete
0be901b3c77d: Pull complete
c33773bf45b4: Pull complete
6eeb0c30f7e7: Pull complete
Digest: sha256:f9577ac6e68c70b518e691406f2bebee49d8db22118fc87bad3b39c16a1cb46e
Status: Downloaded newer image for redis:6.2.2-alpine
Pulling things (mainflux/things:latest)...
latest: Pulling from mainflux/things
898391aab224: Pull complete
ce49defdf456: Pull complete
Digest: sha256:a80d6f776efb06035e7441afc7425e2bd92b948273b54113ab5dba6e12db0936
Status: Downloaded newer image for mainflux/things:latest
Pulling jaeger (jaegertracing/all-in-one:1.20)...
1.20: Pulling from jaegertracing/all-in-one
8b42edb6bd6a: Pull complete
22d8418e7530: Pull complete
b6981149966e: Pull complete
Digest: sha256:54c2ea315dab7215c51c1b06b111c666f594e90317584f84eabbc59aa5856b13
Status: Downloaded newer image for jaegertracing/all-in-one:1.20
Pulling vernemq (mainflux/vernemq:latest)...
latest: Pulling from mainflux/vernemq
6097bfa160c1: Pull complete
31bd4857b30e: Pull complete
fe0f077957d1: Pull complete
406cc323b35b: Pull complete
5adaa4e2ab6e: Pull complete
f7e9cb61c8fd: Pull complete
Digest: sha256:72dd45acad3c6b8bfe31605aa480243d379a0e6f9fe8f3cfa089aa6b4bf9a7a5
Status: Downloaded newer image for mainflux/vernemq:latest
Pulling mqtt-adapter (mainflux/mqtt:latest)...
latest: Pulling from mainflux/mqtt
44d616753ea5: Pull complete
ce78fba97dcd: Pull complete
Digest: sha256:b38fb8b68bc329575d8f5240061f4fc80fe491c5b776d09deaec100fc04f0ccf
Status: Downloaded newer image for mainflux/mqtt:latest
Pulling http-adapter (mainflux/http:latest)...
latest: Pulling from mainflux/http
25f7f8e5138c: Already exists
cf8a2049f34a: Pull complete
Digest: sha256:ca0d6e7201ca7689341b4c0313831c24ca0a98432233aeaaadd96dcda76b65fc
Status: Downloaded newer image for mainflux/http:latest
Pulling nginx (nginx:1.20.0-alpine)...
1.20.0-alpine: Pulling from library/nginx
540db60ca938: Already exists
3b88e9e9a17d: Pull complete
3019f265923a: Pull complete
fd39c3601fe1: Pull complete
4d0e0e4dee17: Pull complete
f81ca69df58f: Pull complete
Digest: sha256:e015192ec74937149dce3aa1feb8af016b7cce3a2896246b623cfd55c14939a6
Status: Downloaded newer image for nginx:1.20.0-alpine
Pulling coap-adapter (mainflux/coap:latest)...
latest: Pulling from mainflux/coap
898391aab224: Already exists
26c6c44f1138: Pull complete
Digest: sha256:2ffecdb8226e836efef0e7705da76b5404e11cb44c625722b57f074e0ef982c1
Status: Downloaded newer image for mainflux/coap:latest
Creating mainflux-jaeger ... done
Creating mainflux-auth-redis ... done
Creating mainflux-es-redis ... done
Creating mainflux-keto-db ... done
Creating mainflux-things-db ... done
Creating mainflux-broker ... done
Creating mainflux-vernemq ... done
Creating mainflux-users-db ... done
Creating mainflux-auth-db ... done
Creating mainflux-keto-migrate ... done
Creating mainflux-keto ... done
Creating mainflux-auth ... done
Creating mainflux-users ... done
Creating mainflux-things ... done
Creating mainflux-coap ... done
Creating mainflux-http ... done
Creating mainflux-mqtt ... done
Creating mainflux-nginx ... done
Attaching to mainflux-things-db, mainflux-auth-redis, mainflux-users-db, mainflux-auth-db, mainflux-broker, mainflux-keto-db, mainflux-es-redis, mainflux-jaeger, mainflux-vernemq, mainflux-keto, mainflux-keto-migrate, mainflux-auth, mainflux-users, mainflux-things, mainflux-mqtt, mainflux-http, mainflux-coap, mainflux-nginx
mainflux-auth-db | The files belonging to this database system will be owned by user "postgres".
mainflux-auth-db | This user must also own the server process.
mainflux-auth-db |
mainflux-auth-db | The database cluster will be initialized with locale "en_US.utf8".
mainflux-auth-db | The default database encoding has accordingly been set to "UTF8".
mainflux-auth-db | The default text search configuration will be set to "english".
mainflux-auth-db |
mainflux-auth-db | Data page checksums are disabled.
mainflux-auth-db |
mainflux-auth-db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
mainflux-auth-db | creating subdirectories ... ok
mainflux-auth-db | selecting dynamic shared memory implementation ... posix
mainflux-auth-db | selecting default max_connections ... 100
mainflux-auth-db | selecting default shared_buffers ... 128MB
mainflux-auth-db | selecting default time zone ... UTC
mainflux-auth-db | creating configuration files ... ok
mainflux-auth-db | running bootstrap script ... ok
mainflux-auth-db | performing post-bootstrap initialization ... sh: locale: not found
mainflux-auth-db | 2022-08-24 11:36:23.883 UTC [30] WARNING: no usable system locales were found
mainflux-auth-db | ok
mainflux-auth-db | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
mainflux-auth-db | You can change this by editing pg_hba.conf or using the option -A, or
mainflux-auth-db | --auth-local and --auth-host, the next time you run initdb.
mainflux-auth-db | ok
mainflux-auth-db |
mainflux-auth-db |
mainflux-auth-db | Success. You can now start the database server using:
mainflux-auth-db |
mainflux-auth-db | pg_ctl -D /var/lib/postgresql/data -l logfile start
mainflux-auth-db |
mainflux-auth-db | waiting for server to start....2022-08-24 11:37:17.191 UTC [35] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-auth-db | 2022-08-24 11:37:17.243 UTC [35] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-auth-db | 2022-08-24 11:37:17.290 UTC [36] LOG: database system was shut down at 2022-08-24 11:36:34 UTC
mainflux-auth-db | 2022-08-24 11:37:17.336 UTC [35] LOG: database system is ready to accept connections
mainflux-auth-db | done
mainflux-auth-db | server started
mainflux-auth-db | CREATE DATABASE
mainflux-auth-db |
mainflux-auth-db |
mainflux-auth-db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
mainflux-auth-db |
mainflux-auth-db | 2022-08-24 11:37:21.093 UTC [35] LOG: received fast shutdown request
mainflux-auth-db | waiting for server to shut down....2022-08-24 11:37:21.117 UTC [35] LOG: aborting any active transactions
mainflux-auth-db | 2022-08-24 11:37:21.136 UTC [35] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1
mainflux-auth-db | 2022-08-24 11:37:21.136 UTC [37] LOG: shutting down
mainflux-auth-db | 2022-08-24 11:37:21.289 UTC [35] LOG: database system is shut down
mainflux-auth-db | done
mainflux-auth-db | server stopped
mainflux-auth-db |
mainflux-auth-db | PostgreSQL init process complete; ready for start up.
mainflux-auth-db |
mainflux-auth-db | 2022-08-24 11:37:21.658 UTC [1] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-auth-db | 2022-08-24 11:37:21.666 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
mainflux-auth-db | 2022-08-24 11:37:21.667 UTC [1] LOG: listening on IPv6 address "::", port 5432
mainflux-auth-db | 2022-08-24 11:37:21.695 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-auth-db | 2022-08-24 11:37:21.723 UTC [49] LOG: database system was shut down at 2022-08-24 11:37:21 UTC
mainflux-auth-db | 2022-08-24 11:37:21.742 UTC [1] LOG: database system is ready to accept connections
mainflux-auth-redis | 1:C 24 Aug 2022 11:36:10.945 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
mainflux-auth-redis | 1:C 24 Aug 2022 11:36:10.962 # Redis version=6.2.2, bits=64, commit=00000000, modified=0, pid=1, just started
mainflux-auth-redis | 1:C 24 Aug 2022 11:36:10.962 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
mainflux-auth-redis | 1:M 24 Aug 2022 11:36:10.964 * monotonic clock: POSIX clock_gettime
mainflux-auth-redis | 1:M 24 Aug 2022 11:36:10.965 * Running mode=standalone, port=6379.
mainflux-auth-redis | 1:M 24 Aug 2022 11:36:10.965 # Server initialized
mainflux-auth-redis | 1:M 24 Aug 2022 11:36:10.965 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
mainflux-auth-redis | 1:M 24 Aug 2022 11:36:10.966 * Ready to accept connections
mainflux-broker | [1] 2022/08/24 11:36:15.227240 [INF] Starting nats-server
mainflux-broker | [1] 2022/08/24 11:36:15.227376 [INF] Version: 2.2.4
mainflux-broker | [1] 2022/08/24 11:36:15.227382 [INF] Git: [924b314]
mainflux-broker | [1] 2022/08/24 11:36:15.227387 [INF] Name: NDJSFDOUGGPN4M325TZNDSHTFE4USFBLIPVJFBYGQCXDERB3TWHAEB7C
mainflux-broker | [1] 2022/08/24 11:36:15.227392 [INF] ID: NDJSFDOUGGPN4M325TZNDSHTFE4USFBLIPVJFBYGQCXDERB3TWHAEB7C
mainflux-broker | [1] 2022/08/24 11:36:15.227408 [INF] Using configuration file: /etc/nats/nats.conf
mainflux-broker | [1] 2022/08/24 11:36:15.228938 [INF] Listening for client connections on 0.0.0.0:4222
mainflux-broker | [1] 2022/08/24 11:36:15.229234 [INF] Server is ready
mainflux-coap | 2022/08/24 11:37:33 The binary was build using Nats as the message broker
mainflux-coap | {"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-24T11:37:33.572944201Z"}
mainflux-coap | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:35.635718491Z"}
mainflux-es-redis | 1:C 24 Aug 2022 11:36:17.649 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
mainflux-es-redis | 1:C 24 Aug 2022 11:36:17.676 # Redis version=6.2.2, bits=64, commit=00000000, modified=0, pid=1, just started
mainflux-es-redis | 1:C 24 Aug 2022 11:36:17.676 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
mainflux-es-redis | 1:M 24 Aug 2022 11:36:17.678 * monotonic clock: POSIX clock_gettime
mainflux-es-redis | 1:M 24 Aug 2022 11:36:17.680 * Running mode=standalone, port=6379.
mainflux-es-redis | 1:M 24 Aug 2022 11:36:17.685 # Server initialized
mainflux-es-redis | 1:M 24 Aug 2022 11:36:17.685 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
mainflux-es-redis | 1:M 24 Aug 2022 11:36:17.686 * Ready to accept connections
mainflux-http | 2022/08/24 11:37:32 The binary was build using Nats as the message broker
mainflux-http | {"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-24T11:37:32.865935889Z"}
mainflux-http | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:34.900915931Z"}
mainflux-jaeger | 2022/08/24 11:36:18 maxprocs: Leaving GOMAXPROCS=1: CPU quota undefined
mainflux-jaeger | {"level":"info","ts":1661340979.4940429,"caller":"flags/service.go:116","msg":"Mounting metrics handler on admin server","route":"/metrics"}
mainflux-jaeger | {"level":"info","ts":1661340979.4970489,"caller":"flags/admin.go:120","msg":"Mounting health check on admin server","route":"/"}
mainflux-jaeger | {"level":"info","ts":1661340979.4972653,"caller":"flags/admin.go:126","msg":"Starting admin HTTP server","http-addr":":14269"}
mainflux-jaeger | {"level":"info","ts":1661340979.4974656,"caller":"flags/admin.go:112","msg":"Admin server started","http.host-port":"[::]:14269","health-status":"unavailable"}
mainflux-jaeger | {"level":"info","ts":1661340979.5556624,"caller":"memory/factory.go:61","msg":"Memory storage initialized","configuration":{"MaxTraces":0}}
mainflux-jaeger | {"level":"info","ts":1661340979.7985938,"caller":"server/grpc.go:76","msg":"Starting jaeger-collector gRPC server","grpc.host-port":":14250"}
mainflux-jaeger | {"level":"info","ts":1661340979.816617,"caller":"server/http.go:44","msg":"Starting jaeger-collector HTTP server","http host-port":":14268"}
mainflux-jaeger | {"level":"info","ts":1661340979.817288,"caller":"server/zipkin.go:48","msg":"Listening for Zipkin HTTP traffic","zipkin host-port":":0"}
mainflux-jaeger | {"level":"info","ts":1661340980.2862332,"caller":"grpc/builder.go:67","msg":"Agent requested insecure grpc connection to collector(s)"}
mainflux-jaeger | {"level":"info","ts":1661340980.3053691,"caller":"grpc@v1.29.1/clientconn.go:243","msg":"parsed scheme: \"\"","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3064094,"caller":"grpc@v1.29.1/clientconn.go:249","msg":"scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.307845,"caller":"grpc@v1.29.1/resolver_conn_wrapper.go:143","msg":"ccResolverWrapper: sending update to cc: {[{:14250 <nil> 0 <nil>}] <nil> <nil>}","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3088276,"caller":"grpc@v1.29.1/clientconn.go:667","msg":"ClientConn switching balancer to \"round_robin\"","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3098023,"caller":"grpc@v1.29.1/clientconn.go:682","msg":"Channel switches to new LB policy \"round_robin\"","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3104753,"caller":"grpc@v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3117537,"caller":"command-line-arguments/main.go:212","msg":"Starting agent"}
mainflux-jaeger | {"level":"info","ts":1661340980.3121135,"caller":"querysvc/query_service.go:137","msg":"Archive storage not created","reason":"archive storage not supported"}
mainflux-jaeger | {"level":"info","ts":1661340980.312243,"caller":"app/flags.go:143","msg":"Archive storage not initialized"}
mainflux-jaeger | {"level":"info","ts":1661340980.3132906,"caller":"app/server.go:163","msg":"Query server started","port":16686,"addr":":16686"}
mainflux-jaeger | {"level":"info","ts":1661340980.3135626,"caller":"healthcheck/handler.go:128","msg":"Health Check state change","status":"ready"}
mainflux-jaeger | {"level":"info","ts":1661340980.3137112,"caller":"app/server.go:232","msg":"Starting CMUX server","port":16686,"addr":":16686"}
mainflux-jaeger | {"level":"info","ts":1661340980.3138866,"caller":"grpc@v1.29.1/clientconn.go:417","msg":"Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3140056,"caller":"grpc@v1.29.1/clientconn.go:1193","msg":"Subchannel picks a new address \":14250\" to connect","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3145704,"caller":"grpc/builder.go:101","msg":"Checking connection to collector"}
mainflux-jaeger | {"level":"info","ts":1661340980.3146906,"caller":"grpc/builder.go:104","msg":"Agent collector connection state change","dialTarget":":14250","status":"CONNECTING"}
mainflux-jaeger | {"level":"info","ts":1661340980.3150582,"caller":"app/agent.go:69","msg":"Starting jaeger-agent HTTP server","http-port":5778}
mainflux-jaeger | {"level":"info","ts":1661340980.3153167,"caller":"app/server.go:208","msg":"Starting HTTP server","port":16686,"addr":":16686"}
mainflux-jaeger | {"level":"info","ts":1661340980.3155115,"caller":"app/server.go:221","msg":"Starting GRPC server","port":16686,"addr":":16686"}
mainflux-jaeger | {"level":"info","ts":1661340980.3321228,"caller":"grpc@v1.29.1/clientconn.go:1056","msg":"Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.332325,"caller":"base/balancer.go:200","msg":"roundrobinPicker: newPicker called with info: {map[0xc00056e3c0:{{:14250 <nil> 0 <nil>}}]}","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.3324513,"caller":"grpc@v1.29.1/clientconn.go:417","msg":"Channel Connectivity change to READY","system":"grpc","grpc_log":true}
mainflux-jaeger | {"level":"info","ts":1661340980.332556,"caller":"grpc/builder.go:104","msg":"Agent collector connection state change","dialTarget":":14250","status":"READY"}
mainflux-keto | time=2022-08-24T11:36:38Z level=info msg=No tracer configured - skipping tracing setup audience=application service_name=ORY Keto service_version=master
mainflux-keto-db | The files belonging to this database system will be owned by user "postgres".
mainflux-keto-db | This user must also own the server process.
mainflux-keto-db |
mainflux-keto-db | The database cluster will be initialized with locale "en_US.utf8".
mainflux-keto-db | The default database encoding has accordingly been set to "UTF8".
mainflux-keto-db | The default text search configuration will be set to "english".
mainflux-keto-db |
mainflux-keto-db | Data page checksums are disabled.
mainflux-keto-db |
mainflux-keto-db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
mainflux-keto-db | creating subdirectories ... ok
mainflux-keto-db | selecting dynamic shared memory implementation ... posix
mainflux-keto-db | selecting default max_connections ... 100
mainflux-keto-db | selecting default shared_buffers ... 128MB
mainflux-keto-db | selecting default time zone ... UTC
mainflux-keto-db | creating configuration files ... ok
mainflux-keto-db | running bootstrap script ... ok
mainflux-keto-db | performing post-bootstrap initialization ... sh: locale: not found
mainflux-keto-db | 2022-08-24 11:36:26.438 UTC [30] WARNING: no usable system locales were found
mainflux-keto-db | ok
mainflux-keto-db | syncing data to disk ... initdb: warning: enabling "trust" authentication for local connections
mainflux-keto-db | You can change this by editing pg_hba.conf or using the option -A, or
mainflux-keto-db | --auth-local and --auth-host, the next time you run initdb.
mainflux-keto-db | ok
mainflux-keto-db |
mainflux-keto-db |
mainflux-keto-db | Success. You can now start the database server using:
mainflux-keto-db |
mainflux-keto-db | pg_ctl -D /var/lib/postgresql/data -l logfile start
mainflux-keto-db |
mainflux-keto-db | waiting for server to start....2022-08-24 11:37:16.927 UTC [35] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-keto-db | 2022-08-24 11:37:16.970 UTC [35] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-keto-db | 2022-08-24 11:37:17.027 UTC [36] LOG: database system was shut down at 2022-08-24 11:36:36 UTC
mainflux-keto-db | 2022-08-24 11:37:17.147 UTC [35] LOG: database system is ready to accept connections
mainflux-keto-db | done
mainflux-keto-db | server started
mainflux-keto-db | CREATE DATABASE
mainflux-keto-db |
mainflux-keto-db |
mainflux-keto-db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
mainflux-keto-db |
mainflux-keto-db | 2022-08-24 11:37:20.750 UTC [35] LOG: received fast shutdown request
mainflux-keto-db | waiting for server to shut down....2022-08-24 11:37:20.776 UTC [35] LOG: aborting any active transactions
mainflux-keto-db | 2022-08-24 11:37:20.786 UTC [35] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1
mainflux-keto-db | 2022-08-24 11:37:20.810 UTC [37] LOG: shutting down
mainflux-keto-db | 2022-08-24 11:37:21.002 UTC [35] LOG: database system is shut down
mainflux-keto-db | done
mainflux-keto-db | server stopped
mainflux-keto-db |
mainflux-keto-db | PostgreSQL init process complete; ready for start up.
mainflux-keto-db |
mainflux-keto-db | 2022-08-24 11:37:21.325 UTC [1] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-keto-db | 2022-08-24 11:37:21.396 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
mainflux-keto-db | 2022-08-24 11:37:21.396 UTC [1] LOG: listening on IPv6 address "::", port 5432
mainflux-keto-db | 2022-08-24 11:37:21.434 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-keto-db | 2022-08-24 11:37:21.466 UTC [49] LOG: database system was shut down at 2022-08-24 11:37:20 UTC
mainflux-keto-db | 2022-08-24 11:37:21.595 UTC [1] LOG: database system is ready to accept connections
mainflux-keto-migrate | time=2022-08-24T11:36:38Z level=info msg=No tracer configured - skipping tracing setup audience=application service_name=ORY Keto service_version=master
mainflux-mqtt | 2022/08/24 11:37:31 The binary was build using Nats as the message broker
mainflux-nginx | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
mainflux-nginx | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
mainflux-nginx | /docker-entrypoint.sh: Launching /docker-entrypoint.d/entrypoint.sh
mainflux-things-db | The files belonging to this database system will be owned by user "postgres".
mainflux-things-db | This user must also own the server process.
mainflux-things-db |
mainflux-things-db | The database cluster will be initialized with locale "en_US.utf8".
mainflux-things-db | The default database encoding has accordingly been set to "UTF8".
mainflux-things-db | The default text search configuration will be set to "english".
mainflux-things-db |
mainflux-things-db | Data page checksums are disabled.
mainflux-things-db |
mainflux-things-db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
mainflux-things-db | creating subdirectories ... ok
mainflux-things-db | selecting dynamic shared memory implementation ... posix
mainflux-things-db | selecting default max_connections ... 100
mainflux-things-db | selecting default shared_buffers ... 128MB
mainflux-things-db | selecting default time zone ... UTC
mainflux-things-db | creating configuration files ... ok
mainflux-things-db | running bootstrap script ... ok
mainflux-things-db | performing post-bootstrap initialization ... sh: locale: not found
mainflux-things-db | 2022-08-24 11:36:11.980 UTC [30] WARNING: no usable system locales were found
mainflux-things-db | ok
mainflux-things-db | syncing data to disk ... ok
mainflux-things-db |
mainflux-things-db |
mainflux-things-db | Success. You can now start the database server using:
mainflux-things-db |
mainflux-things-db | pg_ctl -D /var/lib/postgresql/data -l logfile start
mainflux-things-db |
mainflux-things-db | initdb: warning: enabling "trust" authentication for local connections
mainflux-things-db | You can change this by editing pg_hba.conf or using the option -A, or
mainflux-things-db | --auth-local and --auth-host, the next time you run initdb.
mainflux-things-db | waiting for server to start....2022-08-24 11:37:14.811 UTC [35] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-things-db | 2022-08-24 11:37:15.056 UTC [35] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-things-db | .2022-08-24 11:37:15.124 UTC [36] LOG: database system was shut down at 2022-08-24 11:36:22 UTC
mainflux-things-db | 2022-08-24 11:37:15.413 UTC [35] LOG: database system is ready to accept connections
mainflux-things-db | done
mainflux-things-db | server started
mainflux-things-db | CREATE DATABASE
mainflux-things-db |
mainflux-things-db |
mainflux-things-db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
mainflux-things-db |
mainflux-things-db | 2022-08-24 11:37:20.652 UTC [35] LOG: received fast shutdown request
mainflux-things-db | waiting for server to shut down....2022-08-24 11:37:20.678 UTC [35] LOG: aborting any active transactions
mainflux-things-db | 2022-08-24 11:37:20.713 UTC [35] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1
mainflux-things-db | 2022-08-24 11:37:20.714 UTC [37] LOG: shutting down
mainflux-things-db | 2022-08-24 11:37:20.949 UTC [35] LOG: database system is shut down
mainflux-things-db | done
mainflux-things-db | server stopped
mainflux-things-db |
mainflux-things-db | PostgreSQL init process complete; ready for start up.
mainflux-things-db |
mainflux-things-db | 2022-08-24 11:37:21.278 UTC [1] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-things-db | 2022-08-24 11:37:21.302 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
mainflux-things-db | 2022-08-24 11:37:21.302 UTC [1] LOG: listening on IPv6 address "::", port 5432
mainflux-things-db | 2022-08-24 11:37:21.356 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-things-db | 2022-08-24 11:37:21.395 UTC [49] LOG: database system was shut down at 2022-08-24 11:37:20 UTC
mainflux-things-db | 2022-08-24 11:37:21.418 UTC [1] LOG: database system is ready to accept connections
mainflux-users-db | The files belonging to this database system will be owned by user "postgres".
mainflux-users-db | This user must also own the server process.
mainflux-users-db |
mainflux-users-db | The database cluster will be initialized with locale "en_US.utf8".
mainflux-users-db | The default database encoding has accordingly been set to "UTF8".
mainflux-users-db | The default text search configuration will be set to "english".
mainflux-users-db |
mainflux-users-db | Data page checksums are disabled.
mainflux-users-db |
mainflux-users-db | fixing permissions on existing directory /var/lib/postgresql/data ... ok
mainflux-users-db | creating subdirectories ... ok
mainflux-users-db | selecting dynamic shared memory implementation ... posix
mainflux-users-db | selecting default max_connections ... 100
mainflux-users-db | selecting default shared_buffers ... 128MB
mainflux-users-db | selecting default time zone ... UTC
mainflux-users-db | creating configuration files ... ok
mainflux-users-db | running bootstrap script ... ok
mainflux-users-db | performing post-bootstrap initialization ... sh: locale: not found
mainflux-users-db | 2022-08-24 11:36:22.609 UTC [30] WARNING: no usable system locales were found
mainflux-users-db | ok
mainflux-users-db | syncing data to disk ... ok
mainflux-users-db |
mainflux-users-db |
mainflux-users-db | Success. You can now start the database server using:
mainflux-users-db |
mainflux-users-db | pg_ctl -D /var/lib/postgresql/data -l logfile start
mainflux-users-db |
mainflux-users-db | initdb: warning: enabling "trust" authentication for local connections
mainflux-users-db | You can change this by editing pg_hba.conf or using the option -A, or
mainflux-users-db | --auth-local and --auth-host, the next time you run initdb.
mainflux-users-db | waiting for server to start....2022-08-24 11:37:15.646 UTC [35] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-users-db | 2022-08-24 11:37:15.676 UTC [35] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-users-db | 2022-08-24 11:37:15.832 UTC [36] LOG: database system was shut down at 2022-08-24 11:36:32 UTC
mainflux-users-db | 2022-08-24 11:37:15.930 UTC [35] LOG: database system is ready to accept connections
mainflux-users-db | done
mainflux-users-db | server started
mainflux-users-db | CREATE DATABASE
mainflux-users-db |
mainflux-users-db |
mainflux-users-db | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
mainflux-users-db |
mainflux-users-db | 2022-08-24 11:37:20.723 UTC [35] LOG: received fast shutdown request
mainflux-users-db | waiting for server to shut down....2022-08-24 11:37:20.743 UTC [35] LOG: aborting any active transactions
mainflux-users-db | 2022-08-24 11:37:20.759 UTC [35] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1
mainflux-users-db | 2022-08-24 11:37:20.785 UTC [37] LOG: shutting down
mainflux-users-db | 2022-08-24 11:37:21.035 UTC [35] LOG: database system is shut down
mainflux-users-db | done
mainflux-users-db | server stopped
mainflux-users-db |
mainflux-users-db | PostgreSQL init process complete; ready for start up.
mainflux-users-db |
mainflux-users-db | 2022-08-24 11:37:21.345 UTC [1] LOG: starting PostgreSQL 13.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
mainflux-users-db | 2022-08-24 11:37:21.359 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
mainflux-users-db | 2022-08-24 11:37:21.360 UTC [1] LOG: listening on IPv6 address "::", port 5432
mainflux-users-db | 2022-08-24 11:37:21.405 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
mainflux-users-db | 2022-08-24 11:37:21.451 UTC [49] LOG: database system was shut down at 2022-08-24 11:37:20 UTC
mainflux-users-db | 2022-08-24 11:37:21.478 UTC [1] LOG: database system is ready to accept connections
mainflux-vernemq | config is OK
mainflux-vernemq | -config /vernemq/data/generated.configs/app.2022.08.24.11.36.36.config -args_file /vernemq/bin/../etc/vm.args -vm_args /vernemq/bin/../etc/vm.args
mainflux-vernemq | Exec: /vernemq/bin/../erts-12.3.1/bin/erlexec -boot /vernemq/bin/../releases/1.12.5/vernemq -config /vernemq/data/generated.configs/app.2022.08.24.11.36.36.config -args_file /vernemq/bin/../etc/vm.args -vm_args /vernemq/bin/../etc/vm.args -pa /vernemq/bin/../lib/erlio-patches -- console -noshell -noinput
mainflux-vernemq | Root: /vernemq/bin/..
mainflux-vernemq | 11:37:21.776 [error] can't reconfigure mqtts listener({127,0,0,1}, 1883) with Options [{max_connections,10000},{nr_of_acceptors,10},{mountpoint,[]},{depth,1},{eccs,[secp521r1,brainpoolP512r1,brainpoolP384r1,secp384r1,brainpoolP256r1,secp256k1,secp256r1,secp224k1,secp224r1,secp192k1,secp192r1,secp160k1,secp160r1,secp160r2]},{require_certificate,false},{tls_version,'tlsv1.2'},{use_identity_as_username,false},{allowed_protocol_versions,[3,4,131]},{allow_anonymous_override,false}] due to {already_started,<0.435.0>}
mainflux-http exited with code 1
mainflux-coap exited with code 1
mainflux-http | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:46.879368634Z"}
mainflux-coap | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:47.532017853Z"}
mainflux-http | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:52.56997397Z"}
mainflux-coap | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:53.181598801Z"}
mainflux-http exited with code 1
mainflux-coap exited with code 1
mainflux-http | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:58.762259838Z"}
mainflux-coap | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.19.0.10:4222: i/o timeout","ts":"2022-08-24T11:37:59.164928969Z"}
mainflux-http exited with code 1
Can you please share the logs from docker logs mainflux-http
and docker logs mainflux-things
? I have just tested sudo make run
from a fresh VM with Manjaro and the same versions of Docker and Compose and it is running without problems.
docker logs mainflux-http:
napster@salmon:~$ sudo docker logs mainflux-http
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:27:44.848220468Z"}
2022/08/25 03:27:44 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:27:46.920137462Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:27:51.243082167Z"}
2022/08/25 03:27:51 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:27:53.259208958Z"}
2022/08/25 03:27:57 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:27:57.516720226Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:27:59.544498662Z"}
2022/08/25 03:28:03 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:28:03.458236517Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:28:05.486165017Z"}
2022/08/25 03:28:09 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:28:09.449913727Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:28:11.495825627Z"}
2022/08/25 03:28:15 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:28:15.515859188Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:28:17.562104803Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:28:23.15955726Z"}
2022/08/25 03:28:23 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:28:25.184422067Z"}
2022/08/25 03:28:33 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:28:33.954653741Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:28:36.009558949Z"}
2022/08/25 03:28:51 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:28:51.44822029Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:28:53.489608986Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:29:21.581275404Z"}
2022/08/25 03:29:21 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:29:23.616117436Z"}
2022/08/25 03:30:17 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:30:17.289425344Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:30:19.350117314Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:31:21.739980052Z"}
2022/08/25 03:31:21 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:31:23.7627371Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:32:26.156310937Z"}
2022/08/25 03:32:26 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:32:28.21152459Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:33:30.496602863Z"}
2022/08/25 03:33:30 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:33:32.531760154Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:34:34.559680004Z"}
2022/08/25 03:34:34 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:34:36.603740146Z"}
2022/08/25 03:35:40 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:35:40.512818355Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:35:42.555788065Z"}
2022/08/25 03:36:44 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:36:44.134750309Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:36:46.298994538Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:37:47.812145949Z"}
2022/08/25 03:37:47 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:37:49.847494195Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:38:51.302769459Z"}
2022/08/25 03:38:51 The binary was build using Nats as the message broker
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:38:53.349921986Z"}
2022/08/25 03:39:54 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:39:54.942275307Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:39:56.980776712Z"}
2022/08/25 03:41:01 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:41:01.412648796Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:41:03.460339942Z"}
2022/08/25 03:42:05 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-08-25T03:42:05.190047417Z"}
{"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.7:4222: i/o timeout","ts":"2022-08-25T03:42:07.20440974Z"}
and docker logs mainflux-things:
napster@salmon:~$ sudo docker logs mainflux-things
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:29:48.49790217Z"}
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:32:03.666601424Z"}
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:34:18.832307442Z"}
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:36:31.952081058Z"}
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:38:45.072026592Z"}
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:40:58.195408214Z"}
{"level":"error","message":"Failed to connect to postgres: dial tcp 172.20.0.2:5432: connect: connection timed out","ts":"2022-08-25T03:43:15.409155815Z"}
Last lines from all logs:
mainflux-auth-db - database system is ready to accept connections
mainflux-auth-redis - Ready to accept connections
mainflux-broker - Server is ready
mainflux-es-redis - Ready to accept connections
mainflux-jaeger - "status":"READY"
mainflux-things-db - database system is ready to accept connections
mainflux-users-db - database system is ready to accept connections
mainflux-keto-db - database system is ready to accept connections
mainflux-nginx - /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/entrypoint.sh
mainflux-auth - Failed to connect to postgres
mainflux-keto - failed to connect to host=keto-db user=mainflux database=keto
: dial error
mainflux-keto-migrate - unable to initialize service registry: failed to connect to host=keto-db user=mainflux database=keto
: dial error
mainflux-users - Failed to connect to postgres
mainflux-vernemq - [error] can't reconfigure mqtts listener
mainflux-mqtt - "level":"info","message":"Broker not ready: Get \"http://vernemq:8888/health\"
mainflux-http - Failed to connect to message broker
mainflux-coap - Failed to connect to message broker / gRPC communication is not encrypted
mainflux-things - Failed to connect to postgres
This looks like a networking problem. Can you try updating Compose to the latest version, pruning everything, and starting again fresh?
I checked the versions and updated (but it was already the latest), and now:
docker-compose version 1.29.2, build unknown
napster@salmon:~$ docker compose version
Docker Compose version v2.6.0
Is it normal to have two different versions of Compose?
Cleaned again with sudo make pv=true cleandocker
. And... I think Docker crashed:
# Stops containers and removes containers, networks, volumes, and images created by up
docker-compose -f docker/docker-compose.yml down --rmi all -v --remove-orphans
Stopping mainflux-keto-migrate ... error
Stopping mainflux-things-db ... error
Stopping mainflux-users-db ... error
Stopping mainflux-vernemq ... error
Stopping mainflux-es-redis ... error
Stopping mainflux-keto-db ... error
Stopping mainflux-auth-db ... error
Stopping mainflux-auth-redis ... error
ERROR: for mainflux-auth-redis cannot stop container: 2f2e501cd66beb0d2aad1ef45271e09eb70420cd8bd652f62301385bcb928af6: permission denied
Restored from an image, cleaned with sudo make pv=true cleandocker
, and checked again:
Docker Compose version v2.6.0
napster@salmon:~/mainflux$ docker-compose -v
docker-compose version 1.29.2, build unknown
napster@salmon:~/mainflux$ docker version
Client: Docker Engine - Community
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c701
Built: Mon Jun 6 23:02:46 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version": dial unix /var/run/docker.sock: connect: permission denied
but everything is the same
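As a side note, the permission-denied error on /var/run/docker.sock above is a separate problem from the Mainflux setup: the non-root user is not in the docker group. A minimal sketch of the usual fix (assumes a standard Docker Engine install that created a docker group):

```shell
# Check whether the current user is already in the "docker" group;
# if not, add it (takes effect after a new login session).
if ! id -nG "$USER" | grep -qw docker; then
    sudo usermod -aG docker "$USER"
    echo "added $USER to the docker group; log out and back in"
fi
# Afterwards this should work without sudo:
docker version
```

This only removes the need for sudo; it does not change anything about the container networking.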
Also, I tried to install the CLI using wget -O- https://github.com/mainflux/mainflux/releases/download/0.7.0/mainflux-cli_v0.7.0_linux-amd64.tar.gz | tar xvz -C $GOBIN
and it did not install:
Try 'tar --help' or 'tar --usage' for more information.
--2022-08-25 09:09:30-- https://github.com/mainflux/mainflux/releases/download/0.7.0/mainflux-cli_v0.7.0_linux-amd64.tar.gz
Resolving github.com (github.com)... 140.82.121.4
Connecting to github.com (github.com)|140.82.121.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/38644318/4ab4c700-fb35-11e8-93ab-4603d33dd7dc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220825%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220825T090930Z&X-Amz-Expires=300&X-Amz-Signature=6a144ec493a8608c1e32fdf88021a1deea091695fdfc36104359b72776c33c4c&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=38644318&response-content-disposition=attachment%3B%20filename%3Dmainflux-cli_v0.7.0_linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2022-08-25 09:09:30-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/38644318/4ab4c700-fb35-11e8-93ab-4603d33dd7dc?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20220825%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220825T090930Z&X-Amz-Expires=300&X-Amz-Signature=6a144ec493a8608c1e32fdf88021a1deea091695fdfc36104359b72776c33c4c&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=38644318&response-content-disposition=attachment%3B%20filename%3Dmainflux-cli_v0.7.0_linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2646053 (2,5M) [application/octet-stream]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s in 0,007s
Cannot write to ‘-’ (Success).
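For the record, the wget failure above is most likely not a network problem: when $GOBIN is unset, tar xvz -C $GOBIN expands to -C with no directory, tar exits immediately, and wget reports "Cannot write to '-'" because the pipe closed. A sketch that avoids the empty expansion (the ~/bin fallback is my choice, not from any docs):

```shell
# Fall back to ~/bin when GOBIN is empty or unset, so that tar
# always receives a real directory after -C.
dest="${GOBIN:-$HOME/bin}"
mkdir -p "$dest"
wget -O- https://github.com/mainflux/mainflux/releases/download/0.7.0/mainflux-cli_v0.7.0_linux-amd64.tar.gz \
    | tar xvz -C "$dest"
```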
@kitty7c6 Looks like there are several issues but the error Failed to connect to message broker: dial tcp 172.19.0.10:4222
is very probably related to this issue: https://github.com/mainflux/mainflux/pull/1650
The docker-compose is currently broken with latest-tag images because the MF_NATS_URL
env var was recently replaced by MF_BROKER_URL
but the change was not applied in the docker-compose.
So, I should add to docker-compose.yml:
MF_BROKER_TYPE=nats
## Nats
MF_NATS_PORT=4222
MF_BROKER_URL=nats://broker:${MF_NATS_PORT}
MF_NATS_URL=nats://broker:${MF_NATS_PORT}
and remove these lines (marked THIS ->) from the Makefile:
ifeq ("$(MF_BROKER_TYPE)", "rabbitmq")
sed -i "s,file: brokers/.*.yml,file: brokers/rabbitmq.yml," docker/docker-compose.yml
THIS -> sed -i "s,MF_BROKER_URL: .*,MF_BROKER_URL: $$\{MF_RABBITMQ_URL\}," docker/docker-compose.yml
else ifeq ("$(MF_BROKER_TYPE)", "nats")
sed -i "s,file: brokers/.*.yml,file: brokers/nats.yml," docker/docker-compose.yml
THIS -> sed -i "s,MF_BROKER_URL: .*,MF_BROKER_URL: $$\{MF_NATS_URL\}," docker/docker-compose.yml
Right? Then prune everything and make run?
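Before editing the Makefile, it may be easier to first confirm what the substitution actually produced. docker-compose config renders the composition with every variable expanded, so a broken or empty broker URL is visible immediately (the grep pattern here is just an illustration):

```shell
# Render the effective compose file and show the broker URL each
# service ends up with; an empty value after the colon means the
# env var substitution failed.
docker-compose -f docker/docker-compose.yml config | grep -n "MF_BROKER_URL"
```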
@manuio MF_BROKER_URL
should only affect add-ons, not the core docker-compose
. However, since no service is able to reach any other service, it looks like a setup issue. @kitty7c6 Can you try removing all the running containers docker rm $(docker ps -a -q) -f
and all MF volumes docker volume rm $(docker volume ls | grep "mainflux") -f
and MF network docker network rm docker_mainflux-base-net
. Also, remove all MF images or make sure you use all the latest.
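The cleanup steps above can be combined into one teardown sketch; the || true guards and the final image removal are additions of mine, not part of the official Makefile targets:

```shell
#!/bin/sh
# Remove all containers, Mainflux volumes, the Mainflux network, and
# Mainflux images, so the next "up" starts completely fresh.
docker rm -f $(docker ps -aq) 2>/dev/null || true
docker volume rm -f $(docker volume ls -q | grep mainflux) 2>/dev/null || true
docker network rm docker_mainflux-base-net 2>/dev/null || true
# Drop the images too, so ":latest" tags are re-pulled:
docker images --format '{{.Repository}}:{{.Tag}}' \
    | grep mainflux | xargs -r docker rmi -f
```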
@dborovcanin @kitty7c6 Oh yes, my bad, I thought it wasn't done for core services. In that case, maybe you are not using latest
images but 0.13.0
while using the latest docker-compose. What is sure is that the MF_BROKER_URL env var (or MF_NATS_URL) is not properly set.
@manuio I use the master
repo - I got the link as in the https://www.youtube.com/watch?v=H4BPKfvCZLk&ab_channel=MainfluxIoT tutorial.
I'm talking about the docker images tag. You can change it here: https://github.com/mainflux/mainflux/blob/master/docker/.env#L370
As @dborovcanin proposed the best is to start from a fresh install. Can you try docker-compose -f docker/docker-compose down --rmi all -v --remove-orphans
and then docker-compose -f docker/docker-compose up
?
OK, I have already run docker rm $(docker ps -a -q) -f
, docker volume rm $(docker volume ls | grep "mainflux") -f
and docker network rm docker_mainflux-base-net
as @dborovcanin said
This docker-compose -f docker/docker-compose down --rmi all -v --remove-orphans
does not work:
ERROR: .FileNotFoundError: [Errno 2] No such file or directory: './docker/docker-compose'
The following works:
docker-compose -f docker/docker-compose.yml down --rmi all -v --remove-orphans
In my .env file:
# Message Broker
MF_BROKER_TYPE=nats
## Nats
MF_NATS_PORT=4222
MF_NATS_URL=nats://broker:${MF_NATS_PORT}
.....
# Docker image tag
MF_RELEASE_TAG=latest
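If the suspicion about mismatched latest images turns out to be right, the tag can be pinned in docker/.env the same way the Makefile patches the compose file; this sed line is my own sketch, not an official target:

```shell
# Pin all Mainflux images to the 0.13.0 release instead of :latest.
sed -i 's,^MF_RELEASE_TAG=.*,MF_RELEASE_TAG=0.13.0,' docker/.env
# Verify the change took effect:
grep MF_RELEASE_TAG docker/.env
```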
Next stage
sudo docker-compose -f docker/docker-compose.yml up
Creating network "docker_mainflux-base-net" with driver "bridge"
Creating volume "docker_mainflux-auth-db-volume" with default driver
Creating volume "docker_mainflux-users-db-volume" with default driver
Creating volume "docker_mainflux-things-db-volume" with default driver
Creating volume "docker_mainflux-keto-db-volume" with default driver
Creating volume "docker_mainflux-auth-redis-volume" with default driver
Creating volume "docker_mainflux-es-redis-volume" with default driver
Creating volume "docker_mainflux-mqtt-broker-volume" with default driver
Pulling keto-db (postgres:13.3-alpine)...
13.3-alpine: Pulling from library/postgres
29291e31a76a: Pull complete
c7f8a1ea71cb: Pull complete
64d8912b293d: Pull complete
0d265a24fb71: Downloading [=================> ] 25.73MB/72
What I've got: all containers were created, but the errors are still the same, including
mainflux-vernemq | 10:33:02.164 [error] can't reconfigure mqtts listener({127,0,0,1}, 1883) with Options
[{max_connections,10000},{nr_of_acceptors,10},{mountpoint,[]},{depth,1},{eccs,
[secp521r1,brainpoolP512r1,brainpoolP384r1,secp384r1,brainpoolP256r1,secp256k1,secp256r1,secp224k1,secp224r1,secp
192k1,secp192r1,secp160k1,secp160r1,secp160r2]},{require_certificate,false},{tls_version,'tlsv1.2'},
{use_identity_as_username,false},{allowed_protocol_versions,[3,4,131]},
{allow_anonymous_override,false}] due to {already_started,<0.435.0>}
@kitty7c6 I'm not sure what's going on with latest images... You can try building images locally with make dockers
and run the composition again.
Does the Postgres container ever run? If not, maybe you have a locally installed Postgres that has taken the port and prevents the container from booting.
@drasko I don't see a postgres container in docker-compose.yml (I can't find such a container_name). I found only image: postgres:13.3-alpine
. And, for example, in the log it looks like:
Pulling keto-db (postgres:13.3-alpine)...
13.3-alpine: Pulling from library/postgres
29291e31a76a: Pull complete
c7f8a1ea71cb: Pull complete
64d8912b293d: Pull complete
0d265a24fb71: Pull complete
06559c1681e8: Pull complete
ed849f5f685e: Pull complete
3a646df07e94: Pull complete
1e40d492b730: Pull complete
Digest: sha256:e98a69a836391fe94d889a6ccfbb21257b93f47b2794da114a82ef23e342342f
Status: Downloaded newer image for postgres:13.3-alpine
Containers with image: postgres
(mainflux-things-db, mainflux-users-db, mainflux-auth-db, mainflux-keto-db) were created.
mainflux-auth-db | 2022-08-25 08:39:14.978 UTC [1] LOG: database system is ready to accept connections
mainflux-keto-db | 2022-08-25 08:38:37.543 UTC [1] LOG: database system is ready to accept connections
mainflux-things-db | 2022-08-25 08:39:15.194 UTC [1] LOG: database system is ready to accept connections
mainflux-users-db | 2022-08-25 08:39:15.331 UTC [1] LOG: database system is ready to accept connections
@manuio done, but nothing changed
@manuio @drasko I again did all the steps from the very beginning, wrote them down in steps.txt, and took two logs: one from the master repo (this version got stuck) and one from version 0.13.0.
Both variants give me the same error (about Postgres):
mainflux-auth | {"level":"error","message":"Failed to connect to postgres: dial tcp 172.18.0.8:5432: connect: connection refused","ts":"2022-08-26T07:11:01.842162995Z"}
mainflux-auth | {"level":"error","message":"Failed to connect to postgres: dial tcp 172.18.0.4:5432: connect: connection refused","ts":"2022-08-26T07:32:48.00783636Z"}
How and where do I need to get Postgres?
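To answer the question directly: Postgres is not a single container here — each core service gets its own *-db container built from the postgres:13.3-alpine image (mainflux-auth-db, mainflux-things-db, and so on), pulled automatically by compose. Whether one of them answers on the Docker network can be probed with pg_isready; the auth-db hostname below is my assumption of the compose service name, so check it against docker-compose.yml:

```shell
# Run a throwaway postgres image on the Mainflux network and ask the
# auth database whether it is accepting connections.
db_host="auth-db"   # assumed service name from docker-compose.yml
docker run --rm --network docker_mainflux-base-net postgres:13.3-alpine \
    pg_isready -h "$db_host" -p 5432
# On success pg_isready prints "<host>:5432 - accepting connections".
```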
@kitty7c6 Did you find a solution?
@manuio @drasko @dborovcanin No, still not. Hello again. I tried another PC, but with the same programs and steps: Windows 10 + VirtualBox 6.1.38 + Ubuntu 22.04 Server + Docker (version 20.10) + docker-compose (version 1.29) + Mainflux (git master). And got warnings like this:
mainflux-auth-redis | 1:C 14 Sep 2022 09:03:59.138 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
And errors:
mainflux-vernemq | 09:04:53.011 [error] can't reconfigure mqtts listener({127,0,0,1}, 1883) with Options [{max_connections,10000},{nr_of_acceptors,10},{mountpoint,[]},{dep th,1},{eccs,[secp521r1,brainpoolP512r1,brainpoolP384r1,secp384r1,brainpoolP256r1,secp256k1,secp256r1,secp224k1,secp224r1,secp192k1,secp192r1,secp160k1,secp160r1,secp160r2] },{require_certificate,false},{tls_version,'tlsv1.2'},{use_identity_as_username,false},{allowed_protocol_versions,[3,4,131]},{allow_anonymous_override,false}] due to {alre ady_started,<0.435.0>}
mainflux-keto-migrate | Error: unable to initialize service registry: failed to connect to `host=keto-db user=mainflux database=keto`: dial error (dial tcp 172.20.0.8:5432 : connect: connection timed out)
mainflux-keto | Error: unable to initialize service registry: failed to connect to `host=keto-db user=mainflux database=keto`: dial error (dial tcp 172.20.0.8:5432: conn ect: connection timed out)
mainflux-http | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.3:4222: i/o timeout","ts":"2022-09-14T09:12:38.956707402Z"}
mainflux-coap | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.3:4222: i/o timeout","ts":"2022-09-14T09:12:39.341177675Z"}
And info that the broker is not ready:
mainflux-mqtt | {"level":"info","message":"Broker not ready: Get \"http://vernemq:8888/health\": dial tcp 172.20.0.10:8888: i/o timeout, next try in 8.383673429s","ts":" 2022-09-14T09:24:06.927363948Z"}
Above you said it looks like a network problem. I checked ifconfig
and found
br-b7dac1004a78: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.20.0.1 netmask 255.255.0.0 broadcast 172.20.255.255
and
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
Network b7dac1004a78 is docker_mainflux-base-net, and all containers are there (checked with
napster@salmon:~/mainflux$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
6ea55d6ad8fa bridge bridge local
b7dac1004a78 docker_mainflux-base-net bridge local
441fbf35c703 host host local
80bea4f43c58 none null local
and sudo docker network inspect -v docker_mainflux-base-net).
I can ping any IP from the errors (for example, 172.20.0.3), and all is OK.
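A successful ping only proves ICMP gets through; the timeouts in the logs are on TCP ports 4222 and 5432, so it is worth probing those specifically from inside the same network. A sketch using BusyBox's nc (flags vary by BusyBox build, and the IP is taken from the error messages above):

```shell
# Test the NATS port from a container attached to the Mainflux network;
# "open" within 2 seconds means TCP to the broker is not being dropped.
docker run --rm --network docker_mainflux-base-net busybox \
    sh -c 'nc -w 2 172.20.0.3 4222 < /dev/null && echo open || echo closed'
```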
Then I read this https://github.com/zalando/zalenium/issues/440 about com.docker.network.bridge.enable_icc
I checked sudo iptables -t filter -L -v
and found DROP rules in the second chain:
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- br-b7dac1004a78 !br-b7dac1004a78 anywhere anywhere
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 anywhere anywhere
2502 150K RETURN all -- any any anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- any br-b7dac1004a78 anywhere anywhere
0 0 DROP all -- any docker0 anywhere anywhere
22 1320 RETURN all -- any any anywhere anywhere
Could my problem be here? I can't find how to check the enable_icc option on docker_mainflux-base-net.
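On checking enable_icc: the setting is stored as a driver option on the network object itself, so it can be read with docker network inspect instead of decoding the iptables chains:

```shell
# Print the driver options of the Mainflux bridge network. An empty
# map means Docker defaults (ICC enabled); an explicit
# "com.docker.network.bridge.enable_icc":"false" would explain DROPs.
docker network inspect docker_mainflux-base-net --format '{{json .Options}}'
```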
@manuio Hello! Today I've got some news. First, I tried to create docker_mainflux-base-net with com.docker.network.bridge.enable_icc=true by adding the following to the yml file:
mainflux-base-net:
driver: bridge
driver_opts:
com.docker.network.bridge.enable_icc: "true"
It turns off the DROP rules, but it didn't solve the problem.
Second, I read on https://mainflux.com/faq.html that the recommended OS is Debian or CentOS, so I installed Debian 11.5 (Windows 10 + VirtualBox 6.1.38 + Debian 11.5 + the latest Docker and docker-compose 1.29.2) and Mainflux. And problems like mainflux-http | {"level":"error","message":"Failed to connect to message broker: dial tcp 172.20.0.3:4222: i/o timeout","ts":"2022-09-14T09:12:38.956707402Z"}
are gone.
I still have errors:
mainflux-keto-db | 2022-09-20 03:48:17.003 UTC [56] ERROR: relation "keto_namespace_0000000000_migrations" does not exist at character 67
and the same error at characters 47 and 21
mainflux-keto-migrate | time=2022-09-20T03:48:17Z level=debug msg=An error occurred while checking for the legacy migration table, maybe it does not exist yet? Trying to create. audience=application error=map[message:ERROR: relation "keto_namespace_0000000000_migrations" does not exist (SQLSTATE 42P01)] migration_table=keto_namespace_0000000000_migrations service_name=ORY Keto service_version=master
mainflux-users | {"level":"error","message":"failed to create admin user: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 172.18.0.14:8181: connect: connection refused\"","ts":"2022-09-20T03:48:16.736275376Z"}
and
mainflux-vernemq | 03:48:26.927 [error] can't reconfigure mqtts listener({127,0,0,1}, 1883) with Options [{max_connections,10000},{nr_of_acceptors,10},{mountpoint,[]},{depth,1},{eccs,[secp521r1,brainpoolP512r1,brainpoolP384r1,secp384r1,brainpoolP256r1,secp256k1,secp256r1,secp224k1,secp224r1,secp192k1,secp192r1,secp160k1,secp160r1,secp160r2]},{require_certificate,false},{tls_version,'tlsv1.2'},{use_identity_as_username,false},{allowed_protocol_versions,[3,4,131]},{allow_anonymous_override,false}] due to {already_started,<0.435.0>}
Are these problems critical?
@kitty7c6 Yes, it's critical if you can't create the admin user. I'm not sure why you have all these network issues. I recommend trying without a virtual machine.
@manuio Is the admin user admin@example.com 1245678? Today I ran it again and tried to get a token for admin@example.com 1245678, and I got it.
@kitty7c6 Are other endpoints also working?
@manuio I created a few users, two things + a channel, and sent a message thing2 -> thing1 via mosquitto_sub/mosquitto_pub. I also connected from Windows 10 to Jaeger (http://salmon.local:16686/) and saw something (like things:identify).
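For anyone repeating this smoke test, here is a sketch of the thing2 -> thing1 exchange with the Mosquitto clients. All IDs and keys are placeholders obtained during provisioning; the channels/&lt;id&gt;/messages topic and thing-ID/thing-key credentials follow the Mainflux MQTT adapter convention, so double-check them against the docs for your version:

```shell
# Placeholders - substitute real values from provisioning:
CHAN="<channel-id>"
SUB_ID="<thing1-id>";  SUB_KEY="<thing1-key>"
PUB_ID="<thing2-id>";  PUB_KEY="<thing2-key>"

# thing1 subscribes in the background...
mosquitto_sub -h localhost -p 1883 -u "$SUB_ID" -P "$SUB_KEY" \
    -t "channels/$CHAN/messages" &

# ...and thing2 publishes one SenML record to the same channel.
mosquitto_pub -h localhost -p 1883 -u "$PUB_ID" -P "$PUB_KEY" \
    -t "channels/$CHAN/messages" -m '[{"bn":"demo","n":"temp","v":21.5}]'
```

Both things must be connected to the channel beforehand, otherwise the broker rejects the credentials.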
@kitty7c6 great! You can close this issue if all is working. Feel free to describe how you solved it.
I didn't solve the problem on Ubuntu, but I got it running as a working sample.
I used Windows 10 + VirtualBox 6.1.38.
Installed Debian11.5 (on virtualbox) - https://www.debian.org/CD/live/
And solved two problems with Debian:
2.1. "user is not in the sudoers file": https://losst.ru/oshibka-user-is-not-in-the-sudoers-file-v-ubuntu
2.2. "package has no installation candidate": https://www.cyberithub.com/solved-package-has-no-installation-candidate-in-debian/
(All of this we do in terminal1)
Installed Docker + docker-compose 1.29.2
https://docs.docker.com/engine/install/debian/ (main Docker link); https://www.bundleapps.io/blog/docker-series/pt-1-installing-docker-and-docker-compose (installation guide, in Russian)
3.1. Uninstall any previous versions if you have installed them.
$ sudo apt-get remove docker docker-engine docker.io containerd runc
3.2. Update your system and install the required dependencies. (I didn't do the second line)
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
3.3. Set up a repository
$ sudo apt-get install ca-certificates curl gnupg lsb-release
3.4. For security purposes, add the official GPG Docker key.
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
3.5. Set up a stable repository.
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
3.6. Install the latest version of Docker along with all its dependencies.
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
3.7. Instead of the last line, you can use the command below and install the latest docker-compose plugin right away. BUT you will need Compose 1.29.2 anyway!
$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
3.8. But you will probably need a specific version of docker-compose; for that, use:
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
3.9. Give execute permissions to the binary file:
$ sudo chmod +x /usr/local/bin/docker-compose
Note: If the docker-compose command does not work after installation, check the installation path. You can also create a symbolic link to /usr/bin or any other directory, for example:
$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
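A quick sanity-check sketch for this step (the path below is the one used in 3.8; adjust it if you installed somewhere else):

```shell
# Sketch: verify the downloaded docker-compose binary exists and is executable.
check_compose() {
  bin="$1"
  if [ -x "$bin" ]; then
    echo "ok: $bin"
  else
    echo "not executable: $bin"
    return 1
  fi
}

# The `|| true` keeps this from aborting a script if you installed elsewhere.
check_compose /usr/local/bin/docker-compose || true
```

If this reports "not executable", re-run the chmod from step 3.9 or fix the path.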
3.10. Check the docker-compose installation:
$ docker-compose --version
answer: docker-compose version , build 1110ad01
Installed Mainflux
4.1. Clone the latest version from GitHub:
$ sudo apt update
$ sudo git clone https://github.com/mainflux/mainflux.git
4.2. Check that the clone is there:
$ ls
answer: folder1 folder2 ... mainflux ...
4.3. Go to the mainflux folder and run
$ cd mainflux
$ sudo make run
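After `make run`, you can check from another terminal that the containers actually came up. A minimal sketch (the `docker ps` filter assumes the container names from the compose output above):

```shell
# Sketch: read `name status` lines (as produced by
# `docker ps --format '{{.Names}} {{.Status}}'`) from stdin and
# fail on the first container that is not "Up".
all_up() {
  while read -r name status rest; do
    case "$status" in
      Up) echo "ok: $name" ;;
      *)  echo "DOWN: $name ($status $rest)"; return 1 ;;
    esac
  done
}

# Usage (requires a running Docker daemon):
# docker ps --filter name=mainflux --format '{{.Names}} {{.Status}}' | all_up
```

Any container reported DOWN is the first place to look with `docker logs <name>`.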
Mainflux CLI - open a second terminal (terminal 2; don't close terminal 1 running Mainflux)
5.1. Download the release archive:
$ wget https://github.com/mainflux/mainflux/releases/download/0.13.0/mainflux-cli_0.13.0_linux-amd64.tar.gz
5.2. Checking what has been downloaded
$ ls
Example answer:
Desktop mainflux Pictures Videos
Documents mainflux-cli_0.13.0_linux-amd64.tar.gz Public
Downloads Music Templates
5.3. Extract the archive:
$ tar xvf mainflux-cli_0.13.0_linux-amd64.tar.gz
Checking:
Desktop mainflux Music Templates
Documents mainflux-cli_0.13.0_linux-amd64 Pictures Videos
Downloads mainflux-cli_0.13.0_linux-amd64.tar.gz Public
5.4. Addressing it by the name mainflux-cli_0.13.0_linux-amd64 is long and inconvenient, so rename it to cli:
$ mv mainflux-cli_0.13.0_linux-amd64 cli
$ ls
answer:
cli Desktop Documents Downloads mainflux mainflux-cli_0.13.0_linux-amd64.tar.gz Music Pictures Public Templates Videos
5.5. Run it:
$ ./cli
answer:
Usage:
mainflux-cli [command]
Available Commands:
bootstrap Bootstrap management
certs Certificates management
channels Channels management
completion Generate the autocompletion script for the specified shell
groups Groups management
health Health Check
help Help about any command
keys Keys management
messages Send or read messages
provision Provision things and channels from a config file
things Things management
users Users management
Flags:
-a, --auth-url string Auth service URL (default "http://localhost")
-b, --bootstrap-url string Bootstrap service URL (default "http://localhost")
-e, --certs-url string Certs service URL (default "http://localhost")
-c, --config string Config path
-y, --content-type string Message content type (default "application/senml+json")
-h, --help help for mainflux-cli
-p, --http-url string HTTP adapter URL (default "http://localhost/http")
-i, --insecure Do not check for TLS cert
-l, --limit uint Limit query parameter (default 100)
-n, --name string Name query parameter
-o, --offset uint Offset query parameter
-r, --raw Enables raw output mode for easier parsing of output
-t, --things-url string Things service URL (default "http://localhost")
-u, --users-url string Users service URL (default "http://localhost")
Use "mainflux-cli [command] --help" for more information about a command.
Then you can do whatever you need via ./cli.
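Since the extracted binary sits in your home directory, a tiny wrapper makes it callable from anywhere. This is just a sketch: CLI_BIN is an assumed path, and the example commands are taken from the help output above.

```shell
# Sketch: wrap the renamed binary so it can be invoked from any directory.
# CLI_BIN is an assumption -- point it at wherever your ./cli ended up.
CLI_BIN="${CLI_BIN:-$HOME/cli}"

mf() {
  "$CLI_BIN" "$@"
}

# Examples, grounded in the command list printed by ./cli
# (run these against a live Mainflux instance):
# mf health          # "Health Check" command from the list above
# mf things --help   # every command supports --help
```

You could also just symlink the binary into a directory on your PATH, the same way docker-compose was linked in step 3.9.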
Hello, I use Windows -> VirtualBox + Ubuntu Server (22.04.1) + docker (20.10.17) + docker-compose (1.29.2) -> trying to run Mainflux.
Do steps:
Mainflux then starts, and the first error is mainflux-auth trying to connect to postgres and failing; then the same error for mainflux-coap trying to connect to the broker, and so on. I don't understand how to fix this. Why can't they connect to each other? I checked whether all of the containers are on the same network - yes, all of them are in docker_mainflux-base-net.
The full log is long (~500 lines).
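As noted above, a few "Failed to connect" lines right after startup are normal: the adapters retry their broker connection a bounded number of times while the other containers finish initializing. The pattern can be sketched as a generic retry loop (the attempt count, delay, and the `nc` probe of NATS port 4222 in the usage comment are illustrative assumptions, not Mainflux internals):

```shell
# Sketch: retry a command until it succeeds or the attempts run out,
# mimicking how the adapters keep retrying the broker connection.
wait_for() {
  cmd="$1"; attempts="${2:-10}"; delay="${3:-2}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    # $cmd is deliberately unquoted so a probe like "nc -z host port"
    # word-splits into a command and its arguments.
    if $cmd; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    echo "attempt $i/$attempts failed, retrying in ${delay}s"
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Usage: wait until the broker port answers, then inspect an adapter's log.
# wait_for "nc -z localhost 4222" 10 2 && docker logs mainflux-coap
```

If a service is still logging connection failures after the broker reports "Server is ready", its individual `docker logs mainflux-<svc-name>` output is the place to look.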