noelmcloughlin closed this issue 5 years ago.
@wisererik Please take a look at this issue
I think this is a known Docker issue. One suggested workaround is to use the `--force-recreate` flag with docker-compose.
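A minimal sketch of that workaround (the deployment directory is illustrative, not taken from this issue):

```shell
# Recreate all containers even if their configuration and images are
# unchanged, working around stale-container state left by a previous run.
cd /opt/opensds/multi-cloud    # illustrative path
docker-compose up -d --force-recreate
```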
In my earlier testing I never saw this issue on CentOS or SuSE. However, it started happening after I added a cleanup-disk-space task as the final stage, which runs `docker system prune -a -f`. At that point systemd had not finished running `docker-compose up`, which may be the root cause. I will test and fix.
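One way to express that ordering in systemd, so a cleanup stage cannot prune images while compose is still bringing containers up, is a unit with explicit dependencies. This is a hypothetical sketch, not the unit shipped by the formula:

```ini
# opensds-multi-cloud.service (illustrative sketch)
[Unit]
Description=OpenSDS multi-cloud (gelato) via docker-compose
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/opensds/multi-cloud
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

With `Type=oneshot` and `RemainAfterExit=yes`, the unit is only considered started once `docker-compose up -d` has returned, so anything ordered `After=` this unit (such as a cleanup task) waits for compose to finish.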
PR raised upstream: https://github.com/saltstack-formulas/opensds-formula/pull/86/files
Still investigating whether the `--force-recreate` flag is required or not. With the above PR things look better, but the Kafka container died. So I rebooted the OS; on restart systemd ran `docker-compose up --force-recreate` for gelato and the status is good.
vagrant-openSUSE-Leap:/home/vagrant # journalctl -u opensds-multi-cloud --follow
-- Logs begin at Wed 2019-03-13 18:20:52 MDT. --
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_s3_1 ...
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_datastore_1 ...
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_s3_1
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_datastore_1
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_backend_1 ...
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_zookeeper_1 ...
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_zookeeper_1
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_backend_1
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_api_1 ...
Mar 13 18:21:26 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_api_1
Mar 13 18:21:32 vagrant-openSUSE-Leap docker-compose[1239]: [89B blob data]
Mar 13 18:21:32 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_kafka_1
Mar 13 18:21:35 vagrant-openSUSE-Leap docker-compose[1239]: [298B blob data]
Mar 13 18:21:35 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_datamover_1
Mar 13 18:21:35 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_dataflow_1 ...
Mar 13 18:21:35 vagrant-openSUSE-Leap docker-compose[1239]: Recreating multicloud_dataflow_1
Mar 13 18:21:40 vagrant-openSUSE-Leap docker-compose[1239]: [294B blob data]
Mar 13 18:21:40 vagrant-openSUSE-Leap docker-compose[1239]: zookeeper_1 | ZooKeeper JMX enabled by default
Mar 13 18:21:40 vagrant-openSUSE-Leap docker-compose[1239]: zookeeper_1 | Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mar 13 18:21:40 vagrant-openSUSE-Leap docker-compose[1239]: zookeeper_1 | 2019-03-14 00:21:31,850 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mar 13 18:21:40 vagrant-openSUSE-Leap docker-compose[1239]: zookeeper_1 | 2019-03-14 00:21:31,854 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
vagrant-openSUSE-Leap:/home/vagrant # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bf94443f5a27 opensdsio/multi-cloud-datamover "/datamover" 3 minutes ago Up 3 minutes multicloud_datamover_1
cb3a7e417f3f opensdsio/multi-cloud-dataflow "/dataflow" 3 minutes ago Up 3 minutes multicloud_dataflow_1
1dba1887a853 wurstmeister/kafka:2.11-2.0.1 "start-kafka.sh" 4 minutes ago Up 4 minutes 0.0.0.0:9092->9092/tcp multicloud_kafka_1
122d63570e46 opensdsio/multi-cloud-backend "/backend" 4 minutes ago Up 4 minutes multicloud_backend_1
ae9de147e427 opensdsio/multi-cloud-api "/api" 4 minutes ago Up 4 minutes 0.0.0.0:8089->8089/tcp multicloud_api_1
4155f64dcff7 opensdsio/multi-cloud-s3 "/s3" 4 minutes ago Up 4 minutes multicloud_s3_1
db2982dae939 mongo "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:27017->27017/tcp multicloud_datastore_1
1e6ded0243cd wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 4 minutes ago Up 4 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp multicloud_zookeeper_1
a53cd4c86468 opensdsio/dashboard:latest "/bin/sh -c /opt/das…" 30 minutes ago Up 4 minutes dashboard
115c94ea891c lvm-debian-cinder "bash -c /scripts/lv…" 32 minutes ago Up 4 minutes blockbox_cinder-volume_1
c43597740ad0 debian-cinder "cinder-scheduler" 32 minutes ago Up 4 minutes blockbox_cinder-scheduler_1
036eed406f87 debian-cinder "sh /scripts/cinder-…" 32 minutes ago Up 4 minutes blockbox_cinder-api_1
2d9e3d070a71 rabbitmq "docker-entrypoint.s…" 32 minutes ago Up 4 minutes 4369/tcp, 5671/tcp, 25672/tcp, 0.0.0.0:5672->5672/tcp blockbox_rabbitmq_1
e4b44b92ef83 mariadb "docker-entrypoint.s…" 32 minutes ago Up 4 minutes 0.0.0.0:3307->3306/tcp blockbox_mariadb_1
3e456c7e4727 quay.io/coreos/etcd:latest "etcd -name osdsdb -…" About an hour ago Up 4 minutes osdsdb
vagrant-openSUSE-Leap:/home/vagrant # osdsctl pool list
WARNING: Not found Env OPENSDS_AUTH_STRATEGY, use default(noauth)
+--------------------------------------+-----------------+-------------+--------+------------------+---------------+--------------+
| Id | Name | Description | Status | AvailabilityZone | TotalCapacity | FreeCapacity |
+--------------------------------------+-----------------+-------------+--------+------------------+---------------+--------------+
| 3d28df2a-78a6-500a-9c52-347301396de7 | opensds-volumes | | | default | 1 | 1 |
+--------------------------------------+-----------------+-------------+--------+------------------+---------------+--------------+
PR merged upstream. The `--force-recreate` flag is not needed for this issue.
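To confirm that plain `docker-compose up` (without the flag) comes up cleanly after a reboot, the same checks shown above can be repeated:

```shell
# Unit should be active and its journal should show containers recreating.
systemctl status opensds-multi-cloud
journalctl -u opensds-multi-cloud --follow

# All multicloud_* containers should report an "Up" status.
docker ps --filter name=multicloud_
```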
Describe the bug
The `docker-compose` of gelato works on Ubuntu but fails on SuSE/CentOS. Has anyone seen this problem, or does anyone have suggestions to resolve it?

To Reproduce
This behaviour was seen while testing #13, but it can probably be reproduced with the ansible installer.

Expected behavior
`docker-compose` of gelato should work on other operating systems, not just Ubuntu.

Additional context