containers / podman-compose

a script to run docker-compose.yml using podman
GNU General Public License v2.0

Podman compose is starting containers attached by default #563

Open · UnKulMunki opened this issue 1 year ago

UnKulMunki commented 1 year ago

Describe the bug: the command podman-compose -f=compose.yml up is actually appending the '-a' switch to the start command: podman start -a <CONTAINER_NAME>

This is a problem because I am stuck staring at the last container's log entries as they pop onto the screen. Only a CTRL-C gets me out of that, which also kills the last container; the alternative is a CTRL+Z followed by bg to send it to background processing. This is a problem because I am trying to script automated service building.

To Reproduce Steps to reproduce the behavior:

  1. The working directory has a docker compose file named 'compose.yml' and a mysql.env file that specifies the mysql container details, such as the default DB and the admin username and password.
  2. Run the following, where DIR_NAME is the directory containing the compose.yml and mysql.env files: podman-compose -f=<DIR_NAME>/compose.yml up

Expected behavior: containers start in detached mode by default unless a podman-start-args switch is given with -a or --attach

Actual behavior: the '-a' switch is being appended to the start command for each container: podman start -a <CONTAINER_NAME>. So I am stuck attached to the last container started.

Output

devops@podbox:~$ podman-compose version
['podman', '--version', '']
using podman version: 3.4.2
podman-composer version  1.0.3
podman --version
podman version 3.4.2
exit code: 0

devops@podbox:~$ podman-compose -f=/vagrant/podman/compose.yml up
['podman', '--version', '']
using podman version: 3.4.2
** excluding:  set()
['podman', 'network', 'exists', 'podman_appnet']
['podman', 'network', 'create', '--label', 'io.podman.compose.project=podman', '--label', 'com.docker.compose.project=podman', 'podman_appnet']
['podman', 'network', 'exists', 'podman_appnet']
podman create --name=MySQL --label io.podman.compose.config-hash=123 --label io.podman.compose.project=podman --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=podman --label com.docker.compose.project.working_dir=/vagrant/podman --label com.docker.compose.project.config_files=/vagrant/podman/dev-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=mysqld --env-file /vagrant/podman/mysql.env -v /home/vagrant/mysql/server:/var/lib/mysql --net podman_appnet --network-alias mysqld -p 63306:3306 docker.io/mysql/mysql-server:8.0 mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --init-connect=SET NAMES UTF8;
88073d203397476b6b0925284822667b74e0ecd06a94d1fdc369ede6e58af916
exit code: 0
['podman', 'network', 'exists', 'podman_appnet']
podman create --name=Krakend --label io.podman.compose.config-hash=123 --label io.podman.compose.project=podman --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=podman --label com.docker.compose.project.working_dir=/vagrant/podman --label com.docker.compose.project.config_files=/vagrant/podman/dev-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=krakend -e KRAKEND_CONFIG=/etc/krakend/krakend.json -v /vagrant/podman/krakend:/etc/krakend/ --net podman_appnet --network-alias krakend -p 8000:8080 docker.io/devopsfaith/krakend:latest run -c=/etc/krakend/krakend.json
c7c1a532e4444e3326522c362fb021300240208fbc0bab17a40d01a1dc7a9e5d
exit code: 0
['podman', 'network', 'exists', 'podman_appnet']
podman create --name=Keycloak --label io.podman.compose.config-hash=123 --label io.podman.compose.project=podman --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=podman --label com.docker.compose.project.working_dir=/vagrant/podman --label com.docker.compose.project.config_files=/vagrant/podman/dev-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=ch@ng3M# -v /vagrant/podman/keycloak:/usr/src/shared --net podman_appnet --network-alias keycloak -p 8010:8080 quay.io/keycloak/keycloak:latest start-dev
8ab6e8be696d6d4513a1d8c7fbdb6959631f5d1e155d05013b1f7e1bc0c3dfac
exit code: 0
podman start -a MySQL
[Entrypoint] MySQL Docker Image 8.0.30-1.2.9-server
[Entrypoint] Starting MySQL 8.0.30-1.2.9-server
2022-09-28T17:13:28.323116Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
2022-09-28T17:13:28.325412Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30) starting as process 1
2022-09-28T17:13:28.339914Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-09-28T17:13:28.442012Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-09-28T17:13:28.532871Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2022-09-28T17:13:28.532891Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-09-28T17:13:28.548126Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.30'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server - GPL.
2022-09-28T17:13:28.548187Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
podman start -a Krakend
2022/09/28 17:13:28 KRAKEND ERROR: [SERVICE: Logging] Unable to create the logger: getting the extra config for the krakend-gologging module
Parsing configuration file: /etc/krakend/krakend.json
2022/09/28 17:13:28 KRAKEND INFO: Starting the KrakenD instance
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login] Building the proxy pipe
2022/09/28 17:13:28 KRAKEND DEBUG: [BACKEND: /system-ctlr/login] Building the backend pipe
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login][Static] Adding a static response using 'incomplete' strategy. Data: {"new_field_b":["arr1","arr2"],"new_field_c":{"obj":"obj1"},"static_field_a":"generic reponse-123"}
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login] Building the http handler
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login][JWTSigner] Signer disabled
2022/09/28 17:13:28 KRAKEND INFO: [ENDPOINT: /system/auth/v1/login][JWTValidator] Validator disabled for this endpoint
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login] Building the proxy pipe
2022/09/28 17:13:28 KRAKEND DEBUG: [BACKEND: /system-ctlr/login] Building the backend pipe
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login][Static] Adding a static response using 'incomplete' strategy. Data: {"new_field_b":["arr1","arr2"],"new_field_c":{"obj":"obj1"},"static_field_a":"generic reponse-123"}
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login] Building the http handler
2022/09/28 17:13:28 KRAKEND DEBUG: [ENDPOINT: /system/auth/v1/login][JWTSigner] Signer disabled
2022/09/28 17:13:28 KRAKEND INFO: [ENDPOINT: /system/auth/v1/login][JWTValidator] Validator disabled for this endpoint
2022/09/28 17:13:28 KRAKEND INFO: [SERVICE: Gin] Listening on port: 8080
podman start -a Keycloak
Updating the configuration and installing your custom providers, if any. Please wait.
2022/09/28 17:13:33 KRAKEND DEBUG: [SERVICE: Telemetry] Registering usage stats for Cluster ID F3tHOUdULVCRtTvANjm9L3XBR6efeE076+WLDQCAI2o=
2022-09-28 17:13:34,230 INFO  [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 3414ms
2022-09-28 17:13:35,146 INFO  [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: false
2022-09-28 17:13:35,616 INFO  [org.keycloak.common.crypto.CryptoIntegration] (main) Detected crypto provider: org.keycloak.crypto.def.DefaultCryptoProvider
2022-09-28 17:13:36,399 WARN  [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-09-28 17:13:36,487 WARN  [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-09-28 17:13:36,558 INFO  [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-09-28 17:13:36,710 INFO  [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-09-28 17:13:37,288 INFO  [org.keycloak.quarkus.runtime.storage.legacy.liquibase.QuarkusJpaUpdaterProvider] (main) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2022-09-28 17:13:38,292 INFO  [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_616621, Site name: null
2022-09-28 17:13:38,347 INFO  [org.keycloak.services] (main) KC-SERVICES0050: Initializing master realm
2022-09-28 17:13:39,496 INFO  [io.quarkus] (main) Keycloak 19.0.2 on JVM (powered by Quarkus 2.7.6.Final) started in 5.191s. Listening on: http://0.0.0.0:8080
2022-09-28 17:13:39,497 INFO  [io.quarkus] (main) Profile dev activated.
2022-09-28 17:13:39,497 INFO  [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2022-09-28 17:13:39,714 INFO  [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'
2022-09-28 17:13:39,717 WARN  [org.keycloak.quarkus.runtime.KeycloakMain] (main) Running the server in development mode. DO NOT use this configuration in production.
... ... ...

Environment:

Additional context: I am trying to use podman-compose for automated container builds in a CI/CD pipeline, so getting stuck with attached containers is sub-optimal.

Thank you for your attention. G...C

UnKulMunki commented 1 year ago

I think it's on line 2059-2065 of podman_compose.py:

thread = Thread(
    target=compose.podman.run,
    args=[[], "start", ["-a", cnt["name"]]],
    kwargs={"obj": obj, "log_formatter": log_formatter},
    daemon=True,
    name=cnt["name"],
)

But I can't understand WHY you would force an -a if that wasn't added to the podman start args?

G...C

UnKulMunki commented 1 year ago

Removing the "-a" does in fact fix the issue for me.

UnKulMunki commented 1 year ago

Suggested fix in Pull Request: https://github.com/containers/podman-compose/pull/564

defanator commented 1 year ago

@UnKulMunki I came across this issue while investigating another one I'm having with podman-compose, but not with docker-compose. I have a couple of YAMLs describing a set of services, and my intention is to run a single service (a container that runs some tests with the pytest framework) that requires a few dependencies such as memcached and mysqld.

With docker-compose, my command looks like this:

docker-compose -f docker-compose.yml -f test.yml up --exit-code-from backend-test backend-test

and it effectively starts my test container (backend-test) attached, and everything else detached. Once backend-test finishes, docker-compose exits with its status code.

With the current devel version of podman-compose, I'm getting all containers (backend-test plus its dependencies) started in attached mode, and when backend-test finishes, podman-compose waits forever for the remaining dependency containers (memcached, mysqld).

I tried to apply your patch from https://github.com/containers/podman-compose/pull/564 and it changes the behavior as expected: all my containers run detached, and podman-compose exits immediately. This is better but still does not match docker-compose behavior.

(I'm incorporating some tooling into CI pipelines, and having test output in stdout/stderr is absolutely essential.)

muayyad-alsadi commented 1 year ago

Why not just pass -d, like this: podman-compose up -d, and then use logs?

defanator commented 1 year ago

@muayyad-alsadi -d 1) won't allow getting the exit code from the specified container, and 2) will exit immediately, so that behavior would require additional logic to implement an extra "wait until it's done" cycle.

The same scenario is implemented surprisingly well in docker-compose, and it would be really great to mirror the same approach.
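
For reference, a rough sketch of the kind of wrapper such a CI job would need when relying on podman-compose up -d is shown below. The compose file names and the backend-test container name are assumptions carried over from the example above (podman-compose may actually prefix container names with the project name), so this illustrates the extra "wait until it's done" cycle rather than being a drop-in script.

#!/usr/bin/env python3
# Sketch of the extra "wait" cycle needed with detached mode: start everything
# detached, block on the test container, surface its logs, tear down, and
# propagate the test container's exit code.
# Assumptions: compose file names and the container name "backend-test" come
# from the example above; adjust if podman-compose prefixes the name with the
# project name.
import subprocess
import sys

compose = ["podman-compose", "-f", "docker-compose.yml", "-f", "test.yml"]
subprocess.run(compose + ["up", "-d"], check=True)

# `podman wait` blocks until the container exits and prints its exit code.
result = subprocess.run(["podman", "wait", "backend-test"],
                        capture_output=True, text=True, check=True)
rc = int(result.stdout.strip())

# Surface the test output that stays hidden when running detached.
subprocess.run(["podman", "logs", "backend-test"])

subprocess.run(compose + ["down"])
sys.exit(rc)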