leonidas-o opened 2 years ago

This is an old issue, so sorry for reviving it if it has already been fixed, but I am seeing similar behaviour.
Red Hat Enterprise Linux release 8.8 (Ootpa)
podman-compose version: 1.0.7
using podman version: 4.4.1

My containers start, then stop almost immediately when run via systemd; however, they work fine when just doing `podman-compose up -d`. The Elasticsearch container logs:

```
{"@timestamp":"2023-08-29T19:54:49.986Z", "log.level": "WARN", "message":"Unable to lock JVM Memory: error=12, reason=Cannot allocate memory", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.JNANatives","elasticsearch.node.name":"elasticsearch01","elasticsearch.cluster.name":"es-cluster"}
{"@timestamp":"2023-08-29T19:54:50.033Z", "log.level": "WARN", "message":"This can result in part of the JVM being swapped out.", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.JNANatives","elasticsearch.node.name":"elasticsearch01","elasticsearch.cluster.name":"es-cluster"}
{"@timestamp":"2023-08-29T19:54:50.033Z", "log.level": "WARN", "message":"Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.JNANatives","elasticsearch.node.name":"elasticsearch01","elasticsearch.cluster.name":"es-cluster"}
{"@timestamp":"2023-08-29T19:54:50.033Z", "log.level": "WARN", "message":"These can be adjusted by modifying /etc/security/limits.conf, for example:\n\t# allow user 'elasticsearch' mlockall\n\telasticsearch soft memlock unlimited\n\telasticsearch hard memlock unlimited", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.JNANatives","elasticsearch.node.name":"elasticsearch01","elasticsearch.cluster.name":"es-cluster"}
```
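These warnings appear when the memlock limit of the process starting Elasticsearch is too low. A quick way to see what limits a given shell (or a service's `ExecStart` environment) actually has — a minimal sketch, POSIX shell assumed:

```shell
# Print the soft and hard RLIMIT_MEMLOCK values the current process sees
# (in kbytes, or "unlimited"). Elasticsearch's bootstrap.memory_lock needs
# these to be high enough for its mlockall call to succeed.
echo "soft memlock: $(ulimit -S -l)"
echo "hard memlock: $(ulimit -H -l)"
```

Running this both interactively and from inside the systemd unit shows whether the two environments really hand the container the same limits.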
The generated `podman create` command for the Elasticsearch container:

```
podman create --name=appname-app_elasticsearch01_1 --pod=pod_appname-app --requires=appname-app_elasticsearch-setup_1 --label io.podman.compose.config-hash=9755aa9656e06c68e4e6a2867d226c86e4db88968f91d66a7ec33390f58b509a --label io.podman.compose.project=appname-app --label io.podman.compose.version=1.0.7 --label PODMAN_SYSTEMD_UNIT=podman-compose@appname-app.service --label com.docker.compose.project=appname-app --label com.docker.compose.project.working_dir=/data/appname/appname-app --label com.docker.compose.project.config_files=podman-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=elasticsearch01 --env-file /data/appname/appname-app/envs/elasticsearch01.env -v /data/appname/appname-app/config/elastic.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro -v /data/appname/appname-app/logs:/usr/share/elasticsearch/logs -v appname-app_elasticsearch01-data:/usr/share/elasticsearch/data -v appname-app_certs:/usr/share/elasticsearch/config/certs --net appname-app_default --network-alias elasticsearch01 -p 9200:9200 --ulimit host docker.elastic.co/elasticsearch/elasticsearch:8.9.1
```
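Note the `--ulimit host` flag in that command: the container inherits whatever resource limits the process invoking podman has, so the identical `podman create` line can behave differently under systemd than in an interactive shell. The inheritance can be demonstrated with a plain subshell — a sketch, no podman needed:

```shell
# Children inherit the parent's rlimits. Lowering the soft memlock limit
# in a subshell makes every child process (here: another sh) see the low
# value -- the same mechanism by which a systemd-started podman passes its
# small limits through to the container when --ulimit host is used.
(ulimit -S -l 8; sh -c 'ulimit -S -l')   # prints 8 (the lowered soft limit)
```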
Good news: it works on the following:

```
Red Hat Enterprise Linux release 9.2 (Plow)
podman-compose version: 1.0.7
['podman', '--version', '']
using podman version: 4.4.1
podman-compose version 1.0.7
podman --version
podman version 4.4.1
```

Now to get my systems upgraded...
**Describe the bug**

While `podman-compose up -d` works when executed manually (rootless) and all containers are started, one container (Elasticsearch, the only one which has `ulimit` and `cap_add` in docker-compose.yml) is not started when using systemd.

**To Reproduce**
Steps to reproduce the behavior:

Used the following files for metasfresh (docker-compose modified): https://docs.metasfresh.org/installation_collection/EN/How_do_I_setup_the_metasfresh_stack_using_Docker.html

`docker-compose.yml`:
```yaml
services:
  db:
    build: db
    restart: always
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data:z
      - ./volumes/db/log:/var/log/postgresql:z
      - /etc/localtime:/etc/localtime:ro
      #- /etc/timezone:/etc/timezone:ro
    environment:
      - METASFRESH_USERNAME=metasfresh
      - METASFRESH_PASSWORD=metasfresh
      - METASFRESH_DBNAME=metasfresh
      - DB_SYSPASS=System
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    networks:
      - metasfresh
  app:
    build: app
    hostname: app
    links:
      - db:db
      - rabbitmq:rabbitmq
      - search:search
    expose:
      - "8282"
      - "8788"
    restart: always
    volumes:
      - ./volumes/app/log:/opt/metasfresh/log:rw,z
      - ./volumes/app/heapdump:/opt/metasfresh/heapdump:rw,z
      - ./volumes/app/external-lib:/opt/metasfresh/external-lib:rw,z
      - /etc/localtime:/etc/localtime:ro
      #- /etc/timezone:/etc/timezone:ro
    environment:
      - METASFRESH_HOME=/opt/metasfresh
    networks:
      - metasfresh
  webapi:
    build: webapi
    links:
      - app:app
      - db:db
      - rabbitmq:rabbitmq
      - search:search
    expose:
      - "8789"
    # to access the webui-api directly
    # (eg. for debugging or connecting your app to the metasfresh api)
    # uncomment following port:
    #ports:
    #  - "8080:8080"
    restart: always
    volumes:
      - ./volumes/webapi/log:/opt/metasfresh-webui-api/log:rw,z
      - ./volumes/webapi/heapdump:/opt/metasfresh-webui-api/heapdump:rw,z
      - /etc/localtime:/etc/localtime:ro
      #- /etc/timezone:/etc/timezone:ro
    networks:
      - metasfresh
  webui:
    build: webui
    links:
      - webapi:webapi
    ports:
      - "8080:80"
      - "4430:443"
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      #- /etc/timezone:/etc/timezone:ro
    #uncomment and set to URL where metasfresh will be available from browsers
    environment:
      - WEBAPI_URL=https://metasfresh.my-domain.com
    networks:
      - metasfresh
  rabbitmq:
    build: rabbitmq
    expose:
      - "5672"
    restart: always
    volumes:
      - ./volumes/rabbitmq/log:/var/log/rabbitmq/log:z
      - /etc/localtime:/etc/localtime:ro
      #- /etc/timezone:/etc/timezone:ro
    environment:
      RABBITMQ_DEFAULT_USER: "metasfresh"
      RABBITMQ_DEFAULT_PASS: "metasfresh"
      RABBITMQ_DEFAULT_VHOST: "/"
    networks:
      - metasfresh
  search:
    build: search
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    # to access the search api directly
    # (e.g. if you did docker-compose up search to have the dashboard with
    # your locally running metasfresh services) uncomment following ports:
    # ports:
    #   - "9200:9200"
    #   - "9300:9300"
    volumes:
      - ./volumes/search/data:/usr/share/elasticsearch/data:z
      - /etc/localtime:/etc/localtime:ro
      #- /etc/timezone:/etc/timezone:ro
    environment:
      - "ES_JAVA_OPTS=-Xms128M -Xmx256m"
    restart: always
    networks:
      - metasfresh
networks:
  metasfresh: {}
```

In the beginning the service file was without `LimitMEMLOCK`, `LimitNOFILE` and `LimitNPROC`; I then added these entries to the systemd service file (same behaviour, it does not help).

`metasfresh.service`:
```ini
[Unit]
Description=Podman-compose metasfresh.service
Wants=network.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/srv/metasfresh/metasfresh-docker
EnvironmentFile=/srv/metasfresh/access.txt
LimitMEMLOCK=infinity
LimitNOFILE=65536
LimitNPROC=65536
ExecStart=/home/myuser/.local/bin/podman-compose up -d
ExecStop=/home/myuser/.local/bin/podman-compose down

[Install]
WantedBy=default.target
```

Hard and soft limits were set for myUser in `/etc/security/limits.conf`.

Ran `podman-compose build` (the first time), then `systemctl --user start metasfresh`.
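The report doesn't show the exact `limits.conf` entries; based on the Elasticsearch hint earlier in the thread, a typical fragment would look like this (user name is a placeholder):

```
# /etc/security/limits.conf -- example entries; "myuser" is a placeholder
myuser soft memlock unlimited
myuser hard memlock unlimited
myuser soft nofile 65536
myuser hard nofile 65536
```

Note that these entries are applied by `pam_limits` during an interactive login, so they generally do not affect systemd user services; the `Limit*` directives in the unit file above are the systemd-native way to raise limits for the service.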
I compared the generated commands seen in `journalctl`, where I once executed `podman-compose up -d` and the other time `systemctl --user start metasfresh`; both generate and execute exactly the same command for the `search` container.

**Expected behavior**
The same behaviour between `podman-compose up -d` and `systemctl --user start metasfresh`, because the systemd `ExecStart` is executing `/home/myuser/.local/bin/podman-compose up -d` in that same working directory.

**Actual behavior**
- `podman-compose up -d` -> works
- `systemctl --user start metasfresh` -> starts all other containers except `search` (Elasticsearch)

**Output**

**Environment:**

```
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
```