docker / compose

Define and run multi-container applications with Docker
https://docs.docker.com/compose/
Apache License 2.0

[BUG] Regression: array items[0,1] must be unique starting from 2.24.1 #11371

Closed paolomainardi closed 8 months ago

paolomainardi commented 8 months ago

Description

As per the subject, starting from 2.24.1, I am encountering this issue when there are overrides.

Steps To Reproduce

Create 2 files:

  1. docker-compose.yaml
version: "3.8"

services:
  test:
    image: ubuntu:latest
    command: sleep infinity
    volumes:
      - ./src:/src
  2. docker-compose.override.yaml
services:
  test:
    volumes:
      - ./src:/src

With 2.23.3:

❯ dc version
Docker Compose version 2.23.3
❯ dc down -v

With 2.24.1:

❯ ./dc-2.24.1 version
Docker Compose version v2.24.1
❯ ./dc-2.24.1 down -v
validating /home/paolo/temp/dc-compose/docker-compose.override.yml: services.test.volumes array items[0,1] must be unique
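
Until a fixed release is available, a minimal workaround sketch (the extra mount below is hypothetical) is to keep each entry in only one file, since the base file's volumes survive the merge:

```yaml
# docker-compose.override.yaml — sketch: list only entries that are NOT
# already declared in the base docker-compose.yaml; ./src:/src is
# inherited from the base file during the merge
services:
  test:
    volumes:
      - ./extra:/extra   # hypothetical additional mount; omit if not needed
```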

Compose Version

Docker Compose version v2.24.1

Docker Environment

❯ docker info
Client:
 Version:    24.0.7
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  0.12.1
    Path:     /usr/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  2.23.3
    Path:     /usr/lib/docker/cli-plugins/docker-compose

Server:
 Containers: 15
  Running: 9
  Paused: 0
  Stopped: 6
 Images: 141
 Server Version: 24.0.7
 Storage Driver: btrfs
  Btrfs:
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 71909c1814c544ac47ab91d2e8b84718e517bb99.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.6.11-2-lts
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 30.49GiB
 Name: paolo-cto-arch-wood
 ID: ZRJM:NTZC:JCYV:OSU3:VB2H:N2CW:ZCLD:PCGW:JGT5:B2BR:445A:GEHV
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Anything else?

No response

ndeloof commented 8 months ago

Thanks for reporting. This only applies when both the base and override compose files declare the exact same volume. Can you please explain why you do so?

paolomainardi commented 8 months ago

Thank you for getting back to me so promptly, @ndeloof. In my case, there seems to be no valid reason for the issue I'm facing. It appears to be a chain of docker-compose files where the last one is adding the same volumes again. Unfortunately, I cannot modify it as the base docker-compose files are managed by a custom framework.

In version <= 2.23, errors were ignored or overwritten silently. However, this is no longer the case, resulting in a breaking change.

ndeloof commented 8 months ago

ok, just wanted to check I was not missing a hack-ish usecase :) A fix is on its way

paolomainardi commented 8 months ago

Thanks @ndeloof :)

freyjadomville commented 8 months ago

I'm also getting a (possibly similar) regression with the following as a single file, with the same error message. I can also build this successfully with 2.23.3. The uniqueness constraint here is too strict, as the two OpenSearch containers in this compose file are for different services:

version: '3.8'

services:
  postgres:
    image: postgres:15
    networks:
      - client-portal
    environment:
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_USER: ${DATABASE_USERNAME}
      POSTGRES_DB: ${DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    volumes:
      - portalpgdata:/var/lib/postgresql/data
    ports:
      - '${DATABASE_PORT}:5432'
    extra_hosts:
      - "host.docker.internal:host-gateway"

  pgadmin:
    extends:
      file: docker-compose.excel.yml
      service: pgadmin
    networks:
      - client-portal
    depends_on:
      - postgres
      - data-postgres
    extra_hosts:
      - "host.docker.internal:host-gateway"

  cms:
    build:
      dockerfile: cms/Dockerfile.dev
    volumes:
      - ./cms/config:/opt/app/config
      - ./cms/src:/opt/app/src
      - ./cms/package.json:/opt/package.json
      - portalcmsmedia:/opt/app/public/uploads
      - ./cms/types:/opt/app/types
      - /opt/app/src/plugins # comment this line out (and the one below) to do plugin development
    ports:
      - '${CMS_API_PORT}:1337'
      #- '8000:8000' # uncomment this line (and the one above) to do plugin development and connect to localhost:8000
    environment:
      HOST: '0.0.0.0'
      PORT: '1337'
      CMS_URL: 'http://localhost:${CMS_API_PORT}'
      APP_KEYS: <value>
      API_TOKEN_SALT: <value>
      ADMIN_JWT_SECRET: <value>
      JWT_SECRET: <value>
      TRANSFER_TOKEN_SALT: <value>
      DATABASE_CLIENT: 'postgres'
      DATABASE_HOST: 'postgres'
      DATABASE_PORT: '5432'
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      DATABASE_SSL: 'false'
      DATABASE_POOL_MIN: '0'
      NEXT_REVALIDATE_TOKEN: ${NEXT_REVALIDATE_TOKEN}
      AUTH_SERVICE_HOST: ${AUTH_SERVICE_HOST}
      AUTH_SERVICE_PORT: ${AUTH_SERVICE_PORT}
      STRAPI_SES_AWS_ACCESS_KEY_ID: ${STRAPI_SES_AWS_ACCESS_KEY_ID}
      STRAPI_SES_AWS_SECRET_ACCESS_KEY: ${STRAPI_SES_AWS_SECRET_ACCESS_KEY}
      IS_LOCAL: 'true'
      REPORTS_BUCKET: ${REPORTS_BUCKET}
      REPORTS_QUEUE: ${REPORTS_QUEUE}
      REGION: ${REGION}
      LOCALSTACK_ENDPOINT: 'http://host.docker.internal:4566'
      CMS_PREVIEW_TOKEN: ${CMS_PREVIEW_TOKEN}
      FRONTEND_URL: ${FRONTEND_URL}
      FRONTEND_HOST: ${FRONTEND_HOST}
      FRONTEND_PORT: ${FRONTEND_PORT}
    networks:
      - client-portal
    command: npm run develop # -- --watch-admin # uncomment this line for plugin development
    depends_on:
      - postgres
      - localstack
      - auth-service
    extra_hosts:
      - "host.docker.internal:host-gateway"

  frontend:
    build:
      dockerfile: frontend/Dockerfile
      target: develop
    volumes:
      - ./frontend:/usr/src/app
      - ./common-data-client:/usr/src/common-data-client
      - portalcmsmedia:/usr/src/app/public/uploads
      - /usr/src/app/.next
      - /usr/src/app/node_modules # Anonymous volume to prevent the container's node_modules from being overwritten by the local one
    ports:
      - '${FRONTEND_PORT}:${FRONTEND_PORT}'
      - 0.0.0.0:9232:9229
      - 0.0.0.0:9233:9230
    environment:
      NODE_ENV: 'development'
      WATCHPACK_POLLING: 'true'

      PORT: ${FRONTEND_PORT}
      AUTH_SERVICE_HOST: ${AUTH_SERVICE_HOST}
      AUTH_SERVICE_PORT: ${AUTH_SERVICE_PORT}
      PUBLISHED_DATA_API_URL: ${PUBLISHED_DATA_API_URL}
      THOUGHTSPOT_HOST: ${THOUGHTSPOT_HOST}
      THOUGHTSPOT_SECRET_KEY: ${THOUGHTSPOT_SECRET_KEY}
      THOUGHTSPOT_DATA_SOURCE: ${THOUGHTSPOT_DATA_SOURCE}
      THOUGHTSPOT_OVERRIDE_SUBSCRIPTION: ${THOUGHTSPOT_OVERRIDE_SUBSCRIPTION}
      NEXTAUTH_URL: 'http://localhost:${FRONTEND_PORT}'
      NEXT_REVALIDATE_TOKEN: ${NEXT_REVALIDATE_TOKEN}
      NEXTAUTH_SECRET: '0CaFHo7J6Q9xzYRPkjtLn5FEmgLl7Cp86MJTVjEzuUI='
      MIXPANEL_PROJECT_TOKEN: ${MIXPANEL_PROJECT_TOKEN}
    networks:
      - client-portal
    command: npm run dev
    depends_on:
      - auth-service
      - cms
      - published-data-api
    extra_hosts:
      - "host.docker.internal:host-gateway"

  auth-service:
    build:
      dockerfile: auth-service/Dockerfile
      target: development
    volumes:
      - ./auth-service:/usr/src/app
      - /usr/src/app/node_modules # Anonymous volume to prevent the container's node_modules from being overwritten by the local one
    ports:
      - '${AUTH_SERVICE_PORT}:${AUTH_SERVICE_PORT}'
      - '9231:9229' # debug port
    environment:
      LOCALSTACK_ENDPOINT: 'http://host.docker.internal:4566'
      IS_LOCAL: 'true'
    env_file:
      - .env
    networks:
      - client-portal
    command: npm run start:debug
    depends_on:
      opensearch:
        condition: service_healthy
    extra_hosts:
      - "host.docker.internal:host-gateway"

  opensearch:
    image: opensearchproject/opensearch:2.9.0
    container_name: portal-opensearch
    environment:
      - compatibility.override_main_response_version=true
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
      - "DISABLE_SECURITY_PLUGIN=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data:/usr/share/opensearch/data
      - ./auth-service/src/opensearch/synonyms:/usr/share/opensearch/config/analysis
    ports:
      - ${OPENSEARCH_PORT_1}:9200 
      - ${OPENSEARCH_PORT_2}:9600
    networks:
      - client-portal
    healthcheck:
      test: "curl -s http://opensearch:9200 > /dev/null || exit 1"
      interval: 2s
      timeout: 30s
      retries: 50
      start_period: 1s
    extra_hosts:
      - "host.docker.internal:host-gateway"

  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.9.0
    container_name: portal-opensearch-dashboards
    ports:
      - 0.0.0.0:${OPENSEARCH_DASHBOARDS_PORT}:5601
    expose:
      - "5601"
    environment:
      - OPENSEARCH_HOSTS=["http://opensearch:9200"]
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - client-portal
    extra_hosts:
      - "host.docker.internal:host-gateway"

  localstack:
    image: localstack/localstack:2.3.2
    hostname: localstack
    restart: always
    healthcheck:
      test: [ "CMD", "curl", "http://_localstack/health?reload" ]
    environment:
      - SERVICES=s3,sqs,sns
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_DEFAULT_REGION=eu-west-2
      - DOCKER_HOST=unix:///var/run/docker.sock
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "4566:4566"
    volumes:
      - localstack-data:/tmp/localstack:rw
      - ./setup/entrypoints/create_localstack_resources.sh:/etc/localstack/init/ready.d/init-aws.sh
    extra_hosts:
      - "host.docker.internal:host-gateway"

  published-data-api:
    extends:
      file: docker-compose.excel.yml
      service: published-data-api
    networks:
      - client-portal
    depends_on:
      - data-postgres

  data-postgres:
    extends:
      file: docker-compose.excel.yml
      service: data-postgres
    networks:
      - client-portal

  tracking-opensearch:
    extends:
      file: docker-compose.excel.yml
      service: opensearch
    networks:
      - client-portal

  tracking-opensearch-dashboards:
    extends:
      file: docker-compose.excel.yml
      service: opensearch-dashboards
    environment:
      - OPENSEARCH_HOSTS=["http://tracking-opensearch:9200"]
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
    networks:
      - client-portal

  mock-auth-server:
    extends:
      file: docker-compose.excel.yml
      service: mock-auth-server
    networks:
      - client-portal

networks:
  client-portal:
  published-data:
volumes:
  portalpgdata:
  portalpgadmin:
  portalcmsmedia:
  opensearch-data:
  localstack-data:
  pgdata:
  pgadmin:
  tracking-opensearch-data:
$ docker-compose up --build -V
validating /home/freyjadomville/git/project/docker-compose.yml: services.tracking-opensearch-dashboards.environment array items[1,3] must be unique

ndeloof commented 8 months ago

@freyjadomville the same PR will fix your issue, but in the meantime you can just remove the redefinition of environment in the tracking-opensearch-dashboards service declaration, as extends will already set it.
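
A variant of that workaround, sketched below (this is not the project's actual file), is to switch environment to map syntax: the map form is merged per key rather than validated as a unique array, so only the variable that actually changes needs restating:

```yaml
  tracking-opensearch-dashboards:
    extends:
      file: docker-compose.excel.yml
      service: opensearch-dashboards
    environment:
      # map form merges by key, so the uniqueness check on list items
      # does not apply; other variables are inherited via extends
      OPENSEARCH_HOSTS: '["http://tracking-opensearch:9200"]'
    networks:
      - client-portal
```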

logopk commented 8 months ago

I get the same for ports.

port 1514/tcp in both docker-compose.yml AND override.
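
The pattern, sketched with a hypothetical service name, looks like this; since 2.24.1 rejects the repeated mapping, declaring the port in only one of the two files avoids the error:

```yaml
# docker-compose.yml — sketch
services:
  logserver:               # hypothetical service name
    ports:
      - "1514:1514/tcp"

# docker-compose.override.yml — repeating the same "1514:1514/tcp"
# mapping here is what 2.24.1 rejects; keep it in one file only
```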

ndeloof commented 8 months ago

@logopk the same fix will apply. Any reason you use this duplicated declaration?

logopk commented 8 months ago

@ndeloof: I cannot tell you exactly. As this is my test environment, this port may have been in the override first and then later got into the regular prod compose file.

The same applies to the volume problem; there, however, it was also needed for the external: true declaration.

paolomainardi commented 8 months ago

Thanks a lot @ndeloof

radim-ek commented 8 months ago

So, is this the same thing?

validating /root/qfieldcloud/docker-compose.override.local.yml: services.app.environment array items[1,46] must be unique

version: '3.9'

services:

  app:
    build:
      args:
        - DEBUG_BUILD=1
    ports:
      # allow direct access without nginx
      - ${DJANGO_DEV_PORT}:8000
      - ${DEBUG_DEBUGPY_APP_PORT:-5678}:5678
    volumes:
      # mount the source for live reload
      - ./docker-app/qfieldcloud:/usr/src/app/qfieldcloud
    environment:
      DEBUG: 1
    command: python3 -m debugpy --listen 0.0.0.0:5678 manage.py runserver 0.0.0.0:8000
    depends_on:
      - db

  worker_wrapper:
    scale: ${QFIELDCLOUD_WORKER_REPLICAS}
    build:
      args:
        - DEBUG_BUILD=1
    ports:
      - ${DEBUG_DEBUGPY_WORKER_WRAPPER_PORT:-5679}:5679
    environment:
      QFIELDCLOUD_LIBQFIELDSYNC_VOLUME_PATH: ${QFIELDCLOUD_LIBQFIELDSYNC_VOLUME_PATH}
    volumes:
      # mount the source for live reload
      - ./docker-app/qfieldcloud:/usr/src/app/qfieldcloud
      - ./docker-app/worker_wrapper:/usr/src/app/worker_wrapper
    command: python3 -m debugpy --listen 0.0.0.0:5679 manage.py dequeue

  smtp4dev:
    image: rnwood/smtp4dev:v3
    restart: always
    ports:
      # Web interface
      - ${SMTP4DEV_WEB_PORT}:80
      # SMTP server
      - ${SMTP4DEV_SMTP_PORT}:25
      # IMAP
      - ${SMTP4DEV_IMAP_PORT}:143
    volumes:
        - smtp4dev_data:/smtp4dev
    environment:
      # Specifies the server hostname. Used in auto-generated TLS certificate if enabled.
      - ServerOptions__HostName=smtp4dev

  db:
    image: postgis/postgis:13-3.1-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - ${HOST_POSTGRES_PORT}:5432
    command: ["postgres", "-c", "log_statement=all", "-c", "log_destination=stderr"]

  memcached:
    ports:
      - "${MEMCACHED_PORT}:11211"

  qgis:
    volumes:
      # allow local development for `libqfieldsync` if host directory present; requires `PYTHONPATH=/libqfieldsync:${PYTHONPATH}`
      - ./docker-qgis/libqfieldsync:/libqfieldsync:ro

  geodb:
    image: postgis/postgis:12-3.0
    restart: unless-stopped
    volumes:
      - geodb_data:/var/lib/postgresql
    environment:
      POSTGRES_DB: ${GEODB_DB}
      POSTGRES_USER: ${GEODB_USER}
      POSTGRES_PASSWORD: ${GEODB_PASSWORD}
    ports:
      - ${GEODB_PORT}:5432

  minio:
    image: minio/minio:RELEASE.2023-04-07T05-28-58Z
    restart: unless-stopped
    volumes:
      - minio_data1:/data1
      - minio_data2:/data2
      - minio_data3:/data3
      - minio_data4:/data4
    environment:
      MINIO_ROOT_USER: ${STORAGE_ACCESS_KEY_ID}
      MINIO_ROOT_PASSWORD: ${STORAGE_SECRET_ACCESS_KEY}
      MINIO_BROWSER_REDIRECT_URL: http://${QFIELDCLOUD_HOST}:${MINIO_BROWSER_PORT}
    command: server /data{1...4} --console-address :9001
    healthcheck:
        test: [
          "CMD",
          "curl",
          "-A",
          "Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0",
          "-f",
          "${STORAGE_ENDPOINT_URL}/minio/index.html"
        ]
        interval: 5s
        timeout: 20s
        retries: 5
    ports:
      - ${MINIO_BROWSER_PORT}:9001
      - ${MINIO_API_PORT}:9000

  createbuckets:
    image: minio/mc
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      /usr/bin/mc config host add myminio ${STORAGE_ENDPOINT_URL} ${STORAGE_ACCESS_KEY_ID} ${STORAGE_SECRET_ACCESS_KEY};
      /usr/bin/mc mb myminio/${STORAGE_BUCKET_NAME};
      /usr/bin/mc policy set download myminio/${STORAGE_BUCKET_NAME}/users;
      /usr/bin/mc version enable myminio/${STORAGE_BUCKET_NAME};
      exit 0;
      "

volumes:
  postgres_data:
  geodb_data:
  smtp4dev_data:
  minio_data1:
  minio_data2:
  minio_data3:
  minio_data4:

ErjanGavalji commented 8 months ago

Hello all,

This is just in case this scenario was not covered by PR 533. Here is a simple case where docker compose complains about the environment setting (services.myservice.environment array items[0,1] must be unique):

main.yml

services:
  myservice:
    image: alpine:latest
    environment: 
      - MYVAR=MyVarValue

secondary.yml:

name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice

command:

docker compose -f main.yml -f secondary.yml up

Edit: If that matters, I removed everything just to make the scenario simple. I need the second file to declare volumes to the service that are not always needed.
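
For context, a sketch of why this repro trips the validator: extends copies MYVAR into the service defined by secondary.yml, and merging -f main.yml underneath it contributes the same entry again, so the effective environment array (illustrative, not actual docker compose config output) contains a duplicate:

```yaml
# illustrative merged state that fails the uniqueness check
services:
  myservice:
    image: alpine:latest
    environment:
      - MYVAR=MyVarValue   # from main.yml
      - MYVAR=MyVarValue   # same entry pulled in again via extends
```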

danstewart commented 8 months ago

This is working fine with v2.24.2 - the duplicate environment entries is working too, so the fix did cover that.

rappzons commented 8 months ago

My company runs Docker Desktop on Mac, and we got affected by this bug without even changing the Docker Desktop version; how is that possible? The latest version used by Docker Desktop according to their release notes is v2.23.3: https://docs.docker.com/desktop/release-notes/

ndeloof commented 8 months ago

@rappzons seems like you got docker compose installed manually; check the docker desktop menu: (screenshot)

rappzons commented 8 months ago

Update: This was our docker-in-docker setup that had downloaded the latest version of docker-compose, sorry for the confusion.

Thanks for the response @ndeloof . Seems like I don't have that option. Perhaps because I've got the free version. image

It really looks like I'm running 2.23.3 of compose.

image

image

Perhaps this is not the best thread for this :D but I found it really weird that I'm affected by this issue.

ErjanGavalji commented 7 months ago

This is working fine with v2.24.2 - the duplicate environment entries is working too, so the fix did cover that.

Right. The issue continues appearing with links and profiles though. Here is the repro:

main.yml:

name: my-project
services:
  myfirstservice:
    image: alpine:latest
  myservice:
    image: alpine:latest
    environment:
      - MYVAR=MyVarValue
    links:
      - myfirstservice
    profiles:
      - profile1
      - profile2

secondary.yml:

name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice

command:

docker compose -f main.yml -f secondary.yml up

matanmarciano commented 7 months ago

> This is working fine with v2.24.2 - the duplicate environment entries is working too, so the fix did cover that.

> Right. The issue continues appearing with links and profiles though. (same main.yml / secondary.yml repro as above)

Yeah, also here...

solarlodge commented 7 months ago

Here the very same issue with links and tmpfs...

giorgiabosello commented 7 months ago

Even with extra_hosts.

RTahiiev commented 7 months ago

Have same issue with additional_contexts

piurafunk commented 7 months ago

Same issues with cap_add.

$ docker compose config
validating <...>/docker-compose.override.yml: services.docker.host.internal.cap_add array items[0,2] must be unique

solarlodge commented 7 months ago

How many "same"s does it take to have this issue re-opened? :) Or shall we raise another issue?

ndeloof commented 7 months ago

@solarlodge we try to address those on https://github.com/compose-spec/compose-go/pull/548

ErjanGavalji commented 7 months ago

@solarlodge we try to address those on compose-spec/compose-go#548

Respect! 👏

rantanlan commented 7 months ago

same for me with aliases ...network.aliases array items[0,1] must be unique

sprankhub commented 7 months ago

In my case, it is services.nginx.labels array items[2,7] must be unique, and it still happens with Docker Compose 2.24.2... Shouldn't it be fixed in this version?

glours commented 7 months ago

Hello everyone 👋

We need your help to verify that we've addressed most of the problems you faced with the v2.24.x releases.

I created a PR with the current fixes for the merge issues; can you try it and let us know if you still hit problems we haven't already solved? Unfortunately you'll have to download the binaries, choose the right one for your platform, and add it to your ~/.docker/cli-plugins/ directory with the name docker-compose.

We already know there is a problem with networks.config.subnet; a fix is in progress for this one.

ErjanGavalji commented 7 months ago

@glours, Thanks!

I confirm our full scenario works properly with the build you provided (Docker Compose version 41161da)

solarlodge commented 7 months ago

The issue with the services.*.labels array seems to be gone. The services.*.tmpfs array still gives the error msg validating *override*.yml: services.*.tmpfs array items[0,3] must be unique. Tested with Docker Compose version 41161da.

glours commented 7 months ago

@solarlodge I'm looking at this right now, thanks for your time testing it and your feedback 🙏

glours commented 7 months ago

@solarlodge and all the others having issue with tmpfs, those binaries should fix your issue

solarlodge commented 7 months ago

@solarlodge and all the others having issue with tmpfs, those binaries should fix your issue

@glours I can confirm that Docker Compose version d4fb179 does indeed fix all the issues we had with the tmpfs array so far. Every other aspect of our docker compose override setup seems to work again. Many many thanks for your precious and highly appreciated work :pray:

matanmarciano commented 7 months ago

@glours is there any estimation where that fix will be released?

ihor-sviziev commented 7 months ago

Just upgraded to Docker Desktop 4.27.0 (135262) on my Mac, and now I'm having this issue. @glours, the binaries from you don't pass developer verification on macOS for some reason (docker-compose-darwin-aarch64) :( (screenshot)

ndeloof commented 7 months ago

@ihor-sviziev this is a local build, not signed/certified. You need to go to System Preferences > Security to approve running such "unsecure" software - or wait for the next release delivered by Docker Desktop :)

glours commented 7 months ago

@ihor-sviziev yes, because the signature of the binary for macOS is done as part of the Docker Desktop release, you have to approve it manually in System Settings > Privacy & Security. You could use those binaries, which use the v2.0.0-rc.3 release of compose-go.

ihor-sviziev commented 7 months ago

@glours I can confirm, the fixed version fixes this issue for me.

glours commented 7 months ago

@ihor-sviziev thanks for the feedback

matanmarciano commented 7 months ago

@glours should the new fixes be included in https://github.com/docker/compose/releases/tag/v2.24.4?

glours commented 7 months ago

@matanmarciano yes

matanmarciano commented 7 months ago

@glours it still hasn't been released to https://download.docker.com/linux/ubuntu/dists/focal/pool/stable/amd64/

glours commented 7 months ago

@matanmarciano no indeed, a release of https://github.com/docker/docker-ce-packaging is planned later this week

ihor-sviziev commented 7 months ago

@glours, I just received the Docker Desktop update to v4.27.1, but unfortunately, the fixed version of docker-compose wasn't included for some reason. When can we expect it?

glours commented 7 months ago

@ihor-sviziev Yes, they decided to focus on the security fixes for this release; the next patch release of Docker Desktop is planned for next week... Sorry for the delay. Anyway, you can manually add the binary of Compose v2.24.5 to your ~/.docker/cli-plugins directory with the name docker-compose

indjeto commented 7 months ago

I had the services.php.extra_hosts array items[0,1] must be unique error, and after upgrading docker-compose-plugin from 2.24.2-1~ubuntu.20.04~focal to 2.24.5-1~ubuntu.20.04~focal the problem is gone.

ErjanGavalji commented 7 months ago

Hello again.

I'm afraid there is now a port conflict error when the containers start running. Try this:

main.yml:

name: my-project
services:
  myfirstservice:
    image: node:latest
  myservice:
    image: node:latest
    environment:
      - MYVAR=MyVarValue
    ports:
      - 8080:8080
    links:
      - myfirstservice
    command:
      - node
      - -e
      - "require('http').createServer((req, res) => res.end(`Hello World! $${new Date()}`)).listen(8080);"

secondary.yml:

name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice

If you run the command of docker compose -f main.yml, the service will start as expected.

If you run the command by using both the configuration files, docker compose -f main.yml -f secondary.yml, you will get the error of

[+] Running 1/0
 ✔ Container my-project-myfirstservice-1  Created                                                                                      0.0s 
Attaching to myfirstservice-1, myservice-1
Error response from daemon: driver failed programming external connectivity on endpoint my-project-myservice-1 (6e5afc6bd5546dc89feb0f4a019ba2f783e6b39d535fa4ed36a1eefb67664621): Bind for 0.0.0.0:8080 failed: port is already allocated

Versions used:

Docker version 25.0.2, build 29cf629
Docker Compose version v2.24.5
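
One way to avoid the doubled binding, sketched under the assumption that the Compose version in use supports the !override merge tag documented for recent v2.24.x releases, is to have the extending file replace the inherited ports list instead of appending to it:

```yaml
# secondary.yml — sketch, assuming !override support
name: my-project
services:
  myservice:
    extends:
      file: ${PWD}/main.yml
      service: myservice
    ports: !override        # replace the merged list rather than append
      - 8080:8080
```
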

visuallization commented 7 months ago

Docker Desktop 4.27.2 luckily fixed the issues for us!

k1w1m8 commented 7 months ago

Is this released? No milestone assigned...

glours commented 7 months ago

@k1w1m8 those fixes have been released in Compose v2.24.4