coollabsio / coolify

An open-source & self-hostable Heroku / Netlify / Vercel alternative.
https://coolify.io
Apache License 2.0

[Bug]: Source git branch doesn't update #2895

Open lorenzomigliorero opened 3 months ago

lorenzomigliorero commented 3 months ago

Description

The source git branch doesn't update on a public repository with the docker-compose build pack. I didn't test other build packs/resource types.

Minimal Reproduction

https://www.loom.com/share/46359b1ba1864ac3a8cad54c64e04de0

Exception or Error

No response

Version

317

Cloud?

SapphSky commented 3 months ago

I've encountered this issue as well just now on the cloud version.

SapphSky commented 3 months ago

Hey, following up after doing a bit of trial and error. I got my scenario to work by pasting the branch tree URL as the repo URL when creating the resource. In your case, it would be https://github.com/verdaccio/verdaccio/tree/master. The tooltip on the Repository URL field explains how the branch is selected this way. Hope this helps!

replete commented 3 months ago

Since v4 beta .317 added Preserve Repository During Deployment, it works once and then never seems to update the git repo, so the webhook rebuilds are running an old version of the app. Presumably it is not clearing the mounted volume. I'm giving up on volumes for a bit until this works as expected.
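
A hedged workaround sketch for that theory: remove the preserved volume by hand so the next deploy has to clone fresh. The volume name below is a placeholder; I'd list the volumes first and pick out the one holding the repo.

# find the volume that holds the preserved checkout, then remove it
docker volume ls
docker volume rm <stale-repo-volume>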

Nneji123 commented 1 month ago

Has this been fixed @replete ? Or what did you do to work around this issue? I've encountered the same issue with a multi-service deployment where only one of the services is updated. The other two services do not get updated.

Here's my compose file as an example:

services:
  web:
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
    container_name: trendibble-api
    command: scripts/start_server.sh
    ports:
      - ${WEB_PORT:-8000}:8000
    environment:
      ENVIRONMENT: ${ENVIRONMENT}
      RABBITMQ_USER: ${RABBITMQ_USER:-user}
      RABBITMQ_PASS: ${RABBITMQ_PASS:-password}
      RABBITMQ_HOST: ${RABBITMQ_HOST:-rabbitmq}
      SERVICE_TYPE: "web"
    env_file: .env
    restart: on-failure
    volumes:
      - ./:/app
      - ./logs:/app/logs

  redis:
    image: redis:7.2.4-alpine3.19
    ports:
      - "${REDIS_PORT:-6379}:6379"
    restart: always
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "${RABBITMQ_PORT:-5672}:5672"
      - "${RABBITMQ_MANAGEMENT_PORT:-15672}:15672"
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-user}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-password}
    env_file:
      - .env
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "status"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 40s

  scheduler:
    container_name: trendibble-api-scheduler
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
    command: scripts/start_celery_beat.sh
    volumes:
      - .:/app
    depends_on:
      - rabbitmq
    environment:
      ENVIRONMENT: ${ENVIRONMENT}
      RABBITMQ_USER: ${RABBITMQ_USER:-user}
      RABBITMQ_PASS: ${RABBITMQ_PASS:-password}
      RABBITMQ_HOST: ${RABBITMQ_HOST:-rabbitmq}
      SERVICE_TYPE: "scheduler"
    restart: on-failure

  worker:
    container_name: trendibble-api-worker
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
    command: scripts/start_celery_worker.sh
    volumes:
      - .:/app
    depends_on:
      - rabbitmq
      - mjml-server
    environment:
      ENVIRONMENT: ${ENVIRONMENT}
      RABBITMQ_USER: ${RABBITMQ_USER:-user}
      RABBITMQ_PASS: ${RABBITMQ_PASS:-password}
      RABBITMQ_HOST: ${RABBITMQ_HOST:-rabbitmq}
      SERVICE_TYPE: "worker"
    env_file: .env
    restart: on-failure

  mjml-server:
    image: danihodovic/mjml-server:4.15.3
    ports:
      - "${MJML_SERVER_PORT:-15500}:15500"
    restart: unless-stopped

volumes:
  data:

replete commented 1 month ago

@Nneji123 no idea, I moved on because it was not reliable, and I will revisit in a year or so. It's a shame, because this workflow is the best feature IMO.

ejscheepers commented 3 weeks ago

I am also experiencing this issue: a Docker Compose deployment with one service that is not updating. If I copy my master branch to a "test" branch and deploy from that one, it works (at least the first time).
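
For reference, the branch-copy workaround is just this (a sketch; the branch names are whatever you use):

# create a "test" branch from master, push it, then point the resource at it
git checkout -b test master
git push origin test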

VaarunSinha commented 2 weeks ago

Is there a workaround for this? I am using it via the GitHub App.

renanmoretto commented 3 days ago

I'm also getting this bug with the GitHub App and with deploy keys.

Any updates on this?

Also, how do I manually force a git pull on a project? I think that would be the simplest/easiest fix for now.
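
In the meantime, a hedged sketch of forcing the pull by hand over SSH. The path is an assumption: Coolify keeps its data under /data/coolify, but the exact per-app directory may differ on your server, so check with ls first.

# on the server; <app-uuid> and <branch> are placeholders
cd /data/coolify/applications/<app-uuid>
git fetch origin
git reset --hard origin/<branch>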

sdezza commented 2 days ago

Same issue here. Trying to find a solution.

simonjcarr commented 2 days ago

I have the same issue. I'm going to see if deleting the app and recreating it works. Not brilliant, but everything else is so good I don't want to throw the baby out with the bathwater, so I will wait for the developer to fix it. As a matter of interest, my issue is with a project running on a second server; it would be interesting to know if this is common to anyone else having this issue.

renanmoretto commented 2 days ago

Same issue here; I tried everything and gave up.

Mine was deploying a private repository with a docker-compose file.

I tried with the GitHub App and with deploy keys, and neither worked. Basically, Coolify doesn't fetch new commits/update the branch on the VPS, no matter what. It was random: some deploys/redeploys fetched new updates, but then it got stuck.

I talked about it on Discord; a few people tried to help, but nothing worked, so I gave up.

If I had a button to manually force a branch update / git pull I would be fine, but I don't see any option to do that.

@andrasbacsai sorry for tagging, but is this being looked into? This bug has been around for months with no sign of being fixed. I think it's a pretty important bug, because it makes Coolify basically unusable.

djsisson commented 2 days ago

A lot of the time when I see this, the compose file has a volume mount, which overwrites what was built into the container.

So, first question: does yours have a volume mount?
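
An easy way to check (a sketch; replace <container> with the name shown by docker ps):

docker inspect <container> --format '{{json .Mounts}}'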

ejscheepers commented 2 days ago

When I had this issue a few weeks ago, it ended up being a leftover container on the server built from the same branch (also a Docker Compose deployment), using the same ports, etc. So when you deploy, your changes are pushed, but the container being "served" is the old container, so you don't see any changes.

I fixed it by running docker ps in the terminal, finding the duplicate containers, and deleting them.

Mine was caused by restoring a backup: Coolify didn't "know" the old container still existed and redeployed alongside it.

So yeah, maybe check if you have any "unmanaged resources" on your server.
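
In other words, something like this (a sketch; the container name is a placeholder):

# look for two containers serving the same app/ports, then remove the stale one
docker ps
docker rm -f <old-duplicate-container>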

sdezza commented 2 days ago

After checking the logs, the GitHub commit ID was correct, so the problem didn't come from there.

The solution: delete the Coolify resource and recreate it. Before deleting, remember to save your environment variables. Then recreate the resource and deploy. This forced Coolify to pick up the latest code and build a new image without using old caches (which I had deleted...).
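
Related: if stale build caches are suspected, they can also be cleared on the server directly. Note that these remove ALL unused build cache and unused images on the host, so use with care:

docker builder prune -af
docker image prune -af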

djsisson commented 2 days ago

@sdezza did you have a volume mount like .:/app ?

sdezza commented 2 days ago

yes:

services:
  django:
    build:
      context: .
    command: ["/usr/src/app/entrypoint.sh"]
    volumes:
      - .:/usr/src/app
    ports:
      - "8000:8000"

  celery-worker:
    build:
      context: .
    command: celery -A core worker --loglevel=info
    depends_on:
      - django

  celery-beat:
    build:
      context: .
    command: celery -A core beat --loglevel=info
    depends_on:
      - django

  flower:
    image: mher/flower:latest
    ports:
      - "5555:5555"
    depends_on:
      - celery-worker
    environment:
      - FLOWER_BASIC_AUTH=${FLOWER_BASIC_AUTH}
    volumes:
      - flower_data:/usr/src/app

volumes:
  flower_data:

I deleted all the volumes (via the UI and with the docker command); same issue.

djsisson commented 2 days ago

@sdezza you can't have this:

.:/usr/src/app

It just overwrites what was built.

sdezza commented 2 days ago

@djsisson makes sense! Should it be like the flower mount?

djsisson commented 2 days ago

@sdezza no, you do not need any mounts here. If you need some files from your repo, just copy them in inside your Dockerfile; otherwise, all you're doing is overwriting what has been built.

If there are static files you need, you can mount a directory within /usr/src/app/staticdir.

This kind of mount is usually used during dev, where it points at your local repo; for prod you should not have such mounts.
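
A quick sanity check (a sketch; the image name is a placeholder for whatever Coolify built, see docker images): run the image with no mounts at all and confirm the code inside it is current.

# list the app directory baked into the image, bypassing any bind mounts
docker run --rm <built-image> ls -la /usr/src/app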