mcuadros / ofelia

A docker job scheduler (aka. crontab for docker)
MIT License

Jobs don't respect the container #156

avataru opened this issue 3 years ago

I have two almost identical projects. When I start the second project, its Ofelia instance also picks up and runs the jobs from the first project.

first project's docker-compose.yml

version: '3'
services:

  app:
    build:
      context: .
      dockerfile: ./php/Dockerfile
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./src:/var/www
    networks:
      - frontend
      - backend
  web:
    image: nginx:stable-alpine
container_name: web
    restart: unless-stopped
    tty: true
    [ ... more settings ... ]
  scheduler:
    image: mcuadros/ofelia:latest
    container_name: scheduler
    depends_on:
      - web
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./logs/scheduler:/var/log/scheduler
    labels:
      ofelia.job-exec.myjob.container: "app"
      ofelia.job-exec.myjob.schedule: "@every 1m"
      ofelia.job-exec.myjob.command: "command"
    networks:
      - backend
[ ... more containers ... ]

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

second project's docker-compose.yml

version: '3'
services:

  second-app:
    build:
      context: .
      dockerfile: ./php/Dockerfile
    container_name: second-app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: second-app
      SERVICE_TAGS: second-dev
    working_dir: /var/www
    volumes:
      - ./src:/var/www
    networks:
      - second-frontend
      - second-backend
  web:
    image: nginx:stable-alpine
container_name: second-web
    restart: unless-stopped
    tty: true
    [ ... more settings ... ]
  scheduler:
    image: mcuadros/ofelia:latest
    container_name: scheduler
    depends_on:
      - web  # depends_on references the service name (web), not container_name (second-web)
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./logs/scheduler:/var/log/scheduler
    labels:
      ofelia.job-exec.myotherjob.container: "second-app"
      ofelia.job-exec.myotherjob.schedule: "@every 1m"
      ofelia.job-exec.myotherjob.command: "command"
    networks:
      - second-backend
[ ... more containers ... ]

networks:
  second-frontend:
    driver: bridge
  second-backend:
    driver: bridge

Docker's log for the first project's scheduler container will look like this:

2021-07-08T14:25:28.382871261Z scheduler.go:35 ▶ NOTICE New job registered "myjob" - "command" - "@every 1m"
2021-07-08T14:25:28.383118394Z scheduler.go:55 ▶ DEBUG Starting scheduler with 1 jobs

While the second project's scheduler logs:

2021-07-08T14:44:49.606306404Z scheduler.go:35 ▶ NOTICE New job registered "myjob" - "command" - "@every 1m"
2021-07-08T14:44:49.606356396Z scheduler.go:35 ▶ NOTICE New job registered "myotherjob" - "command" - "@every 1m"
2021-07-08T14:44:49.606362133Z scheduler.go:55 ▶ DEBUG Starting scheduler with 2 jobs

What do I need to set so this doesn't happen and each Ofelia container is only concerned with its own project?

Thank you

hemenkapadia commented 3 years ago

Hi @avataru,

Since you are using job labels (https://github.com/mcuadros/ofelia/blob/master/docs/jobs.md), you should apply those labels to the target container when using job-exec. In your first project, that means the Ofelia labels should go on the app service definition instead of the scheduler service definition.

Additionally, when the labels are applied to the target container itself, you do not need to provide the target container name (https://github.com/mcuadros/ofelia/blob/master/docs/jobs.md#job-exec).
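For reference, here is roughly what that would look like for the first project's app service (a sketch based on the job-exec docs, not tested against this setup; note the target container also needs the ofelia.enabled label for Ofelia to pick it up, and "command" is the placeholder from the original compose file):

```yaml
  app:
    build:
      context: .
      dockerfile: ./php/Dockerfile
    container_name: app
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.myjob.schedule: "@every 1m"
      ofelia.job-exec.myjob.command: "command"
    # ... rest of the app service unchanged; the scheduler service
    # keeps only the `daemon --docker` command and the socket mount ...
```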

avataru commented 3 years ago

Already tried that and it didn't make any difference. The second Ofelia container's log still shows the jobs from both containers (myjob and myotherjob), even when they are set on their respective containers.

brohon commented 3 years ago

Any update on this? I've also tried what you suggested, @hemenkapadia, and the jobs are still being picked up across projects.

ddebie commented 3 years ago

Same issue here

ddebie commented 3 years ago

Below is my workaround. On startup it writes the config from an environment variable to a file and runs Ofelia with that instead of using labels. I wanted to keep the schedule in the same docker-compose file rather than maintaining a separate .ini file.

ofelia:
    image: mcuadros/ofelia:v0.3.4
    entrypoint: []
    command: /bin/sh -c 'echo "$$OFELIA_CONFIG" >> /tmp/ofelia.ini && ofelia daemon --config=/tmp/ofelia.ini'
    volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
        OFELIA_CONFIG: |-
                [job-run "some_job"]
                schedule = 0 42 * * * *
                delete = false
                container = some_container

avataru commented 3 years ago

Thanks @ddebie, I will try your workaround

sjiampojamarn commented 2 years ago

Any update on this? I have exactly the same issue. It's not only the logging: the second instance actually runs both jobs, instead of respecting the container that carries the 'ofelia.job-exec' labels.

sjiampojamarn commented 2 years ago

Thanks @ddebie. Solved my problem with a slight modification, using job-exec so that the job runs inside the container brought up by docker-compose.

            OFELIA_CONFIG: |-
                [job-exec "remove_old_video"]
                schedule = @every 24h
                container = record-video
                command = find /root/video/ -type f -mtime +60 -exec rm -f {} +

CybotTM commented 2 years ago

This is by design: you only need one Ofelia instance running per host, and it will take care of every container/project that has Ofelia labels. There is no need to run multiple Ofelia instances per host. It is the same as with reverse proxies (Traefik, jwilder's nginx-proxy, ...): you run one per host, not one per project.
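To illustrate that layout (a sketch, with illustrative file and service names): one standalone scheduler for the whole host, and only labels in each project's compose file:

```yaml
# e.g. ofelia/docker-compose.yml -- the single host-wide scheduler
services:
  ofelia:
    image: mcuadros/ofelia:latest
    restart: unless-stopped
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

# Each project then only labels its own target container, e.g.:
#   app:
#     labels:
#       ofelia.enabled: "true"
#       ofelia.job-exec.myjob.schedule: "@every 1m"
#       ofelia.job-exec.myjob.command: "command"
```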