raskitoma / pls_admin

A tool designed to manage and control your withdrawal history
https://raskitoma.com

plsworker error: Temporary failure in name resolution... #9

Open terry-sydaus opened 1 month ago

terry-sydaus commented 1 month ago

Hi,

Hope you are well!

Back again after a while. I am now interested in using your latest fork and migrating my database over from the older fork I was using, which did not have the celery/redis functionality.

To get the docker containers up and running without major plsworker / plsbeat errors about not being able to connect to the postgres database, I had to add network_mode: "host" to every container section of the docker-compose.yml file except the redis container.

With that change the containers stay up and no longer restart constantly, as they do when I leave out network_mode: "host". However, I am still seeing the following error every 32 seconds:

plsworker  | [2024-07-29 20:47:23,140: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379//: Error -3 connecting to redis:6379. Temporary failure in name resolution..
plsworker  | Trying again in 32.00 seconds... (16/100)
plsworker  | 
plsworker  | [2024-07-29 20:47:55,173: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379//: Error -3 connecting to redis:6379. Temporary failure in name resolution..
plsworker  | Trying again in 32.00 seconds... (16/100)

Are these errors "normal"?

If not, can you please offer some clues as to why they are occurring?

My docker-compose.yml file is shown below for completeness:

services:
  adminpls:
    build: .
    image: adminpls:v1
    container_name: adminpls
    hostname: adminpls
    restart: always
    ports:
      - 8123:5000
    environment:
      - TZ=America/Guayaquil
      - APP_SETTINGS_MODULE=config.prod
    network_mode: host
    # networks setup depends on intercommunication with other container stacks; see the declaration near the end of this file. This is optional:
    # it's required if you set up a reverse proxy like nginx or connect to a DB on the same machine
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"

  plsworker:
    build: .
    image: adminpls:v1
    container_name: plsworker
    hostname: plsworker
    restart: always
    environment:
      - TZ=America/Guayaquil
      - APP_SETTINGS_MODULE=config.prod
    network_mode: host
    command: celery -A app.scheduler.celery worker --loglevel=info -E
    depends_on:
      - redis
      - adminpls
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"

  plsbeat:
    build: .
    image: adminpls:v1
    container_name: plsbeat
    hostname: plsbeat
    restart: always
    environment:
      - TZ=America/Guayaquil
      - APP_SETTINGS_MODULE=config.prod
    network_mode: host
    command: celery -A app.scheduler.celery beat --loglevel=info
    depends_on:
      - redis
      - adminpls
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"

  redis:
    image: redis:latest
    container_name: redis
    hostname: redis
    restart: always
    #network_mode: "host"
    ports:
      - 6379:6379
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"

# networks:
#   master_network:
#     external: True
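
For diagnosis, here is a minimal check that can be run on the host to tell whether this is a DNS failure or redis actually being unreachable. It is a plain Python sketch; the only assumption is the 6379:6379 port mapping on the redis container shown above.

import socket

# With network_mode: "host" the app containers use the host's resolver,
# so the Compose service name "redis" may simply not resolve there.
try:
    socket.getaddrinfo("redis", 6379)
    print("'redis' resolves")
except socket.gaierror as exc:
    print(f"'redis' does not resolve: {exc}")

# The port published by the redis container should still be reachable on localhost.
with socket.create_connection(("localhost", 6379), timeout=2) as conn:
    conn.sendall(b"PING\r\n")     # Redis accepts inline commands
    print(conn.recv(64))          # b'+PONG\r\n' when redis is up
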
terry-sydaus commented 1 month ago

Update:

I had to change the redis configuration variables in my prod.py file, as follows:

# base broker config
REDIS_HOST = 'localhost'
REDIS_BROKER_URL = 'redis://localhost:6379'
REDIS_RESULT_BACKEND = 'redis://localhost:6379'

Referencing "redis" in my app prevented the plsworker and plsbeat applications from being able to interact with the redis instance. There are now no more errors as described above and all is good. Even received an OK for the ping in the redis console that I have accessed by the admin web interface.

I just need to check now that the task scheduler works for some test tasks before I attempt to port my database over from the old "scheduler" version of the app that I originally forked from your repository.
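
As a quick smoke test for that, something along these lines should work. It is a sketch only: it assumes app.scheduler.celery resolves to the Celery instance (as in the worker command in the compose file) and that it is importable from the project root; adjust the import if celery is a module rather than an attribute.

from app.scheduler import celery  # the app passed to `celery -A app.scheduler.celery worker`

# Broadcast a ping; the plsworker node should answer with 'pong'.
print(celery.control.ping(timeout=2.0))

inspector = celery.control.inspect(timeout=2.0)
print(inspector.registered())  # task names each worker knows about
print(inspector.scheduled())   # ETA/countdown tasks currently held by the worker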

terry-sydaus commented 1 month ago

I got the scheduler up and running with a new task that I added. This required manually modifying the underlying postgresql tables, because the scheduler-reset command failed (#10). I am very pleased to have the celery/redis/flask stack playing nicely with docker on my host, as it means I no longer need to worry about the schedule package/library being deprecated.

Thank you raskitoma.