alerta / docker-alerta

Run alerta in a docker container
https://hub.docker.com/r/alerta/alerta-web
MIT License

Consistent Missing or Invalid config.json file Error #145

Closed · BenB196 closed this issue 5 years ago

BenB196 commented 5 years ago

Issue Summary

I am trying to set up Alerta in Docker, but I consistently get the following error when going to the web UI:

ERROR: Failed to connect to Alerta API due to missing or invalid config.json file.

Please confirm a config.json file exists, contains an "endpoint" setting and is in the same directory as the application index.html file.

Environment

Here is my setup:

I have a Linux server that has Docker + Docker-compose running on it. The Linux server is in DNS as dockerhost01.domain.com

Docker-compose for Alerta and Postgres:

version: '3.7'
services:
  alertadb01:
    image: postgres:latest
    container_name: alertadb01
    environment:
      POSTGRES_DB: monitoring
      POSTGRES_USER: alertaUser
      POSTGRES_PASSWORD: superStrongPassword123
    volumes:
      - /alertadb/data:/var/lib/postgresql/data:rw
    networks:
      - alerta
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "1"
  alerta01:
    depends_on:
      - alertmanager01
      - alertadb01
    image: alerta/alerta-web:latest
    container_name: alerta01
    volumes:
      - /alertadb/conf/alertad.conf:/app/alertad.conf
      - /alertadb/conf/alertad.conf:/app/alerta.conf
      - /alertadb/conf/config.json:/web/config.json
    environment:
      - DATABASE_URL=postgres://alertaUser:superStrongPassword123@alertadb01:5432/monitoring
    ports:
      - "8082:8080"
    networks:
      - alerta
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "1"
networks:
  alerta:
    driver: bridge

alertad.conf

DEBUG = True
SECRET = "^Bpa%i8_nCAc8fI4l9)nhn2EG2!@Gdwad"
AUTH_REQUIRED = False

SEVERITY_MAP = {
    'fatal': 0,
    'critical': 1,
    'major': 2,
    'minor': 3,
    'warning': 4,
    'indeterminate': 5,
    'cleared': 5,
    'normal': 5,
    'ok': 5,
    'informational': 6,
    'debug': 7,
    'trace': 8,
    'unknown': 9
}
DEFAULT_NORMAL_SEVERITY = 'normal', 'ok', 'cleared'
DEFAULT_PREVIOUS_SEVERITY = 'indeterminate'

PLUGINS = ['reject', 'blackout', 'prometheus', 'normalise']
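Note: the comma-separated assignment `DEFAULT_NORMAL_SEVERITY = 'normal', 'ok', 'cleared'` creates a Python tuple rather than a string, which is presumably why the server logs below include the line `WARNING:flask.app:('normal', 'ok', 'cleared')`. If the setting expects a single severity string (an assumption, not confirmed in this thread), the difference looks like this:

```python
# The bare commas make this a tuple, not a string:
DEFAULT_NORMAL_SEVERITY = 'normal', 'ok', 'cleared'
assert isinstance(DEFAULT_NORMAL_SEVERITY, tuple)

# A single-severity setting would instead be written as:
DEFAULT_NORMAL_SEVERITY = 'normal'
assert isinstance(DEFAULT_NORMAL_SEVERITY, str)
```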

alerta.conf

[DEFAULT]
sslverify = no
output = psql
endpoint = http://dockerhost01.domain.com:8082/api
timezone = America/New_York

config.json

{"endpoint": "http://dockerhost01.domain.com:8082/api"}
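Since the web UI fetches `/config.json` at page load and parses it client-side, one quick sanity check (a sketch using Python's standard `json` module) is to confirm the file is strict JSON, with double quotes and no trailing commas, and contains an `endpoint` key:

```python
import json

# Validate the config.json contents shown above before mounting the file.
raw = '{"endpoint": "http://dockerhost01.domain.com:8082/api"}'
cfg = json.loads(raw)  # raises json.JSONDecodeError if not strict JSON
assert "endpoint" in cfg
print(cfg["endpoint"])  # http://dockerhost01.domain.com:8082/api
```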

Alerta runs fine, however when I go to the web ui at: http://dockerhost01.domain.com:8082 I get the error mentioned above.

I have tried many different configurations and looked through the docs, but I must be missing something. Any help would be appreciated in getting this to work as Alerta seems like a really cool and useful tool.

satterly commented 5 years ago

What happens if you don't define a custom config.json file? The default one should work without you defining one.

BenB196 commented 5 years ago

@satterly commenting out:

      - /alertadb/conf/config.json:/web/config.json

And rebuilding Alerta results in the same error.

Here are the logs when the error occurs:

2019-05-30 20:47:45,198 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET / HTTP/1.1" \304 \0 "\-" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /index.html | > GET / HTTP/1.1

2019-05-30 20:47:45,262 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /css/chunk-vendors.dd687e16.css HTTP/1.1" \200 \193158 "\http://dockerhost01.domain.com:8082/" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /css/chunk-vendors.dd687e16.css | > GET /css/chunk-vendors.dd687e16.css HTTP/1.1

2019-05-30 20:47:45,331 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /css/app.8424264f.css HTTP/1.1" \304 \0 "\http://dockerhost01.domain.com:8082/" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /css/app.8424264f.css | > GET /css/app.8424264f.css HTTP/1.1

2019-05-30 20:47:45,440 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /js/chunk-vendors.a8ef6473.js HTTP/1.1" \304 \0 "\http://dockerhost01.domain.com:8082/" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /js/chunk-vendors.a8ef6473.js | > GET /js/chunk-vendors.a8ef6473.js HTTP/1.1

2019-05-30 20:47:45,492 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /js/app.6912cdcc.js HTTP/1.1" \304 \0 "\http://dockerhost01.domain.com:8082/" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /js/app.6912cdcc.js | > GET /js/app.6912cdcc.js HTTP/1.1

2019-05-30 20:47:45,692 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /config.json HTTP/1.1" \404 \143 "\http://dockerhost01.domain.com:8082/" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /config.json | > GET /config.json HTTP/1.1

2019-05-30 20:47:45,819 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /favicon-196x196.png HTTP/1.1" \200 \14155 "\-" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /favicon-196x196.png | > GET /favicon-196x196.png HTTP/1.1

2019-05-30 20:47:45,823 DEBG 'nginx' stdout output:
ip=\- [\30/May/2019:20:47:45 +0000] "\GET /favicon-16x16.png HTTP/1.1" \200 \241 "\-" "\Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
/web | /favicon-16x16.png | > GET /favicon-16x16.png HTTP/1.1

2019-05-30 20:47:51,009 DEBG 'nginx' stdout output:
2019/05/30 20:47:51 [info] 26#26: *2 client closed connection while waiting for request, client: 172.18.208.234, server: 0.0.0.0:8080

I also tried deleting the database as well as completely removing the Alerta and Postgres containers and rebuilding from scratch, in case there was any lingering data causing issues.

I also used a private browsing window to prevent browser caching from causing issues.

An interesting thing to note is that the first time I try to start the Alerta container, it always dies with the error:

sed: cannot rename /app/sedclONQC: Device or resource busy

I am not sure if this has anything to do with it, but a restart of the container allows it to start.

satterly commented 5 years ago

That error is not good. Can you post the entire log output for the container on first start? There's definitely a problem there, and the config.json error is only a symptom.

BenB196 commented 5 years ago

@satterly Below are the logs from a fresh build of Alerta. Note that the first line is from the initial start; all other lines are from after I restarted the container.

Note: I let the container run for about a minute, as I noticed that there are some processes that start a bit late.

sed: cannot rename /app/sedsqTsMt: Device or resource busy
2019-05-30 22:04:45,801 INFO supervisord started with pid 1
2019-05-30 22:04:46,803 INFO spawned: 'heartbeats' with pid 16
2019-05-30 22:04:46,804 INFO spawned: 'housekeeping' with pid 17
2019-05-30 22:04:46,806 INFO spawned: 'uwsgi' with pid 18
2019-05-30 22:04:46,807 INFO spawned: 'nginx' with pid 19
2019-05-30 22:04:46,820 DEBG 'uwsgi' stdout output:
[uWSGI] getting INI configuration from /app/uwsgi.ini

2019-05-30 22:04:46,821 DEBG 'uwsgi' stdout output:
*** Starting uWSGI 2.0.18 (64bit) on [Thu May 30 22:04:46 2019] ***
compiled with version: 6.3.0 20170516 on 29 May 2019 17:21:55
os: Linux-4.19.15-2.ph3-esx #1-photon SMP Mon Feb 25 14:46:16 UTC 2019
nodename: 037b4f9ad632
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /
detected binary path: /venv/bin/uwsgi
chdir() to /app
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes

2019-05-30 22:04:46,821 DEBG 'uwsgi' stdout output:
thunder lock: disabled (you can enable it with --thunder-lock)

2019-05-30 22:04:46,821 DEBG 'uwsgi' stdout output:
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3

2019-05-30 22:04:46,822 DEBG 'nginx' stdout output:
2019/05/30 22:04:46 [notice] 19#19: using the "epoll" event method
2019/05/30 22:04:46 [notice] 19#19: nginx/1.10.3
2019/05/30 22:04:46 [notice] 19#19: OS: Linux 4.19.15-2.ph3-esx
2019/05/30 22:04:46 [notice] 19#19: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2019/05/30 22:04:46 [notice] 19#19: start worker processes
2019/05/30 22:04:46 [notice] 19#19: start worker process 22
2019/05/30 22:04:46 [notice] 19#19: start worker process 23

2019-05-30 22:04:46,822 DEBG 'uwsgi' stdout output:
Python version: 3.6.8 (default, May  8 2019, 05:35:00)  [GCC 6.3.0 20170516]

2019-05-30 22:04:46,822 DEBG 'nginx' stdout output:
2019/05/30 22:04:46 [notice] 19#19: start worker process 24
2019/05/30 22:04:46 [notice] 19#19: start worker process 25

2019-05-30 22:04:46,856 DEBG 'uwsgi' stdout output:
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x55ec594f0590
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds

2019-05-30 22:04:46,857 DEBG 'uwsgi' stdout output:
mapped 462096 bytes (451 KB) for 5 cores
*** Operational MODE: preforking ***
mounting /app/wsgi.py on /api

2019-05-30 22:04:47,555 DEBG 'uwsgi' stdout output:
WARNING:flask.app:('normal', 'ok', 'cleared')

2019-05-30 22:04:47,661 DEBG 'uwsgi' stdout output:
DEBUG:raven.base.Client:Configuring Raven for host: None
INFO:raven.base.Client:Raven is not configured (logging is disabled). Please see the documentation for more information.

2019-05-30 22:04:47,671 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'zabbix' found.

2019-05-30 22:04:47,672 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'twilio_sms' found.

2019-05-30 22:04:47,672 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'telegram' found.

2019-05-30 22:04:47,672 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'sns' found.

2019-05-30 22:04:47,672 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'slack' found.

2019-05-30 22:04:47,673 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'acked_by' found.

2019-05-30 22:04:47,673 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'blackout' found.
DEBUG:alerta.plugins:Server plugin 'heartbeat' found.

2019-05-30 22:04:47,673 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'reject' found.
DEBUG:alerta.plugins:Server plugin 'remote_ip' found.

2019-05-30 22:04:47,673 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'rocketchat' found.

2019-05-30 22:04:47,674 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'pushover' found.

2019-05-30 22:04:47,674 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'pubsub' found.

2019-05-30 22:04:47,674 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'prometheus' found.

2019-05-30 22:04:47,674 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'pagerduty' found.

2019-05-30 22:04:47,675 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'opsgenie' found.

2019-05-30 22:04:47,675 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'op5' found.

2019-05-30 22:04:47,675 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'normalise' found.

2019-05-30 22:04:47,675 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'msteams' found.

2019-05-30 22:04:47,676 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'mattermost' found.

2019-05-30 22:04:47,676 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'logstash' found.

2019-05-30 22:04:47,676 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'syslog' found.

2019-05-30 22:04:47,677 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'influxdb' found.

2019-05-30 22:04:47,677 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'hipchat' found.

2019-05-30 22:04:47,677 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'geoip' found.

2019-05-30 22:04:47,677 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'forward' found.

2019-05-30 22:04:47,677 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'enhance' found.

2019-05-30 22:04:47,678 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'cachet' found.

2019-05-30 22:04:47,678 DEBG 'uwsgi' stdout output:
DEBUG:alerta.plugins:Server plugin 'amqp' found.

2019-05-30 22:04:47,853 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:

    Default reject policy will block alerts that do not have the following
    required attributes:
    1) environment - must match an allowed environment. By default it should
       be either "Production" or "Development". Config setting is `ALLOWED_ENVIRONMENTS`.
    2) service - must supply a value for service. Any value is acceptable.

2019-05-30 22:04:47,853 INFO success: heartbeats entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-30 22:04:47,853 INFO success: housekeeping entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-30 22:04:47,854 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-30 22:04:47,854 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-30 22:04:47,854 DEBG 'uwsgi' stdout output:

2019-05-30 22:04:47,854 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:Server plugin 'reject' loaded.

2019-05-30 22:04:47,858 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:

    Default suppression blackout handler will drop alerts that match a blackout
    period and will return a 202 Accept HTTP status code.

    If "NOTIFICATION_BLACKOUT" is set to ``True`` then the alert is processed
    but alert status is set to "blackout" and the alert will not be passed to
    any plugins for further notification.

2019-05-30 22:04:47,858 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:Server plugin 'blackout' loaded.

2019-05-30 22:04:47,864 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:Server plugin 'prometheus' loaded.

2019-05-30 22:04:47,865 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:Server plugin 'normalise' loaded.

2019-05-30 22:04:47,865 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:All server plugins enabled: reject, blackout, prometheus, normalise

2019-05-30 22:04:47,893 DEBG 'uwsgi' stdout output:
INFO:alerta.plugins:No plugin routing rules found. All plugins will be evaluated.

2019-05-30 22:04:47,893 DEBG 'uwsgi' stdout output:
DEBUG:flask.app:Server webhook 'cloudwatch' found.

2019-05-30 22:04:47,899 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Amazon CloudWatch notifications via SNS HTTPS endpoint subscription
    See https://docs.aws.amazon.com/sns/latest/dg/sns-http-https-endpoint-as-subscriber.html

2019-05-30 22:04:47,900 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'cloudwatch' loaded.

2019-05-30 22:04:47,900 DEBG 'uwsgi' stdout output:
DEBUG:flask.app:Server webhook 'grafana' found.

2019-05-30 22:04:47,905 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Grafana Alert alert notification webhook
    See http://docs.grafana.org/alerting/notifications/#webhook

2019-05-30 22:04:47,905 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'grafana' loaded.
DEBUG:flask.app:Server webhook 'graylog' found.

2019-05-30 22:04:47,910 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Graylog Log Management HTTP alert notifications
    See http://docs.graylog.org/en/3.0/pages/streams/alerts.html#http-alert-notification

2019-05-30 22:04:47,910 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'graylog' loaded.
DEBUG:flask.app:Server webhook 'newrelic' found.

2019-05-30 22:04:47,913 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    New Relic webhook notification channel
    See https://docs.newrelic.com/docs/alerts/new-relic-alerts/managing-notification-channels/notification-channels-control-where-send-alerts

2019-05-30 22:04:47,914 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'newrelic' loaded.
DEBUG:flask.app:Server webhook 'pagerduty' found.

2019-05-30 22:04:47,918 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    PagerDuty incident webhook
    See https://v2.developer.pagerduty.com/docs/webhooks-v2-overview

2019-05-30 22:04:47,919 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'pagerduty' loaded.
DEBUG:flask.app:Server webhook 'pingdom' found.

2019-05-30 22:04:47,923 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Pingdom state change webhook
    See https://www.pingdom.com/resources/webhooks/

2019-05-30 22:04:47,923 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'pingdom' loaded.

2019-05-30 22:04:47,923 DEBG 'uwsgi' stdout output:
DEBUG:flask.app:Server webhook 'prometheus' found.

2019-05-30 22:04:47,935 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Prometheus Alertmanager webhook receiver
    See https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver

2019-05-30 22:04:47,936 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'prometheus' loaded.
DEBUG:flask.app:Server webhook 'riemann' found.

2019-05-30 22:04:47,940 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Riemann HTTP client
    http://riemann.io/clients.html

INFO:flask.app:Server webhook 'riemann' loaded.

2019-05-30 22:04:47,940 DEBG 'uwsgi' stdout output:
DEBUG:flask.app:Server webhook 'serverdensity' found.

2019-05-30 22:04:47,944 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Server Density notification webhook
    See https://support.serverdensity.com/hc/en-us/articles/360001067183-Setting-up-webhooks

2019-05-30 22:04:47,944 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'serverdensity' loaded.
DEBUG:flask.app:Server webhook 'slack' found.

2019-05-30 22:04:47,949 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Slack apps
    See https://api.slack.com/slack-apps

2019-05-30 22:04:47,949 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'slack' loaded.
DEBUG:flask.app:Server webhook 'stackdriver' found.

2019-05-30 22:04:47,953 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    StackDriver Notification webhook
    See https://cloud.google.com/monitoring/support/notification-options#webhooks

2019-05-30 22:04:47,953 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'stackdriver' loaded.
DEBUG:flask.app:Server webhook 'telegram' found.

2019-05-30 22:04:47,958 DEBG 'uwsgi' stdout output:
INFO:alerta.webhooks:

    Telegram Bot API
    See https://core.telegram.org/bots/api

2019-05-30 22:04:47,958 DEBG 'uwsgi' stdout output:
INFO:flask.app:Server webhook 'telegram' loaded.

2019-05-30 22:04:48,050 DEBG 'uwsgi' stdout output:
WSGI app 0 (mountpoint='/api') ready in 2 seconds on interpreter 0x55ec594f0590 pid: 18 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 18)

2019-05-30 22:04:48,051 DEBG 'uwsgi' stdout output:
spawned uWSGI worker 1 (pid: 30, cores: 1)

2019-05-30 22:04:48,052 DEBG 'uwsgi' stdout output:
spawned uWSGI worker 2 (pid: 31, cores: 1)

2019-05-30 22:04:48,052 DEBG 'uwsgi' stdout output:
spawned uWSGI worker 3 (pid: 32, cores: 1)

2019-05-30 22:04:48,053 DEBG 'uwsgi' stdout output:
spawned uWSGI worker 4 (pid: 33, cores: 1)

2019-05-30 22:04:48,053 DEBG 'uwsgi' stdout output:
spawned uWSGI worker 5 (pid: 34, cores: 1)

2019-05-30 22:05:47,112 DEBG 'housekeeping' stdout output:
Traceback (most recent call last):
  File "/venv/bin/alerta", line 10, in <module>

2019-05-30 22:05:47,112 DEBG 'housekeeping' stdout output:
    sys.exit(cli())
  File "/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__

2019-05-30 22:05:47,113 DEBG 'housekeeping' stdout output:
    return self.main(*args, **kwargs)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 717, in main

2019-05-30 22:05:47,113 DEBG 'housekeeping' stdout output:
    rv = self.invoke(ctx)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 1134, in invoke

2019-05-30 22:05:47,117 DEBG 'housekeeping' stdout output:
    Command.invoke(self, ctx)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/venv/lib/python3.6/site-packages/alertaclient/cli.py", line 52, in cli
    config = Config(config_file)
  File "/venv/lib/python3.6/site-packages/alertaclient/config.py", line 33, in __init__
    self.parser.read(os.path.expanduser(self.options['config_file']))
  File "/usr/local/lib/python3.6/configparser.py", line 697, in read
    self._read(fp, filename)
  File "/usr/local/lib/python3.6/configparser.py", line 1080, in _read

2019-05-30 22:05:47,124 DEBG 'housekeeping' stdout output:
    raise MissingSectionHeaderError(fpname, lineno, line)
configparser.MissingSectionHeaderError: File contains no section headers.
file: '/app/alerta.conf', line: 1
'DEBUG = True\n'

2019-05-30 22:05:47,145 DEBG fd 9 closed, stopped monitoring <POutputDispatcher at 140234306079776 for <Subprocess at 140234306040536 with name housekeeping in state RUNNING> (stdout)>
2019-05-30 22:05:47,145 INFO exited: housekeeping (exit status 1; not expected)
2019-05-30 22:05:47,146 DEBG received SIGCLD indicating a child quit
2019-05-30 22:05:47,162 DEBG 'heartbeats' stdout output:
Traceback (most recent call last):
  File "/venv/bin/alerta", line 10, in <module>
    sys.exit(cli())
  File "/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__

2019-05-30 22:05:47,164 INFO spawned: 'housekeeping' with pid 38
2019-05-30 22:05:47,165 DEBG 'heartbeats' stdout output:
    return self.main(*args, **kwargs)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 1134, in invoke
    Command.invoke(self, ctx)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/venv/lib/python3.6/site-packages/alertaclient/cli.py", line 52, in cli
    config = Config(config_file)
  File "/venv/lib/python3.6/site-packages/alertaclient/config.py", line 33, in __init__
    self.parser.read(os.path.expanduser(self.options['config_file']))
  File "/usr/local/lib/python3.6/configparser.py", line 697, in read
    self._read(fp, filename)
  File "/usr/local/lib/python3.6/configparser.py", line 1080, in _read
    raise MissingSectionHeaderError(fpname, lineno, line)
configparser.MissingSectionHeaderError: File contains no section headers.
file: '/app/alerta.conf', line: 1
'DEBUG = True\n'

2019-05-30 22:05:47,199 DEBG fd 6 closed, stopped monitoring <POutputDispatcher at 140234306042840 for <Subprocess at 140234306041112 with name heartbeats in state RUNNING> (stdout)>
2019-05-30 22:05:47,200 INFO exited: heartbeats (exit status 1; not expected)
2019-05-30 22:05:47,200 DEBG received SIGCLD indicating a child quit
2019-05-30 22:05:48,202 INFO spawned: 'heartbeats' with pid 40
2019-05-30 22:05:48,203 INFO success: housekeeping entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-05-30 22:05:49,204 INFO success: heartbeats entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
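The tracebacks above point at the root cause: the CLI reads `/app/alerta.conf` as an INI file, but the file mounted there is the Python-style `alertad.conf`, whose first line is `DEBUG = True`, so `configparser` fails before reaching any section header. The failure can be reproduced in isolation with the standard library (a minimal sketch, independent of Alerta):

```python
import configparser

parser = configparser.ConfigParser()

# A Python-style server config read as INI fails: no [section] header.
try:
    parser.read_string("DEBUG = True\n")
except configparser.MissingSectionHeaderError as e:
    print(type(e).__name__)  # MissingSectionHeaderError

# The INI-style alerta.conf parses fine thanks to its [DEFAULT] header.
parser.read_string("[DEFAULT]\nsslverify = no\n")
print(parser["DEFAULT"]["sslverify"])  # no
```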
satterly commented 5 years ago

This is wrong...

 - /alertadb/conf/alertad.conf:/app/alerta.conf

You're mapping alertad.conf (the server config, Python syntax) onto alerta.conf (the CLI config, INI syntax).
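A corrected `volumes` section would mount each file under its matching name (a sketch; it assumes a separate INI-format `/alertadb/conf/alerta.conf` exists on the host with the `[DEFAULT]` content shown earlier):

```yaml
    volumes:
      - /alertadb/conf/alertad.conf:/app/alertad.conf   # server config (Python syntax)
      - /alertadb/conf/alerta.conf:/app/alerta.conf     # CLI config (INI syntax)
      - /alertadb/conf/config.json:/web/config.json     # web UI config
```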

BenB196 commented 5 years ago

@satterly I ended up fixing that line, but that did not solve the problem. I think it has something to do with the OS, though, as I tested the same configuration on a different server and it worked right away.

I am not quite sure what is causing the problem, but I don't think it is too important, as I am going to be decommissioning the server in favor of Ubuntu, which is more standard.

satterly commented 5 years ago

I don't think this is an issue with the Alerta Docker image. Closing.