Open donnoman opened 7 years ago
I've run into this issue as well. I forked a separate version to attempt a fix but could not get the pid file to clean up. I suspect that either installing the package or registering the host with Logentries is starting the daemon.
The fact that the LE agent does not clean up its own pid file is a whole separate issue to be raised with the LE team.
Here's my workaround: I've added a task to stop the Logentries service right before the handler gets executed:
```yaml
- name: stop logentries daemon to prevent pid file already exists error when handler tries to restart it
  become: yes
  service: name=logentries state=stopped enabled=yes
```
I get the same thing. Any plan to fix this?
```
RUNNING HANDLER [ricbra.logentries : Restart logentries] ***********************
fatal: [appnode-prod-04]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to restart service logentries: Job for logentries.service failed because the control process exited with error code. See \"systemctl status logentries.service\" and \"journalctl -xe\" for details.\n"}
```
It was supposed to be fixed in #20
I'm not using this role anymore currently. If you could provide a fix it would be awesome @seenickcode .
Hey @ricbra, sure, I'd be glad to fix this, but I'm not an expert on Ansible per se.
So I guess I would simply add a step to tasks/main.yml, before this last step you already have there:
```yaml
- name: Follow logs
  logentries: path={{ item.path }} state={{ item.state | default('present') }} name={{ item.name | default(item.path) }}
  notify:
    - Restart logentries
  with_items: '{{ logentries_logs }}'
```
and add this 👇 before that step? ☝️
```yaml
- name: Stop logentries daemon to prevent 'pid file already exists' error when handler tries to restart it
  become: yes
  service: name=logentries state=stopped enabled=yes
```
Yes, that should be the workaround, though I think it's not idempotent. When the logentries_logs variable isn't changed, the restart isn't notified, but the service is still stopped on every run. So the first run goes fine; on second and subsequent runs (with the same variables) the service ends up stopped and never restarted, which will break things. I haven't tried this, though, so that's one thing to verify.
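One possible way to keep the workaround idempotent (a sketch, not tested against this role) is to move the stop into the handler chain itself, so it only runs when the "Follow logs" task actually reports a change. This assumes Ansible 2.2+ for the `listen` keyword and that tasks keep notifying `Restart logentries`; the handler names below are hypothetical:

```yaml
# handlers/main.yml (sketch): both handlers listen for the same notification,
# so the stop/start pair only fires when logentries_logs actually changed.
# Stopping then starting avoids the broken `state=restarted` path entirely.
- name: Stop logentries before restart
  become: yes
  service:
    name: logentries
    state: stopped
  listen: Restart logentries

- name: Start logentries again
  become: yes
  service:
    name: logentries
    state: started
  listen: Restart logentries
```

Handlers that `listen` to the same topic run in the order they are defined, so the stop is guaranteed to happen before the start.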
To reproduce this problem, simply stop the service and then attempt a restart. You'll get this error:
```
ubuntu@ip-172-31-36-93:~$ sudo service le stop
ubuntu@ip-172-31-36-93:~$ sudo service le restart
Job for logentries.service failed because the control process exited with error code. See "systemctl status logentries.service" and "journalctl -xe" for details.
```
It appears it's safer to do a cold restart with a pause in between:
```
ubuntu@ip-172-31-36-93:~$ sudo service le stop
ubuntu@ip-172-31-36-93:~$ sudo service le start
```
Here is a fix #27
Each time I try to use this role, the first run fails, whether it's Packer running the Ansible provisioner or Ansible running against an inventory.
When I checked after the initial failure, there was no pid at the pid file location. I went through the log, and there is no start or previous restart; the daemon must have been started by the package.
A subsequent run corrects the problem. Unfortunately that doesn't help with Packer: with Packer it's always a first run, so it always hits this issue and then aborts.
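For the Packer/first-run case, a hedged sketch of a pre-task block (essentially donnoman's workaround plus stale-pid cleanup, run before the role) might look like this. The pid file path is an assumption on my part, not something confirmed in this thread — verify it on your hosts:

```yaml
# pre_tasks in the playbook (sketch): stop whatever daemon the package
# install may have started, and clear any leftover pid file, so the
# role's restart handler starts from a clean state on the image's first run.
- name: Stop logentries daemon possibly started by package install
  become: yes
  service:
    name: logentries
    state: stopped
  ignore_errors: yes   # service may not exist yet on a fresh image

- name: Remove stale logentries pid file
  become: yes
  file:
    path: /var/run/logentries.pid   # assumed location; check your agent's config
    state: absent
```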
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
ansible 2.2.1.0
packer 0.12.2
```