hoonetorg opened 8 years ago
I could create a PR if you want.
Looks a bit convoluted to me, and doesn't seem to take systemd into account?
service.(running|dead|enabled): also works with systemd.
Regarding convoluted: using a default.yml

```yaml
service:
  name: template
  state: running
  enable: True
```

and then using variables instead of that long `salt['pillar.get']`... doesn't look that bad anymore :).
```sls
# -*- coding: utf-8 -*-
# vim: ft=sls

{% from "template/map.jinja" import template with context %}

template_service__service:
  service.{{ template.service.state }}:
    - name: {{ template.service.name }}
{% if template.service.state in ['running', 'dead'] %}
    - enable: {{ template.service.enable }}
{% endif %}
```
What do you think?
I could first assign all service related stuff to variables and then it would look similar.
Also, service.dead doesn't support the reload option. I would fix that before creating a PR.
It's the whole file.replace deal I was targeting. Anyway, best way would be:
Then service can be used to always reload, set autostart to the value of service:autostart, and start or kill the service depending on service:run.
This way, the terminology is somewhat more explicit, which is important since we are deviating from the normal process (which is: let the OS decide and start+enable by default).
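Sketched as an SLS state (the pillar keys `service:run` and `service:autostart` are hypothetical names from the proposal above, not keys the formula currently ships; reload is guarded because service.dead doesn't support it):

```sls
{% set run = salt['pillar.get']('haproxy:service:run', True) %}

haproxy.service:
  service.{{ 'running' if run else 'dead' }}:
    - name: haproxy
    - enable: {{ salt['pillar.get']('haproxy:service:autostart', True) }}
{% if run %}
    - reload: True
{% endif %}
```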
I think I'll patch map.jinja and defaults.yaml so this can be overridden in a more standardised way using the lookup sub-pillar.
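A lookup override in the pillar could then look something like this (a sketch only; the exact key layout depends on how map.jinja merges the `lookup` sub-pillar):

```yaml
template:
  lookup:
    service:
      name: template
      state: running
      enable: True
```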
These are the two rendered SLS snippets which would make 99% of people happy.
non-cluster (that is already working):
```sls
haproxy.service:
  service.running:
    - name: haproxy
    - enable: True
    - reload: True
    - require:
      - pkg: haproxy
```
cluster (not possible yet):
```sls
haproxy.service:
  service.disabled:
    - name: haproxy
    - require:
      - pkg: haproxy
```
The changes to the file.replace /etc/default/haproxy part were only done to make the regex more failsafe (in case somebody puts blanks/tabs before/after ENABLED).
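A whitespace-tolerant file.replace state could look roughly like this (a sketch, not the formula's actual state; the state ID is made up):

```sls
haproxy_default_enabled:
  file.replace:
    - name: /etc/default/haproxy
    # tolerate blanks/tabs around ENABLED and the '=' sign
    - pattern: '^\s*ENABLED\s*=.*$'
    - repl: 'ENABLED=1'
```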
@johnkeates: Will you prepare something on that?
Remark: I can imagine what you want to achieve, but I'd not recommend renaming enable to autostart (because it would destroy backward compatibility). Pillar:

```yaml
haproxy:
  enable: ...   # defaults to True; only means: do start on boot/autostart
  running: ...  # defaults to True; a change in haproxy.cfg triggers a reload
```
Good point, I guess it stays ;-)
See this commit and comment: https://github.com/hoonetorg/haproxy-formula/commit/cc242d799ce0201e91d15153f41e6d4fda1db35a
Very briefly, the current situation:

- `enable: True` -> haproxy starts at boot and runs
- `enable: False` -> haproxy doesn't start at boot and will be stopped at the salt run if started.
Clustermanager needs: