**ChristianKniep** opened this issue 6 years ago
The workaround is to deactivate the renderer altogether:

```sh
export DOCKERAPP_RENDERERS="none"
```

...which leads to not being able to use a renderer at all.
Behaviour today is a little different from what was reported, but it still doesn't work:
```console
$ curl -LOJ "https://github.com/qnib/service-orchestration/raw/master/Analytics/data-pipeline/kafka/kafka.dockerapp"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   179  100   179    0     0    648      0 --:--:-- --:--:-- --:--:--   648
100  1186  100  1186    0     0   3478      0 --:--:-- --:--:-- --:--:--  3478
$ cat kafka.dockerapp
---
version: 0.1.1
name: kafka
description: "Kafka stack for development purposes (downscaling impossible)"
maintainers:
  - name: Christian Kniep
    email: christian@qnib.org
targets:
  swarm: true
  kubernetes: false
---
version: '3.6'
services:
  zookeeper:
    image: qnib/${zookeeper.name}:${zookeeper.tag}
  zkui:
    image: qnib/plain-zkui:${zkui.tag}
    ports:
      - "${zkui.port}:9090"
  broker:
    image: qnib/${kafka.name}:${kafka.tag}
    hostname: "{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}"
    deploy:
      mode: ${kafka.deploy.mode}
      {{if eq .kafka.deploy.mode "replicated"}}
      replicas: ${kafka.deploy.replicas}
      {{end}}
    environment:
      KAFKA_BROKER_ID: {{.Task.Slot}}
      LOG_MESSAGE_FORMAT_VERSION: ${kafka.log_msg_format}
  manager:
    image: qnib/plain-kafka-manager:${kmanager.tag}
    ports:
      - "${kmanager.port}:9000"
    environment:
      ZOOKEEPER_HOSTS: "tasks.zookeeper:2181"
---
zookeeper:
  name: plain-zookeeper
  tag: 2018-04-25
kafka:
  deploy:
    mode: replicated
    replicas: 3
  name: plain-kafka
  tag: 1.1.1
  log_msg_format: 1.0-IV0
zkui:
  tag: 8d3441d
  port: 9090
kmanager:
  tag: 1.3.3.18
  port: 9000
$ ./bin/docker-app render kafka.dockerapp
Error: failed to load composefiles: failed to parse Compose file
version: '3.6'
services:
  zookeeper:
    image: qnib/${zookeeper.name}:${zookeeper.tag}
  zkui:
    image: qnib/plain-zkui:${zkui.tag}
    ports:
      - "${zkui.port}:9090"
  broker:
    image: qnib/${kafka.name}:${kafka.tag}
    hostname: "{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}"
    deploy:
      mode: ${kafka.deploy.mode}
      {{if eq .kafka.deploy.mode "replicated"}}
      replicas: ${kafka.deploy.replicas}
      {{end}}
    environment:
      KAFKA_BROKER_ID: {{.Task.Slot}}
      LOG_MESSAGE_FORMAT_VERSION: ${kafka.log_msg_format}
  manager:
    image: qnib/plain-kafka-manager:${kmanager.tag}
    ports:
      - "${kmanager.port}:9000"
    environment:
      ZOOKEEPER_HOSTS: "tasks.zookeeper:2181"
: yaml: line 16: could not find expected ':'
```
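For what it's worth, the parse failure can be reproduced without docker-app at all: a bare Go-template action on its own line is not a valid YAML mapping entry (roughly speaking, the parser reads `{{` as the start of nested flow mappings and then never finds the `:` it expects after the would-be key). A minimal sketch, using a hypothetical standalone snippet:

```yaml
# Minimal repro (hypothetical snippet): the bare {{if ...}} line below is
# rejected by go-yaml with "could not find expected ':'", the same error as
# reported above. The quoted hostname placeholders, by contrast, parse fine.
deploy:
  mode: replicated
  {{if eq .kafka.deploy.mode "replicated"}}
  replicas: 3
  {{end}}
```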
Do I understand correctly that the ask here is to escape (and not barf on) the `{{...}}` stuff such that it comes out of `docker-app render` unscathed (to be processed by some tool further down the line)?
Yes, indeed. Services like Kafka and other clustered databases like to keep the same hostname so that they can identify which partition of the data belongs to them and is already local on the host's storage. Thus, the hostname `"{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}"` should be evaluated at schedule time, not while rendering the dockerapp.

Thanks.
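For reference, the Docker engine itself resolves these placeholders per task at schedule time; `docker service create` documents Go-template support for `--hostname` and `--env`. A minimal sketch outside of docker-app (service name and image picked to match the stack above):

```console
$ docker service create \
    --name broker \
    --hostname "{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}" \
    --env "KAFKA_BROKER_ID={{.Task.Slot}}" \
    --replicas 3 \
    qnib/plain-kafka:1.1.1
```

The slot number is stable when a task is rescheduled, so `KAFKA_BROKER_ID` survives a reschedule, while the `.Task.ID` part of the hostname changes with every new task.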
Your `kafka.dockerapp` seems to go one step further than just deferring the evaluation of some placeholders like the hostname: it also uses e.g. `{{if eq .kafka.deploy.mode "replicated"}}` to make entire bits of YAML optional. This adds to the complexity, since it means the document is not valid YAML while it is in docker-app's hands; it's not until the final step further down the line that it gets fully realised.
I reckon I want to have my cake (templating) and eat it too (skip evaluation for certain parts of the template). :) Not sure if that is a solvable problem; can I somehow quote some template code so that it is not evaluated?
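If docker-app's experimental renderer is Go's `text/template` with the default `{{`/`}}` delimiters (an assumption on my part), there is a standard quoting trick: a string literal inside an action is emitted verbatim, so the delimiters can be smuggled through rendering. A sketch:

```yaml
# Sketch, assuming a Go text/template renderer: the quoted string inside
# the action is printed as-is, so the swarm placeholders survive rendering.
hostname: '{{ "{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}" }}'
# after rendering:
# hostname: '{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}'
```

This only helps for value-level placeholders, though; an `{{if ...}}` block spanning whole lines would still leave the document invalid as YAML.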
**Description**

In order to resume operations for a given task in case the task is rescheduled, I would like to use `{{.Task.Slot}}`: here `KAFKA_BROKER_ID={{.Task.Slot}}` and `hostname: "{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}"`. Otherwise each Kafka broker would pick up a new broker ID, which does not make sense. The hostname makes it easy to spot which broker ID is used by which task. Using the experimental templating flag, `{{.Task.Slot}}` is interpreted as configuration.

**Describe the results you received:**

When rendering, I get the error shown above, as the renderer fails when interpreting the hostname.
**Describe the results you expected:**

I would like to use those templates, maybe even agnostically of the orchestrator, so that I can write a generic template that gets filled out depending on the orchestrator. Alternatively, a condition could be introduced which lets me drop the setting in case I do not use Swarm (see the sketch below).
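Purely as an illustration of the ask (nothing below exists today; `.targets.orchestrator` is an invented variable):

```yaml
# Hypothetical: emit the swarm-style hostname only when targeting swarm,
# and leave the {{...}} placeholders for the scheduler to fill in later.
broker:
  image: qnib/plain-kafka:1.1.1
  {{if eq .targets.orchestrator "swarm"}}
  hostname: "{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}"
  {{end}}
```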
**Output of `docker version`:**

**Output of `docker-app version`:**

**Output of `docker info`:**