kevindrosendahl opened this issue 6 years ago
I've used text/template a bit with Hugo and I strongly agree with the Helm blog post that it was hard to read. My concern with jsonnet is whether it's tough to use with YAML. Also, does this mean we'll want `lattice.yml` to be `lattice.jsonnet` instead?
Yes, this would replace the current YAML templating we have. We could still allow users to supply `.yaml` or `.json` files if they don't need any templating. Another potential solution could be to support jsonnet plus a very simple, non-nested parameter expansion in `.yaml` and `.json` files.
So you could write

```yaml
type: v1/service
...
exec:
  environment:
    NODE_ENV: {{ node_env }}
...
```

or

```json
{
  "type": "v1/service",
  ...
  "exec": {
    "environment": {
      "NODE_ENV": "{{ node_env }}"
    }
  }
  ...
}
```
But for anything more complicated you would need to use jsonnet. Put another way, you could use `.yaml` or `.json`, but the files would have to be valid YAML/JSON as-is; we would then crawl the object and replace r-values with the passed-in parameters.
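Below is a minimal sketch of what that crawl-and-replace could look like, assuming the `.yaml` or `.json` file has already been unmarshalled into generic Go values; the package and function names (e.g. `expandParams`) are hypothetical, not the existing lattice resolver API.

```go
package template

import (
	"fmt"
	"regexp"
	"strings"
)

// paramPattern matches a string value that is exactly one non-nested
// parameter reference, e.g. "{{ node_env }}".
var paramPattern = regexp.MustCompile(`^\{\{\s*([a-zA-Z_][a-zA-Z0-9_]*)\s*\}\}$`)

// expandParams walks a decoded YAML/JSON document and replaces any string
// r-value of the form "{{ param }}" with the corresponding passed-in parameter.
// It returns an error if a referenced parameter was not supplied.
func expandParams(doc interface{}, params map[string]interface{}) (interface{}, error) {
	switch v := doc.(type) {
	case map[string]interface{}:
		for key, val := range v {
			expanded, err := expandParams(val, params)
			if err != nil {
				return nil, err
			}
			v[key] = expanded
		}
		return v, nil

	case []interface{}:
		for i, val := range v {
			expanded, err := expandParams(val, params)
			if err != nil {
				return nil, err
			}
			v[i] = expanded
		}
		return v, nil

	case string:
		match := paramPattern.FindStringSubmatch(strings.TrimSpace(v))
		if match == nil {
			// Not a parameter reference; leave the value untouched.
			return v, nil
		}
		value, ok := params[match[1]]
		if !ok {
			return nil, fmt.Errorf("no value supplied for parameter %q", match[1])
		}
		return value, nil

	default:
		// Numbers, booleans, nulls, etc. pass through untouched.
		return v, nil
	}
}
```

Only whole string r-values are replaced, which keeps the expansion simple and non-nested, as described above.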
Also to be clear, I still propose breaking the resolution into a sidecar service regardless of the templating solution chosen.
Consider replacing our current primitive `${param}` templating with an external, more fully featured templating solution.

Templating Solution Contenders
- text/template
- aymerick/raymond
- Shopify/go-lua (the issues with text/template in particular are outlined in https://sweetcode.io/a-first-look-at-the-helm-3-plan)

Implementation Considerations
DoS
Currently, template resolution occurs in the `build` controller. The jsonnet documentation nicely summarizes some of the issues we would face when using any external solution that has the potential to consume large amounts of resources. Indeed, a version of this problem already exists in the component resolver today: for example, if a reference references itself, we don't detect the cycle and the controller can stall.
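To make the existing gap concrete, a resolver could detect such cycles by tracking the references on the current resolution path. This is a hedged sketch under assumed names (`Resolver`, `fetch`), not the current lattice component resolver:

```go
package resolver

import "fmt"

// ErrReferenceCycle is returned when a definition reference eventually
// resolves back to itself, directly or through intermediate references.
type ErrReferenceCycle struct {
	Reference string
}

func (e *ErrReferenceCycle) Error() string {
	return fmt.Sprintf("reference cycle detected at %q", e.Reference)
}

// Resolver resolves definition references while remembering the references
// on the current resolution path, so cycles fail fast instead of stalling
// the controller.
type Resolver struct {
	// fetch retrieves a raw definition and the references it contains.
	fetch func(reference string) (definition interface{}, childRefs []string, err error)
}

// Resolve resolves a reference and all of its transitive references.
func (r *Resolver) Resolve(reference string) (interface{}, error) {
	return r.resolve(reference, map[string]bool{})
}

func (r *Resolver) resolve(reference string, inProgress map[string]bool) (interface{}, error) {
	// Seeing a reference that is already on the current path means we've
	// looped back to it.
	if inProgress[reference] {
		return nil, &ErrReferenceCycle{Reference: reference}
	}
	inProgress[reference] = true
	defer delete(inProgress, reference)

	definition, childRefs, err := r.fetch(reference)
	if err != nil {
		return nil, err
	}

	for _, child := range childRefs {
		if _, err := r.resolve(child, inProgress); err != nil {
			return nil, err
		}
	}
	return definition, nil
}
```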
Proposed solution
Run a sidecar definition resolution service alongside the `controller-manager`. This would be a grpc service exposing a Resolve call. There are a few architectures this service could take, but a likely one would be a single grpc server whose request handlers perform the individual resolutions.
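As a rough sketch of that service surface (expressed in Go rather than protobuf, with assumed message shapes and names, not the actual lattice API):

```go
package componentresolver

import "context"

// ResolveRequest and ResolveResponse stand in for the protobuf messages a
// real .proto would define; the fields here are assumptions.
type ResolveRequest struct {
	// Reference identifies the definition to resolve
	// (e.g. repository, ref, and path).
	Reference string
	// Parameters supplied to the template/jsonnet evaluation.
	Parameters map[string]string
}

type ResolveResponse struct {
	// Definition is the fully resolved definition, serialized as JSON.
	Definition []byte
}

// ComponentResolverServer is the interface the sidecar implements and the
// build controller calls over grpc.
type ComponentResolverServer interface {
	Resolve(ctx context.Context, req *ResolveRequest) (*ResolveResponse, error)
}
```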
The above architecture should be pretty easy to implement.
The container itself could have its resources limited, and the request handlers could further limit the resources available to an individual resolution, as well as time out and kill a long-running resolution. Eventually, if needed, the request handlers could get smarter and queue work based on available resources, but that's definitely not necessary to start.
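One way the request handlers could enforce those limits, sketched under the assumed types above (the semaphore size and timeout are illustrative):

```go
package componentresolver

import (
	"context"
	"errors"
	"time"
)

// resolverService bounds how much work in-flight resolutions can consume.
type resolverService struct {
	// semaphore caps the number of concurrent resolutions; acquiring a slot
	// blocks when the service is saturated.
	semaphore chan struct{}
	// perResolutionTimeout kills any single resolution that runs too long.
	perResolutionTimeout time.Duration
	// resolve performs the actual definition resolution.
	resolve func(ctx context.Context, req *ResolveRequest) (*ResolveResponse, error)
}

func newResolverService(
	maxConcurrent int,
	timeout time.Duration,
	resolve func(ctx context.Context, req *ResolveRequest) (*ResolveResponse, error),
) *resolverService {
	return &resolverService{
		semaphore:            make(chan struct{}, maxConcurrent),
		perResolutionTimeout: timeout,
		resolve:              resolve,
	}
}

func (s *resolverService) Resolve(ctx context.Context, req *ResolveRequest) (*ResolveResponse, error) {
	// Acquire a concurrency slot, giving up if the caller goes away first.
	select {
	case s.semaphore <- struct{}{}:
		defer func() { <-s.semaphore }()
	case <-ctx.Done():
		return nil, errors.New("resolver saturated: " + ctx.Err().Error())
	}

	// Bound the individual resolution; a runaway template or jsonnet
	// evaluation gets cancelled rather than holding the handler forever.
	ctx, cancel := context.WithTimeout(ctx, s.perResolutionTimeout)
	defer cancel()

	return s.resolve(ctx, req)
}
```

The container-level limits would come from the sidecar's pod spec (cpu/memory limits on that container), so even a misbehaving resolution can't starve the controller-manager itself.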
The `build` controller would then, instead of calling `c.componentResolver.Resolve` (https://github.com/mlab-lattice/lattice/blob/532b4e54ab70c1e630767317291da6c62f66fae9/pkg/backend/kubernetes/controller/build/pending_build.go#L90), simply call `c.componentResolverService.Resolve`.
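A hedged sketch of what that swap might look like from the build controller's side, with the client interface, message types, and field names assumed rather than taken from the lattice codebase:

```go
package build

import (
	"context"
	"time"
)

// ResolveRequest and ResolveResponse mirror the resolver service's grpc
// messages sketched above; their fields are assumptions.
type ResolveRequest struct {
	Reference  string
	Parameters map[string]string
}

type ResolveResponse struct {
	Definition []byte
}

// ComponentResolverClient is the grpc client the controller would hold in
// place of the in-process component resolver.
type ComponentResolverClient interface {
	Resolve(ctx context.Context, req *ResolveRequest) (*ResolveResponse, error)
}

type Controller struct {
	componentResolverService ComponentResolverClient
}

// resolveDefinition shows the shape of the change: the controller issues a
// bounded grpc call and the resource-heavy resolution work happens in the
// sidecar instead of in the controller process.
func (c *Controller) resolveDefinition(reference string, params map[string]string) (*ResolveResponse, error) {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	return c.componentResolverService.Resolve(ctx, &ResolveRequest{
		Reference:  reference,
		Parameters: params,
	})
}
```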