Closed mumoshu closed 5 years ago
I think that yes, many plugins of this style simply need to add to the cloud-config `units` or `files` sections, and with this PR that can be done in a generic way: https://github.com/kubernetes-incubator/kube-aws/pull/510
There is a need for additional IAM configuration in the PR for S3 dumps. Perhaps IAM is the other configuration option that could be implemented generically.
@mumoshu I think plugins or addons have been discussed in a few places. The ones I can think of:
Many kubernetes addons, or even some applications kube-aws users deploy, require them to edit various aspects of the cluster. It could be editing IAM, adding RBAC policies, adding CF resources or various other things. Ultimately, as we know, we need to decide how many of these we wish to actively support as top tier features. There are ways to do all of the above with @redbaron's git workflow without changing kube-aws at all, so one option is to only support that method.
I've always thought plugins would be good so we get a bit of both worlds: 1) we stop having to encapsulate each and every feature/addon into kube-aws and 2) we still allow users to more easily connect their additions in, not just well-known addons like we support but anything they wish.
I would go as far as saying some things currently implemented in kube-aws as top tier features should actually be removed and just use the plugin system. For example, the rescheduler and kube2iam probably belong there.
If we go the plugin way, I think we should provide a `contrib` folder with plugins that are at least validated at a basic level.
As for the designs, I'm not sure I fully understand the intentions of 1 but I think I prefer 2.
@jeremyd @c-knowles Thanks for the feedback 👍
Regarding the IAM configuration part of the imaginary plugin system, would you like the `iam-policies` directory in the above examples?
Seems a reasonable start to me but also happy if we start with a single use case.
I'm extending design 2 of the plugin system as shown below - @c-knowles @jeremyd What do you think? Which part do you need for the first release of the "plugin system" feature?
```
<kube-aws root>/builtin/plugins/
  <unique name of a plugin e.g. cluster-dump>/
    plugin.yaml
      # plugin.yaml defines "parameters" corresponding to `arguments` passed via
      # the top-level/controller's/worker's `pluginConfigurations.<pluginName>` in cluster.yaml
      controller:
        parameters:
          myVar:
            type: string
      worker:
        parameters:
          myVar:
            type: string
    worker/
      # worker specific customizations; same as controller/ but its manifests/ directory
    controller/
      # controller specific customizations
      node/
        # controller node specific customizations
      manifests/
        # k8s manifests persisted under /srv/kubernetes/manifests,
        # to be deployed to k8s via `kubectl apply -f`
        cluster-dump.yaml.tmpl
      static-pods/
        # k8s static pods persisted under /etc/kubernetes/manifests,
        # to be deployed via `kubelet --pod-manifest-path`
        your-static-pod-spec.yaml.tmpl
      cfn-resources/
        your-single-required-cfn-resource.json.tmpl
      iam-policies/
        # iam policies added to the managed IAMRoleController
        allow-puts-to-s3-path.json.tmpl

<your work tree>/
  adhoc-plugins/
    <unique name of your adhoc plugin e.g. myCustomizations>/
  cluster.yaml
```
With the new configuration key to allow enabling and customizing plugins:

```yaml
clusterName: ***
externalDNSName: ***
kmsKeyARN: ***
# *snip*

# All the plugins need to be enabled under the `enabledPlugins` key in cluster.yaml
enabledPlugins:
#- <unique name of a plugin e.g. cluster-dump in lowerCamelCase>:
#    myKeyDefinedInThePluginYaml: foo
- clusterDump:
    s3Prefix: <customs3prefix_to_where_cluster_is_dumped>
- rescheduler:
    enabled: true

# If you'd like to customize settings not for the whole cluster but just for controller nodes...
controller:
  pluginConfigurations:
    clusterDump: # Only a plugin enabled under enabledPlugins can be configured
      dumpInterval: 1h
      keptRevisions: 3

# If you'd like to customize settings passed to a plugin per node pool...
worker:
  nodePools:
  - name: pool1
    pluginConfigurations:
      nameOfPlugin:
        myVar: foo
  - name: pool2
    pluginConfigurations:
      nameOfPlugin:
        myVar: bar
```
Thanks @c-knowles,
> Seems a reasonable start to me but also happy if we start with a single use case.
Anything specific which seems missing/redundant for now? For example, the original Design 2 seemed necessary and sufficient for rescheduler for me. Your thoughts?
The rescheduler is a very simple case in that it just needs a way to deploy a manifest file to /srv/kubernetes/manifests/ on the controllers. Above you asked about `iam-policies`, which I think we'll need for other use cases but not if we first target the rescheduler. Are we also targeting the export tool or something else with the first attempt?
On the structure, could the built-in worker plugin structure mirror the controller one? I think it would be better if `worker/` and `controller/node/manifests/` were the same. Also, will we have other `builtin` files? The `/builtin/plugins/` seems to imply we plan on adding something else? (it's different to the `adhoc-plugins` style)
Ah, you are correct, `iam-policies` is too much for the rescheduler plugin.
I agree that we should start with a single use-case.
I just want to design the structures of plugins before we start so that we don't need to change the plugin interface a lot later.
> On the structure, could the built-in worker plugin structure mirror the controller one?
Yes
> I think it would be better if `worker/` and `controller/node/manifests/` were the same.
As manifests are deployed cluster-wide by `kubectl apply -f` on controller nodes as of today, I assumed it would not make sense to place them under `worker/` and hence run the command on worker nodes.
> Also, will we have other `builtin` files? The `/builtin/plugins/` seems to imply we plan on adding something else?
Nothing in particular, but I just wanted not to name it like `builtin-plugins` (not a valid golang package name AFAIK) or `builtinplugin` (I want to avoid this one as long as `plugin` is unique enough as the golang package name).
> (it's different to the `adhoc-plugins` style)
I assumed that we just have a different naming convention and structure between a work tree (managed by users) and the kube-aws project's file tree (managed by developers). For example, there have been no hyphens in directory names inside the project tree. I also assumed that if we make some plugins "builtin", they will be bundled into kube-aws binaries, so making directory names valid go package names would make life a bit easier 😄
I see, ok, I hadn't realised you intended for the built-in files to be a go package; I thought it was just some template files we'd be bundling. If it makes life easier to use a valid go package name then I'm all for it.
@mumoshu @camilb I was thinking about the Dex support from https://github.com/kubernetes-incubator/kube-aws/pull/568 and various other places we've added support for addons like kube2iam. I was wondering if some items like Dex can almost be covered by Helm and it seems like that's mostly true.
So, my idea is to allow the plugin system to bootstrap Helm charts generically; then all we would need to enable for Dex, judging from the changelog, is some hooks to take the few `--oidc` flags in the controller setup. That could also be coded generically, to allow Helm Chart values to be passed into cluster bootstrap, i.e. just like we are allowing a folder with plugin values to be specified, we also allow Helm values.yml files to be specified for each chart that kube-aws would preinstall.
I'm not saying we do it in the Dex PR as I'm really looking forward to that feature! However, I'd like to get your thoughts on the idea? Using Helm as the installation mechanism has a few advantages including:
- `--oidc` flags don't change

> So, my idea is to allow the plugin system to bootstrap Helm charts generically

+1

> Opens us up to the Helm and Helm Charts community so more likely that each Chart gets the attention it deserves

+1
Btw, so - can we deploy tiller by default in all the clusters created by kube-aws? (which I personally would like to 😄)
I would say yes for the default to install Tiller. We could make it optional and error out if any Charts have been added but Tiller is off. Or we could install it if and only if any Charts are being bootstrapped.
@c-knowles My first idea was only to enable the flags in the API server and deploy `dex` using `helm` or `kubectl`. I agree with you to use `helm` and reduce the complexity in `kube-aws`.
Great, I'm going to have a think about exact design here and propose something.
Interesting CoreOS mentions an Operator controlled Dex so maybe we'd like to support Operators as plugins as well.
@c-knowles Sorry for the long silence but did you come up with any design? 😃
@mumoshu still thinking about how best to structure it, I will post something soon.
@c-knowles Thanks! I've tried to sketch a design doc myself in my free time. You can view it in my gist. Please feel free to fork, rewrite, comment on it. One thing I believe I'm missing in the doc is an implementation plan - what steps we can take to implement a complete plugin architecture incrementally.
Updated my design proposal with expected use-cases. Any comments on it are welcomed!
Should have something in the next few days
@mumoshu sorry for the delay! I've been thinking a lot about how we can improve kube-aws to be more modular. The good news is I've posted my ideas here. The "bad" news is that it's quite a lot wider in scope than I had originally anticipated but we could still pick it up gradually rather than all in one go if we wished to proceed with it. I'm happy to take feedback and work out what best fits with the roadmap goals, let's discuss soon on Slack perhaps. I took inspiration from various other projects with plugins or that are quite modular like kops. I've also read your proposal as well and it makes sense, my proposal is a bit more about a key change in how kube-aws is structured whereas yours is a more specific about how we'd implement it.
@c-knowles Thanks for the great write-up! Let me think a bit more about how we'd get started, perhaps by writing the first experimental implementation of the plugin system until v0.9.8-rc.1.
Also, would you mind submitting your great design proposal to this repo under a new `proposals/` directory like CA does?
@c-knowles Do you think that we need to provide users the ability to run arbitrary scripts (or golang programs?) in any of the stages?
After reading the explanations for the stages 1. `initialise`, 2. `render` and 4. `validate`, it seems like those stages may require user-provided scripts/programs.
Built-in plugins would be implemented in go whereas contrib and adhoc plugins would be implemented in yaml and/or with go-plugin. Thoughts?
@mumoshu Sorry for the delay, I've created a PR above now. I think we could proceed with a plugin in experimental mode next. I think each stage allows for custom code but not all plugins have to execute something at each stage. I'd like to keep the same structure/pattern for different types of plugins so we do not maintain multiple patterns, however I'm less sure about how well it's going to work in practice if we use go-plugin for everything.
I'm going to drop the custom validations part of the plugin spec for now, mainly because I could not come up with a good-enough detailed spec and implementation of those 😢
Looks like plugins can be a justification for a ground-up rewrite of the whole of kube-aws. If we do so, we might as well take a kube-like resource approach to config: have a Swagger (OpenAPI) schema defining all resources, then all code regarding parsing and basic validation will be generated. Plugins will receive those resources as input and produce modified versions of them as output.
Then resources can be evaluated (executed), with hooks to call plugins at different stages to alter the evaluation process.
It is totally not backwards compatible, but it can bring the kube-aws codebase into a much better shape. It is very risky, however, as if such a rewrite isn't done in ~a month it is likely to become a never-ending project with low momentum :(
What do you think?
@redbaron Forgive me for replying after the long silence!
> Then resources can be evaluated (executed), with hooks to call plugins at different stages to alter the evaluation process.
Sounds like a great idea. Yeah, but making it a never-ending project should be avoided, so I'd suggest making gradual changes towards that.
Firstly, I'd introduce a "naive" (sorry to anyone who felt uncomfortable, that's not my intention) but practical plugin system like "helm plugins" as documented in https://github.com/kubernetes/helm/blob/master/docs/plugins.md.
I'd implement several hooks and extension points so that a plugin could add a kube-aws subcommand and/or inject automatically executed script(s) before/after each fine-grained step of kube-aws. Communication between plugin-provided commands and kube-aws would be made via environment variables, the local filesystem, or anything accessible through e.g. a shell script.
Btw, we already have a hidden plugin system which allows you to write a `plugin.yaml` to inject various cfn snippets into the stack-templates. My suggested feature would complement it.
The next step would be implementing what you suggested: a more formally defined, serious plugin system that provides near-infinite extensibility to kube-aws.
WDYT?
@mumoshu Sounds good to me. I agree about the neverending part, nothing to stop us only providing some use cases we need sooner rather than later.
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Would it be useful/possible to introduce "kube-aws plugins" to define sets of configurations and resources (settings, k8s manifests, required iam policies, etc.) used to extend kube-aws, without complicating the core of kube-aws?
For example, I guess that with this feature, #507 could be implemented as a plugin composed of:
Design 1

```
plugins/enabled/
  <unique name of an instance of plugin e.g. cluster-dump>/
    controller/
      manifests/
        cluster-dump.yaml.tmpl  # with the content of cluster-dump.yaml
      iam-policies/
        allow-puts-to-s3-path.json.tmpl  # with the content of the iam policy required by cluster-dump
```

Intentions:
- `plugins/enabled/<unique name of an instance of plugin>` so that `plugins/available/<unique name of a plugin>` is for defining a plugin = a template of instances of the plugin
- the `controller` directory is there to place controller node specific files and settings

Design 2

```
plugins/available/
  <unique name of a plugin e.g. cluster-dump>/
    controller/
      manifests/
        cluster-dump.yaml.tmpl
      iam-policies/
        allow-puts-to-s3-path.json.tmpl
cluster.yaml
```

and we bundle default plugins into kube-aws binaries, but putting files into `plugins/available` would allow customizing a default plugin.

@c-knowles I can't locate the exact link to it but if I remember correctly, we've discussed before about introducing "plugins" to make kube-aws extendable by users?