Closed mattmoor closed 7 years ago
@sebgoa @dlorenc @jmhodges FYI
@r2d4 @dlorenc FYI...
I've been playing around a bit here, and have a prototype of some rules for managing a k8s `Deployment` here. These are highly experimental.
I've been playing around with using these to deploy different environments from a single template in my bazel-grpc "Hello World" app. You can explore the README in mattmoor/rules_k8s, but what I'd expect to become the main workhorse for development would be:

```
bazel run :dev.replace
```
At least for this relatively simple app, if I make some edits the above command takes <10 seconds to have the new app running on my cluster (including C++ compilation, image packaging, image pushing, and `kubectl replace`). Clearly this will degrade with slower compilation, a bigger "app" layer, and/or more containers, but it is likely even faster for uncompiled languages whose "app" layer is essentially a handful of source files.
It is notable that there is very little `Deployment`-specific logic in this; just a handful of commands take a `kind`. There is likely extensive opportunity for code re-use on other resource types.
Errata / TODO / Stuff I still don't like:
- `minikube`. Maybe we can make this a degenerate case of "which cluster?" (above).
- `docker save` with all referenced images + the instantiated yaml.

FYI I see 5. above as something to go hand-in-hand with a `kubectl load` or `minikube load` command.
FWIW, I had a demo bug (I hadn't dropped `:image.tar`), so this is actually ~6 seconds :)
I'd like to be able to build a container, push it to the docker daemon running in a minikube cluster, and then create a deployment using that container.
I've tried out the following rule from mattmoor/rules_k8s (with the address to the minikube VM hardcoded while testing):
```python
k8s_object(
    name = "foo_deploy",
    cluster = "minikube",
    images = {
        "192.168.99.100:2376/test:latest": ":foo_build",
    },
    kind = "deployment",
    substitutions = {
        "name": "test",
        "replicas": "1",
        "port": "50053",
    },
    template = ":deployment.yaml.tpl",
)
```
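For context, the `substitutions` above presumably fill placeholders in `deployment.yaml.tpl`. A hypothetical template consistent with the keys shown (the exact placeholder syntax used by the prototype may differ) might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {name}
spec:
  replicas: {replicas}
  template:
    metadata:
      labels:
        app: {name}
    spec:
      containers:
      - name: {name}
        image: 192.168.99.100:2376/test:latest
        ports:
        - containerPort: {port}
```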
Running `eval $(minikube docker-env)` followed by this rule gives me the following error:

```
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/__main__.py", line 133, in <module>
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/__main__.py", line 120, in main
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/client/v2_2/docker_session_.py", line 71, in __init__
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/client/v2_2/docker_http_.py", line 177, in __init__
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/client/v2_2/docker_http_.py", line 199, in _Ping
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/containerregistry/transport/transport_pool_.py", line 62, in request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1659, in request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1399, in _request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1319, in _conn_request
  File "../io_bazel_rules_k8s/k8s/push_and_resolve.par/httplib2/__init__.py", line 1092, in connect
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
usage: resolver.par [-h] [--override OVERRIDE]
resolver.par: error: argument --override: expected one argument
```
which I assume suggests that the DOCKER environment variables with the path to certs etc. are currently not picked up. Is there a way around this, or does google/containerregistry not support this?
Omitting the `images` in the rule, the deployment is successfully created in minikube (however it of course fails to fetch an image).
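For illustration, the environment variables that `minikube docker-env` exports could be consumed roughly like this (a sketch only; `docker_tls_config` is a hypothetical helper, not part of google/containerregistry):

```python
import os


def docker_tls_config():
    """Read the DOCKER_* variables set by `eval $(minikube docker-env)`.

    Returns None when TLS verification is not requested; otherwise the
    daemon address and the cert/key/CA paths a client would need in
    order to verify the daemon's certificate.
    """
    if os.environ.get("DOCKER_TLS_VERIFY") != "1":
        return None
    cert_path = os.environ["DOCKER_CERT_PATH"]
    return {
        "host": os.environ.get("DOCKER_HOST", ""),
        "ca_certs": os.path.join(cert_path, "ca.pem"),
        "cert_file": os.path.join(cert_path, "cert.pem"),
        "key_file": os.path.join(cert_path, "key.pem"),
    }
```

Something along these lines would let the push path verify the daemon's certificate instead of failing the handshake.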
@dlorenc @r2d4 FYI.
Indeed, I have not fully worked out the appropriate interaction with minikube in my prototype. Frankly, I am glad it worked as nearly as you describe! However, minikube is clearly one of the core scenarios I'd like this to support in a real `rules_k8s`.
For minikube, one of the options I'd considered previously was side-loading the containers into the Docker daemon (e.g. via `docker load`), but if we can achieve minikube support without a fork in the path this would certainly be preferable. Does minikube natively have a registry running on it that folks use as you describe?
The google/containerregistry library does not currently support the environment variables you describe, but it should probably be made to support them. Certainly if that's the biggest blocker for minikube support.
We don't have a docker registry by default, but it's possible to run one in minikube. You then need to make sure your pods all reference the in-cluster registry namespace for containers, though.
The env vars probably make the most sense.
As far as I know, minikube only has a docker daemon running by default (see https://github.com/kubernetes/minikube/blob/master/docs/reusing_the_docker_daemon.md).
You can then use `minikube docker-env` to get access to this daemon:

```
$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="...."
export DOCKER_API_VERSION="1.23"
```
Using `docker load` would be an option, but it would indeed be very neat if `k8s_object/deploy` was able to load it to a daemon directly (a local daemon, or one according to the environment variables above).
@niclaslockner Sure, what I'd meant was that I'd want `k8s_object` to support `docker load` when targeting minikube and `docker_push` when targeting a proper cluster.
Considering we already have users specify the `cluster` name, we could have a separate / analogous configuration for minikube that triggers this path automatically; I was just hoping to avoid the dual logic internally.
What I'm thinking is something like:

```python
k8s_defaults(
    name = "k8s_local_deploy",
    kind = "deployment",
    minikube = True,  # Use minikube CLI to determine the rest.
)
```
If juggling multiple minikubes is a thing (and they are distinguished by cluster name) then perhaps a better interface would be a parallel `minikube_defaults` rule with an identical signature:

```python
minikube_defaults(
    name = "k8s_local_deploy",
    kind = "deployment",
    cluster = "my-local-cluster",
)
```
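The dual logic under discussion could be sketched as follows (hypothetical helper, not an actual rules_k8s function; it only builds the command to run):

```python
def publish_command(image_tar, tag, minikube=False):
    """Sketch of the two publishing paths being discussed.

    - minikube: side-load the image tarball into the daemon the cluster
      uses (i.e. `docker load` after `eval $(minikube docker-env)`).
    - otherwise: push the tagged image to a real registry so the
      cluster can pull it.
    """
    if minikube:
        return ["docker", "load", "-i", image_tar]
    return ["docker", "push", tag]
```

A `minikube_defaults` rule would effectively pin `minikube=True` for every target instantiated from it, keeping the fork in the path out of user BUILD files.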
I wanted to surface my current thinking around how these rules will manifest in the near term, and solicit feedback.
My current prototype conflates three things:
1. `build` (w/ `substitutions`),
2. `run` (w/o `images`), and
3. `run` (w/ `images`).

I think that the most immediate value of these rules is delivering on #3, and enabling tight iteration. I believe #2 can be viewed as a slight extension to this.
I want to punt on substitution in v1 for a few reasons:
- `rules_jsonnet` or `rules_ksonnet` can be used, so inline substitution isn't required.
- As with `rules_[jk]sonnet`, I think we may want this handled via an external rule.

So in the immediate term, I think the surface I will target (with each bullet as an increment of functionality) is:
- `build :foo`: largely a no-op that returns the .yaml it is passed.
- `run :foo` (w/o images): resolve tags to digests.
- `run :foo` (w/ image): publish listed images, resolve the rest tag => digest.
- `run :foo.{bar}`: for `{bar}` in `create/replace/delete` (available iff `cluster` is specified).
- `run :foo.describe`
- `run :foo.expose`: and other ad hoc actions.

How important do folks think templating is? Does my logic here make sense? I'd appreciate any feedback here.
> build :foo: largely a no-op that returns the .yaml it is passed.

Would this also build any images referenced in the yaml?

> run :foo (w/o images): resolve tags to digests. run :foo (w/ image): publish listed images, resolve the rest tag => digest.

I'm not sure I understand the difference here. Is this about whether :foo is an image, or if it references one?

> run foo.expose: and other ad hoc actions.

Is the idea to completely wrap kubectl with these extra actions?
@dlorenc I'm not sure that in the first increment I'll even expose the kwarg, but once it's there it would, because they'd be runfiles of the executable version.
The difference is:
- w/o `images`: the tag => digest resolution is based on what's currently published.
- w/ `images`: the tag => digest resolution is an output of publishing `images`.

Technically, with multiple image references (and a partial `images` override) you could get a mix of behaviors.
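That distinction can be sketched as follows, assuming a hypothetical `lookup_published_digest` callback that queries the registry (not an actual rules_k8s or containerregistry API):

```python
def resolve(image_refs, pushed_digests, lookup_published_digest):
    """Map each tag reference in the yaml to a digest reference.

    - Tags in `pushed_digests` (supplied via the `images` kwarg and
      just published): use the digest that the push produced.
    - All other tags: fall back to whatever is currently published.

    With a partial `images` override, some references resolve one way
    and some the other -- the "mix of behaviors" noted above.
    """
    resolved = {}
    for tag in image_refs:
        if tag in pushed_digests:
            resolved[tag] = pushed_digests[tag]
        else:
            resolved[tag] = lookup_published_digest(tag)
    return resolved
```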
We don't need to fully wrap `kubectl`, but it was convenient when paired with templating because the deployment name could/would vary. In this more static world, perhaps we stop at the basics.
Regarding templating, I think that if we head down that route (in a further increment), we should adhere to this accepted K8s design proposal.
I have created the repo: https://github.com/bazelbuild/rules_k8s
I will start adding some of the elements of my prototype there as I break off pieces and clean them up. I have enabled issues on that repo, so let's discuss further topics there.
I am opening this issue to track discussions around what shape `rules_k8s` might take, and to enumerate the kinds of scenarios folks would like to see `rules_k8s` cover.