xtruder / kubenix

Replaced by https://github.com/hall/kubenix

wip k3s support #18

Closed: xavierzwirtz closed this 1 year ago

xavierzwirtz commented 4 years ago

Very, very draft k3s support. This adds a distro setting to the tests, allowing you to swap between nixos and k3s. k3s is mostly working, though there are still a few issues to work through.

Other than that, I think the bones of the k3s support are working. Happy to work on this more to get it up to snuff.

xavierzwirtz commented 4 years ago

The only issue keeping the k8s-deployment test from passing with k3s now is DNS. Not sure what the problem is there yet.

xavierzwirtz commented 4 years ago

The DNS issue is now resolved. Single-node tests can now be run on k3s by adding distro = "k3s"; to the test's config. I duplicated the k8s-deployment test as k8s-deployment-k3s. @offlinehacker I would be interested to hear your thoughts on a backend-independent testing abstraction. The way this is implemented now works, but it feels kludgy.
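
For illustration, a single-node k3s test now boils down to something like this (a minimal sketch: only the distro option is what this PR adds; the surrounding attribute names stand in for the rest of the kubenix test module, which is elided):

```nix
{
  # Sketch only: `distro` is the option added by this PR; the remaining
  # kubenix test attributes (resources, assertions) are elided.
  name = "k8s-deployment-k3s";
  distro = "k3s"; # default is "nixos"; "k3s" runs the same test on k3s
}
```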

offlinehacker commented 4 years ago

Thank you for the initial implementation of the k3s integration. While the k3s/NixOS integration itself is generally good, I think the testing framework needs to be replaced. I was thinking of doing the following:

Use kubetest, a Python framework for testing:

Using a NixOS-specific framework has several limitations: it couples us to NixOS, and the Perl-based NixOS tests are being replaced with Python tests anyway. Using kubetest would not only allow running tests without booting NixOS inside a VM, but would also make them completely independent of the k8s distro. On top of that, any Python library can be used (imagine connecting to a database or queue within a test).
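
As a rough sketch of the idea (hypothetical, not existing kubenix API; it assumes pkgs is a nixpkgs instance whose environment already provides pytest with the kubetest plugin, which this sketch does not show how to package):

```nix
# Hypothetical sketch: kubetest is a pytest plugin, so a wrapped test run
# reduces to pointing pytest at a kubeconfig for whatever cluster is under
# test, independent of the k8s distro.
pkgs.writeShellScriptBin "run-kubetest" ''
  # --kube-config is kubetest's pytest option for selecting the cluster
  pytest --kube-config "$KUBECONFIG" tests/
''
```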

Integrate telepresence

Telepresence makes a tunnel to Kubernetes so that a local service behaves as if it were running inside the cluster. We would have to set a SOCKS proxy inside the Python process and use LD_PRELOAD for spawned processes. Integrating kubetest with telepresence would also be nice to have.

Docker?

One question is why we need Docker as an additional layer: Docker itself uses containerd underneath, so it just adds complexity. I am completely OK with running k3s on containerd directly, or did you have some other issues?

I am currently on vacation for roughly one more week; when I get back I will be able to work on the test refactoring and integrate your work.

xavierzwirtz commented 4 years ago

containerd worked great; I just didn't want to break compatibility with any of your existing tests that use docker commands. Swapping back to containerd is easy: remove --docker and switch over to the containerd commands for loading images.
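
Roughly like this (a sketch; the tarball path is a placeholder, and k3s ctr images import is the containerd-side replacement for docker load, using the ctr CLI that k3s bundles):

```nix
{
  # Sketch: load a pre-built image tarball into k3s's bundled containerd
  # from the (Python) NixOS test driver, instead of `docker load`.
  testScript = ''
    machine.succeed("k3s ctr images import /path/to/images.tar")
  '';
}
```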

Kubetest and telepresence look interesting, though I don't think I have used kubenix enough yet to give an educated opinion. What I really love about how things currently work is not having to worry about any state carrying over between test runs. It has made learning Kubernetes so much easier.

xavierzwirtz commented 4 years ago

The more I use the current testing framework, the more frustrated I become with it. The current issue I am hitting is that k3s does not include all the images needed to use the local-path storage class in its airgap image set. This took a while to discover, because the NixOS testing library disables the airgap firewall rules when you use .driver to run the tests with a debug REPL. For a pleasant user experience, it feels like first-class support for running tests on a real Kubernetes cluster is critical; otherwise there will be constant friction between the Kubernetes distro you test on and the one you deploy to. This is probably just reiterating thoughts you have already had yourself.

blaggacao commented 3 years ago

I'd love to see this move forward, as k3s is (not only) my go-to choice for k8s deployments.

offlinehacker commented 3 years ago

I am not working on this actively, as I currently use other tooling for Kubernetes deployments (pukumi), but I am happy to review and merge if someone has interest in this project.

blaggacao commented 3 years ago

> I am not working on this actively, as I currently use other tooling for Kubernetes deployments (pukumi), but I am happy to review and merge if someone has interest in this project.

I see. Would you be open to ceding maintainership/ownership of kubenix to somebody willing to evolve it further?

offlinehacker commented 3 years ago

Yes, I would be very open to that, and I would be very happy if someone continued these efforts 🙂 I would also be open to discussing ideas, as I have some experience building and refactoring this project and know the different issues I ran into.

adrian-gierakowski commented 3 years ago

@offlinehacker did you mean https://www.pulumi.com? May I ask what the reason was for choosing it over kubenix? Thanks!

adrian-gierakowski commented 3 years ago

Btw I’d be happy to work on developing kubenix as I use it at work and should be able to commit a significant amount of time to it. Happy to hop on a call to discuss this further.

colemickens commented 3 years ago

@offlinehacker I'm also curious if you're using Pulumi + Nix in any novel way, or just regular Pulumi+TS to manage resources?

(Re: kubenix: I've always been on the edge of adopting kubenix, but wasn't sure whether it had other (non-@offlinehacker) users or a future. Knowing others are interested, even just this much, helps alleviate some of that fear.)

blaggacao commented 3 years ago

I'm going to go all in on kubenix. It is "brilliant" (quote: zimbatm). In my opinion, a generalized Nix (and later Nickel) DSL is strategically superior to a (special-purpose) Pulumi TS-based "DSL" (especially so in a context like https://github.com/divnix/devos/issues/130, where people would want to manage not only k8s but the whole environment).

blaggacao commented 3 years ago

> Btw I’d be happy to work on developing kubenix as I use it at work and should be able to commit a significant amount of time to it. Happy to hop on a call to discuss this further.

@offlinehacker You could just transfer the repo to @adrian-gierakowski (Adrian would need to rename his current fork first). Would that be something? I'm currently very engaged in the offline world, but you can definitely expect input similar to what I'm currently doing on divnix/devos.

offlinehacker commented 3 years ago

@colemickens

I am just using Pulumi, mostly with Kubernetes operators. I gave up on trying to maintain my own Nix-based ecosystem, as I don't see that many benefits. I had ideas not only to build static resources with kubenix, but also to run dynamic Kubernetes operators from Nix expressions, but I lost some motivation. I hope someone else can continue these efforts and make it more usable.

Here is a quite advanced Pulumi example I use: https://github.com/xtruder/pulumi-extra/blob/master/resources/k8s/postgres-operator.ts. This is something that is not possible with the purely static generation kubenix does, as it involves quite a bit of dynamic orchestration.

blaggacao commented 3 years ago

@offlinehacker Does this operator run within the cluster? I believe anything that goes in the direction of an operator is out of reach for a (declarative) configuration language, while deploying such an operator would probably be in scope.

adrian-gierakowski commented 3 years ago

@blaggacao I've renamed my fork, but maybe it would be better to create a kubenix org? Unfortunately the name seems to be taken. @offlinehacker, have you created that org? Btw, I'd be able to start working on this next week. Shall we arrange a call to discuss the direction in which we'd like to take this project?

blaggacao commented 3 years ago

> Shall we arrange a call to discuss the direction in which we'd like to take this project?

Please set a time that is OK for you; I can adapt completely.

Click the following link to join the meeting: https://meet.jit.si/kubenix

=====

Click this link to see the dial in phone numbers for this meeting: https://meet.jit.si/static/dialInInfo.html?room=kubenix

blaggacao commented 3 years ago

Maybe it could go under nix-community?

adrian-gierakowski commented 3 years ago

> Maybe it could go under nix-community?

Sounds OK. What do you think, @offlinehacker?

> Shall we arrange a call to discuss the direction in which we'd like to take this project?

> Please set a time that is OK for you; I can adapt completely.

I'm very flexible as well. @offlinehacker, do you think you'd be able to find some time for this sometime this week or next? If so, I'd defer to you regarding picking the time for the meeting.

blaggacao commented 3 years ago

https://nix-community.org/#how-do-i-get-my-project-included

adrian-gierakowski commented 3 years ago

@offlinehacker I’m planning to dedicate 2-3 days next week to working on kubenix and I think it would be really helpful to get some input from you before kicking off. Do you think we could have a chat sometime next Tuesday/Wednesday? Thanks!

adrian-gierakowski commented 3 years ago

@offlinehacker I understand that you might be busy, so it would be great if you could at least give your blessing for me to post a message on NixOS Discourse announcing that I’m going to work on developing kubenix and asking for feedback from the community regarding the roadmap.

I also really don't mind which repo the development continues in. The nix-community or a kubenix GitHub org would seem preferable, but I'd be just as happy for the project to stay where it is, as long as I'm added as a maintainer.

Thanks!

blaggacao commented 3 years ago

I combined this with #27: https://github.com/xtruder/kubenix/pull/29

blaggacao commented 3 years ago

@xavierzwirtz What was the original reason not to use nixpkgs' services.k3s? Did it not exist at the time of writing?
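
(For reference, enabling it boils down to something like the following; a minimal sketch using options from that nixpkgs module, with everything else left at defaults:)

```nix
{
  # Minimal single-node setup via nixpkgs' services.k3s module.
  services.k3s.enable = true;
  services.k3s.role = "server";
}
```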

xavierzwirtz commented 3 years ago

It's been a while; I don't remember a good reason either way.


offlinehacker commented 1 year ago

This repo has been deprecated, since I stopped maintaining it some time ago. There is a fork maintained by @hall at https://github.com/hall/kubenix, which has better documentation and looks like the way forward.