Closed · acdoussan closed this 1 year ago
Thank you. I think this is a good middle ground to allow someone to install a list of packages without too many changes or dependencies. Why stop at just workers? What about servers?
Also, it would be awesome to write a molecule test for this. Install the package and verify it's there.
Whatcha mean by servers? Like the proxmox servers?
The nodes are VMs / containers; I don't know if it makes sense to define what packages you want on your proxmox servers within this playbook. This just lets you add packages to the worker nodes if you know you need them for your applications that will be running on the nodes.
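To make that concrete, the kind of task I have in mind looks roughly like this (a sketch only; the variable name and role location are placeholders, not necessarily what this PR uses):

```yaml
# Sketch: e.g. somewhere in the worker/agent role.
# `extra_worker_packages` is an illustrative variable name.
- name: Install user-requested packages on worker nodes
  ansible.builtin.package:
    name: "{{ extra_worker_packages }}"
    state: present
  become: true
  when: extra_worker_packages | default([]) | length > 0
```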
I did briefly think about molecule, but it looks like the tests use the sample inventory, and I wasn't sure about the sample actually installing packages. Would you have a preferred place / way to set this only for a test? Or do you think it's fine for the sample to have something that would actually get installed?
oh, this is probably what the overrides file is for?
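If so, a test-only value could be injected there, something like the sketch below (the group name, variable name, and the assumption that the overrides file is a playbook that sets facts are all mine, and may not match how the repo's molecule setup actually works):

```yaml
# Sketch: set a test-only package list on the worker group during molecule runs.
- name: Override test variables
  hosts: node
  tasks:
    - name: Request a test package on the worker nodes
      ansible.builtin.set_fact:
        extra_worker_packages:
          - htop
```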
Not sure if that update does anything, or if it's the reason the default is failing; I was trying to avoid downloading all of the stuff and just let the CI handle it, but I might have to.
This is failing because the package isn't available on Rocky, only Ubuntu, so you'll have to modify the tests to only install on node1, not node2.
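Roughly, something like this in the test setup would scope it (a sketch only; the variable name and the package value are illustrative, and node1/node2 refer to the Ubuntu and Rocky instances respectively):

```yaml
# Sketch: only request the Ubuntu-only test package on node1 (Ubuntu),
# skipping node2 (Rocky). "some-ubuntu-only-package" is a placeholder.
- name: Request the test package only on node1
  ansible.builtin.set_fact:
    extra_worker_packages:
      - some-ubuntu-only-package
  when: inventory_hostname == "node1"
```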
Also, when I mentioned the servers, I meant the master nodes, since I think this is going to be the next thing someone asks for.
> This is failing because the package isn't available on Rocky, only Ubuntu, so you'll have to modify the tests to only install on node1, not node2.
ah, that would explain it...
I'm not sure if that is even the best way to write that test; let's see if @sleiner has a better alternative. Probably better to use a package that is available on both distros, though.
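For reference, the verify step could check it along these lines (a sketch under assumptions: the file path is a guess, and `curl` is just an example of a package available on both Ubuntu and Rocky):

```yaml
# Sketch of a verify play, e.g. molecule/default/verify.yml.
- name: Verify extra packages are installed
  hosts: node
  tasks:
    - name: Gather installed packages
      ansible.builtin.package_facts:
        manager: auto

    - name: Assert the test package is present
      ansible.builtin.assert:
        that:
          - "'curl' in ansible_facts.packages"
```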
> Also, when I mentioned the servers, I meant the master nodes, since I think this is going to be the next thing someone asks for.
Oh gotcha, no user workloads run on the control nodes, right? I would think that any packages that need to be installed on the control nodes are the responsibility of the playbook, not the user. I do like the idea @sleiner proposed with `extra_packages`, but I think this depends on the mindset of allowing this.
My thought process is that the playbook is setting up a k3s cluster to run my applications, and my applications need X packages to run, so it's the responsibility of the playbook to make sure these are installed. Making sure that other packages are installed so that I can use the hosts for more than just k3s isn't really in this scope. I'm not sure we really want to offer to install extra packages on the control nodes, but admittedly it would probably be relatively easy to do it for any host, so I'm happy to take either approach.
Hey, sorry, but I am going to close this PR; I should have when it was opened. The more I think about it, the more it doesn't make a lot of sense for this playbook to take on the dependencies of something that's not required to run k3s. While there might be features and changes that are seemingly easy to add, at the end of the day it introduces complexity to this playbook that is either unnecessary or untested. Thank you again for the suggestion, and thank you @sleiner for putting in time to do a code review even though this isn't a potential merge candidate. Thanks for understanding.
No problem! Sorry for going MIA for a while, work has been consuming most of my tech energy recently.
I've also been thinking about this and I think the real problem is that the playbook is the "interface" rather than the roles. This means if you want to do anything custom, you have to diverge from the upstream and figure out how to pull down changes while also keeping your changes. Not a huge deal with a separate branch and rebases, but can lead to merge conflicts and just general headaches.
geerlingguy has a somewhat similar mac dev playbook, but the roles there are the "interface". This is accomplished by publishing the roles to Ansible Galaxy as a collection, and instead of forking his playbook, there is a link to a repo template to get you started. This way you still get all of the setup you would from a fork, but you actually own the repo and can just add tasks like any other playbook without worrying about diverging. Updates come from pulling the new role versions.
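To illustrate the pattern (the collection and role names below are hypothetical, nothing like this is published for this repo today), the user-owned repo would pull the roles as a versioned dependency:

```yaml
# requirements.yml -- pull the roles as a versioned collection instead of forking
# the playbook. `techno_tim.k3s` is a hypothetical namespace/name, see issue #198.
collections:
  - name: techno_tim.k3s
    version: ">=1.0.0"
```

and your own playbook could then mix the upstream roles with custom tasks:

```yaml
# site.yml in your own repo, created from a template. Group and role names
# are illustrative.
- hosts: node
  become: true
  roles:
    - techno_tim.k3s.k3s_agent   # hypothetical fully-qualified role name
  tasks:
    - name: Install packages my workloads need
      ansible.builtin.package:
        name:
          - htop
          - nfs-common
        state: present
```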
I think there is already an open issue for this (https://github.com/techno-tim/k3s-ansible/issues/198) so this is probably where effort should be focused to allow this 🙂
Added some closing thoughts to the discussion if anyone comes along and is wondering what their options are.
Discussion can be found here: https://github.com/techno-tim/k3s-ansible/discussions/259
@acdoussan I just saw that there were some open review questions for me that I forgot about - sorry about that :(
no problem, probably better this way since you didn't waste any extra time :)
Proposed Changes
Tested by deploying to a test cluster with the following extra packages, and verifying they were present using `apt list --installed | grep <package name>`. Also ran the reset script and verified that they were uninstalled using the same method.

Checklist
- `site.yml` playbook
- `reset.yml` playbook