Could you share a code example and the command you ran?
Here is a code example (imported as a module):
{
  kubernetes.resources.namespaces.test-namespace = { };
  kubernetes.resources.pods.example = {
    metadata.namespace = "test-namespace";
    spec.containers.nginx.image = "nginx";
  };
}
Running this gives the error :(
$ nix run .#kubenix.x86_64-linux
Error from server (NotFound): namespaces "test-namespace" not found
Doing something like this in YAML does work though:
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: test-namespace
spec:
  containers:
    - name: nginx
      image: nginx
$ kubectl apply -f ./repro.yaml
namespace/test-namespace created
pod/nginx created
Maybe this is caused by the diffs?
Ah wait, the order in the YAML file does matter: if I put the pod definition before the namespace definition, I get the same error.
Changing the kubernetes.resourceOrder option to the following also doesn't fix this issue:
kubernetes.resourceOrder = [
  "Namespace"
  "Pod"
];
The default order should already put the namespace first, so maybe something messes up the ordering on the way from Nix to the final YAML?
https://github.com/hall/kubenix/blob/76b8053b27b062b11f0c9b495050cc55606ac9dc/modules/k8s.nix#L328
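For my own understanding, this is roughly what I'd expect that sorting step to do (a hypothetical sketch using only Nix builtins, not the actual code from k8s.nix): rank each resource by the position of its kind in resourceOrder, with unlisted kinds sorting last.

let
  resourceOrder = [ "Namespace" "Pod" ];
  total = builtins.length resourceOrder;
  # Position of a kind in resourceOrder; kinds not listed sort last.
  rank = kind:
    let go = i:
      if i >= total then total
      else if builtins.elemAt resourceOrder i == kind then i
      else go (i + 1);
    in go 0;
in
  builtins.sort (a: b: rank a.kind < rank b.kind) [
    { kind = "Pod"; metadata.name = "example"; }
    { kind = "Namespace"; metadata.name = "test-namespace"; }
  ]

Evaluating that puts the Namespace entry first, so the question is whether the generated list still looks like that.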
Could you run nix eval --json on generated to see if the order is right? If it's not, then something is wrong with the ordering logic.
How do I do that exactly? I'm using Kubenix as a flake output with outputs.kubenix = kubenix.packages.${system}.default.override {}.
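For context, the flake wiring looks roughly like this (a trimmed sketch; the module argument name and the ./repro.nix path are assumptions, not copied from my actual flake):

{
  inputs.kubenix.url = "github:hall/kubenix";

  outputs = { self, kubenix, ... }:
    let system = "x86_64-linux";
    in {
      # nix run .#kubenix.x86_64-linux resolves to this wrapper script.
      kubenix.${system} = kubenix.packages.${system}.default.override {
        # Assumed argument name: the module from the top of this issue,
        # saved as ./repro.nix.
        module = import ./repro.nix;
      };
    };
}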
ok, so if you run:
nix build .#kubenix
you can then inspect the generated manifest with:
jq . result/manifest.json
which should show something like:
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Namespace",
      "metadata": {
        "annotations": {
          "kubenix/k8s-version": "1.27",
          "kubenix/project-name": "kubenix"
        },
        "labels": {
          "kubenix/hash": "dc73d66262ab324ab024e6f3258fff7ad526113b"
        },
        "name": "test-namespace"
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "annotations": {
          "kubenix/k8s-version": "1.27",
          "kubenix/project-name": "kubenix"
        },
        "labels": {
          "kubenix/hash": "dc73d66262ab324ab024e6f3258fff7ad526113b"
        },
        "name": "example",
        "namespace": "test-namespace"
      },
      "spec": {
        "containers": [
          {
            "image": "nginx",
            "name": "nginx"
          }
        ]
      }
    }
  ],
  "kind": "List",
  "labels": {
    "kubenix/hash": "dc73d66262ab324ab024e6f3258fff7ad526113b",
    "kubenix/k8s-version": "1.27",
    "kubenix/project-name": "kubenix"
  }
}
with Namespace first in the array
this works for me:
kubectl apply -f result/manifest.json
with output:
namespace/test-namespace created
pod/example created
but I got the same error as you when running:
kubectl apply --dry-run=server -f result/manifest.json
the error you are getting is from the diff part of the script, see:
nix run .#kubenix diff
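For illustration: kubectl diff also does a server-side dry run, so you can reproduce the same behaviour without kubenix. The commands below are just a sketch against the generated manifest.

# Fails while the namespace does not exist yet, because the server-side
# dry run is validated against the live cluster state:
kubectl diff -f result/manifest.json

# Once the namespace exists, the same diff goes through:
kubectl create namespace test-namespace
kubectl diff -f result/manifest.json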
This is actually the same problem @hall talked about here: https://github.com/hall/kubenix/issues/23#issuecomment-1586549955. And to quote upstream kubectl:
Yeah, this is hard, I have no clue how we can fix this properly, not without very complicated machinery.
I looked at kapp as an alternative to plain kubectl. Kapp only really works when you use its kapp.k14s.io labels, which seems unreasonable to ask of Kubenix users in my opinion. I think I will use it for my home lab though, as it is quite convenient :)
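For reference, deploying the same manifest with kapp would look something like this (a sketch; the app name homelab is made up):

# kapp applies the Namespace before the Pod on its own, but it also attaches
# its kapp.k14s.io tracking labels to everything it deploys.
kapp deploy -a homelab -f result/manifest.json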
So yeah, I don't think we should/can fix this in kubectl.
I would like to have Kubenix create namespaces, which I intend to have Kubenix deploy its resources into using kubernetes.namespace. However, if I do this, I get: Error from server (NotFound): namespaces "kubenix" not found.
A workaround I can think of is deploying the namespaces manually with kubectl and then using Kubenix for the rest. Is there perhaps a better way?
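Concretely, that workaround would be something like this (a sketch; kubenix is the namespace name from the error above):

# Create the target namespace out-of-band first...
kubectl create namespace kubenix
# ...then let Kubenix deploy the rest into it.
nix run .#kubenix.x86_64-linux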