kudobuilder / kuttl

KUbernetes Test TooL (kuttl)
https://kuttl.dev
Apache License 2.0

$NAMESPACE substitution does not happen with arbitrary resources #203

Open joejulian opened 4 years ago

joejulian commented 4 years ago

What happened: I tried to create a ClusterRoleBinding for the ServiceAccount created in the kuttl-provided namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuttlcrb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kuttlaccount
  namespace: $NAMESPACE

The resource was created exactly as printed, with $NAMESPACE for the namespace.

What you expected to happen: I had mistakenly read https://kuttl.dev/docs/testing/steps.html as saying that $NAMESPACE would be replaced with the test namespace in any of the preceding topics on that page.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

kensipe commented 4 years ago

kuttl was designed to work within a namespace... originally it created a test namespace for each test and then deleted that namespace. It has since evolved to allow running within a specified namespace, in which case it doesn't own that namespace. Put another way... kuttl was built to work on namespaced resources, and when used in this way it is best to leave the namespace off the resource... kuttl will do the right thing.

We haven't taken cluster-wide resources (as in this example) into consideration yet (from a design standpoint).
It is also worth pointing out that kuttl doesn't yet parse / augment the manifest runtime objects... we don't read or process the content (which means we won't "see" the $NAMESPACE variable). I have been thinking about adding labels... which means augmenting the runtime objects. This is going to take some thinking and some infrastructure buildout.

I re-read https://kuttl.dev/docs/testing/steps.html#running-commands. The references to $NAMESPACE, $PATH, etc. are under the running-commands section and are specific to that. Perhaps we need further clarification around that... it would certainly help to see an example on that page.
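
Something along these lines should already work through that command expansion (an untested sketch, reusing the kuttlcrb / kuttlaccount names from the report above; the cluster-scoped binding would still need to be cleaned up separately):

apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # $NAMESPACE is expanded in command entries, so the binding can reference
  # the ServiceAccount in the test namespace without any manifest substitution.
  - command: kubectl create clusterrolebinding kuttlcrb --clusterrole=cluster-admin --serviceaccount=$NAMESPACE:kuttlaccount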

@joejulian it would also help to better understand the use-case here... is this to "set up" a cluster, or is this for tests? Are you testing for security elements, or what is the "need" for a cluster role? I could use your thoughts on creating / managing cluster-wide resources.

current plan:

  1. Parse the runtime objects
  2. Add labels (https://github.com/kudobuilder/kuttl/issues/206)
  3. Look at this again

jannfis commented 3 years ago

To chime in here: having the possibility to treat the test YAMLs as templates, with some variable replacement, would be very beneficial for writing re-usable and easy-to-construct test cases.

I'm currently writing a test suite for the Argo CD Operator, and for a true end-to-end test we also need to test the behavior of the Operator's workloads (i.e. Argo CD in this scenario). Many of my tests involve creating an Argo CD custom resource (the Application resource) and asserting some of its behavior. This resource needs a destination namespace defined in its spec, e.g.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: some-app
  namespace: some-namespace # Which I would have to have previously created
spec:
  source:
    repoURL: https://github.com/some/repo
    path: ./some/path
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: some-namespace # This is the item where I'd need the kuttl test namespace

So the only way to test this currently is to create a custom namespace unique to the test, and then specify this namespace in the .metadata.namespace field of all resource YAMLs that get installed and asserted by the test. Finally, I have to delete this namespace in a TestStep as well.
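
The cleanup part alone currently needs a step roughly like this (a sketch, reusing the some-namespace placeholder from the example above):

apiVersion: kuttl.dev/v1beta1
kind: TestStep
delete:
  # Delete the manually created namespace (and everything in it) when this step runs.
  - apiVersion: v1
    kind: Namespace
    name: some-namespace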

This can become cumbersome and error-prone. If the above example could be written as:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: some-app
spec:
  source:
    repoURL: https://github.com/some/repo
    path: ./some/path
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: {{ env.NAMESPACE }} # Or only {{ NAMESPACE }}, not very opinionated about that

such a test would not deviate much from a standard test that uses the test namespace created (and deleted) by kuttl.
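
Until something like that exists, the closest I can get is rendering the manifest in a TestStep script before applying it (a rough sketch: application.yaml.tpl is a hypothetical template containing the Application manifest with $NAMESPACE placeholders, envsubst has to be available, and it assumes script entries see the same $NAMESPACE expansion that is documented for command entries):

apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # Expand the test namespace into the templated manifest, then apply the
  # rendered result into that same namespace.
  - script: NAMESPACE=$NAMESPACE envsubst < application.yaml.tpl | kubectl apply -n $NAMESPACE -f -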

so-jelly commented 3 years ago

I have another use case, specifically with Kubernetes resources, that I think is more relevant:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  template:
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: other-app
                namespaces:
                  - $NAMESPACE
                topologyKey: kubernetes.io/hostname
              weight: 10

kensipe commented 2 years ago

Great stuff... I believe we will need a refactor for the inclusion of labels, which may help with this as well. There are likely to be some challenges with namespaces IMO: it is possible that a manifest file is used in a test where we want to sub the namespace to the test namespace... but it could also be true that the intended namespace is expected to be exactly what is defined in the manifest. That said... variable substitution as outlined in the provided examples is a reasonable thing to expect.

We currently don't do much with the parsing of the manifest files and likely need to. The challenge there is the inconsistency between referencing a manifest file in a testsuite... and referencing a manifest file from a command. The command kubectl apply -f xyz.yml is outside the scope of kuttl's awareness. Worth some thinking on... it can likely be resolved with documentation. Thanks for driving this!

gberche-orange commented 2 years ago

Similar use case when testing Crossplane: we need to assert on cluster-wide (managed) resources whose names are not deterministic. They can be differentiated using the crossplane.io/claim-namespace label, whose value could be expressed as $NAMESPACE to be replaced with the namespace in which the kuttl test is running (and hence where the Crossplane XRC resource is created).

Currently, this limitation prevents two instances of the same kuttl test from running concurrently, as the assertion against the cluster-wide resource cannot target the instance related to each kuttl test execution.
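
For illustration, the assertion would need to express something like the following (a sketch only: xbuckets.example.org stands in for whatever managed resource kind the claim produces, and it assumes a kuttl version in which TestAssert supports commands):

apiVersion: kuttl.dev/v1beta1
kind: TestAssert
commands:
  # Fail unless at least one cluster-scoped managed resource carries the
  # claim-namespace label of this particular test run.
  - script: kubectl get xbuckets.example.org -l crossplane.io/claim-namespace=$NAMESPACE -o name | grep -q .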

I'm happy to share more details / a code sample if that would be useful.

https://github.com/kudobuilder/kuttl/pull/329 seems to be addressing my problem.