grafana / k6-operator

An operator for running distributed k6 tests.

Multi-Cluster Execution #240

Open pears-one opened 12 months ago

pears-one commented 12 months ago

Feature Description

I have a requirement to run a load test across more than one cluster. An update to the operator and CRD should make this possible, potentially with the addition of another CRD. I don't have an exact solution yet, but I think it would be doable and worth discussing here.

Suggested Solution (optional)

The K6 CRD could be updated to accept a new field, e.g. clusters:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
  namespace: k6
spec:
  parallelism: 2
  clusters:
    - name: trundle
      proportion: 50%
    - name: michu
      proportion: 50%
  script:
    configMap:
      name: archive
      file: archive.tar

This would start two pods, one in the trundle cluster and one in the michu cluster, each running half of the load (see the sketch below).
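For illustration only: assuming the operator maps each cluster's proportion onto k6's existing --execution-segment option, the runner job created in the trundle cluster might look roughly like this (names, image, and paths are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: k6-sample-trundle-1   # hypothetical runner name
  namespace: k6
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: k6
          image: grafana/k6:latest
          command:
            - k6
            - run
            - --execution-segment=0:1/2             # this pod runs the first half of the load
            - --execution-segment-sequence=0,1/2,1
            - /test/archive.tar
          volumeMounts:
            - name: script
              mountPath: /test
      volumes:
        - name: script
          configMap:
            name: archive

The pod in the michu cluster would get --execution-segment=1/2:1 instead.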

Roles will also need to be added on those clusters to allow the operator to create the test resources there (a sketch of such a Role follows the K6Cluster example below). The k6-operator's access to these clusters can be defined using a new custom resource, e.g., K6Cluster:

apiVersion: k6.io/v1alpha1
kind: K6Cluster
metadata:
  name: trundle
  namespace: k6
spec:
  namespace: k6
  kubeconfig:
    secretRef:
      name: trundle-kubeconfig
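For the service account behind that secret, a Role in the remote cluster's k6 namespace would have to permit whatever the operator creates for a test run. A rough sketch, assuming jobs, pods, configmaps, and services are needed (the exact rules would have to be checked against the operator's controllers):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k6-remote-runner   # hypothetical name
  namespace: k6
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["pods", "configmaps", "services"]
    verbs: ["create", "get", "list", "watch", "delete"]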

The kubeconfig secret will need to contain access information for the cluster:

apiVersion: v1
data:
  ca.crt: LS0tLS1C...0tLQo=
  namespace: azYK
  token: ZXlKaGJH...0NV9LY0NFX1E=
kind: Secret
metadata:
  name: trundle-kubeconfig
  namespace: k6
type: kubernetes.io/service-account-token
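One caveat worth noting: a service-account-token secret carries the CA certificate and token but not the API server address, so the operator would need to get that from somewhere else, e.g. a field on K6Cluster. For illustration, a kubeconfig assembled from these pieces might look like this (the server URL is a placeholder):

apiVersion: v1
kind: Config
clusters:
  - name: trundle
    cluster:
      server: https://trundle.example.com:6443        # placeholder API server address
      certificate-authority-data: LS0tLS1C...0tLQo=   # ca.crt from the secret, used as-is
contexts:
  - name: trundle
    context:
      cluster: trundle
      user: k6-operator
current-context: trundle
users:
  - name: k6-operator
    user:
      token: <base64-decoded token from the secret>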

When the k6-operator reconciles a new K6 resource, it will first check that a K6Cluster resource exists for each cluster named in the K6 resource. It will then create a client for each of those clusters so the necessary jobs can be created in them.

Obviously this would be a fairly big change to the operator and I may not have considered everything, but I'm happy to discuss in more detail.

Already existing or connected issues / PRs (optional)

No response

yorugac commented 11 months ago

Hi @evanfpearson, thanks for opening the issue!

Multi-cluster is an interesting use case to think through. It seems like something that might have a lot of caveats in implementation, and implications for the project as a whole. Could you please share more details about your specific use case?

At the moment, it seems to me it's preferable to try to solve this issue outside of k6-operator: i.e. it would be nice if we could re-use other Kubernetes tools and seek to integrate with them instead. The problem is, of course, that there is no 'one standard' way to set up multi-cluster, and everyone is using different approaches and tools. AFAIK, multi-cluster Kubernetes is a 'hot topic' at the moment, so perhaps we can hope for some changes in the foreseeable future as well, e.g. from the Kubernetes multicluster working group.

That said, we consider this issue out of scope for the k6-operator project at this point. It is hard to predict what the future holds, though. If the community has opinions on this topic, it'd be great to hear them.

pears-one commented 10 months ago

Hey, thanks for getting back. I appreciate this would have been a big change to the operator, but there's a smaller change that could work. Would it be possible to add a field to the k6 CRD allowing users to supply an option similar to k6's execution segment? It might need a different name to avoid confusion with the option that is passed to the CLI.

For example:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
  namespace: k6
spec:
  parallelism: 2
  proportion: 1/2
  script:
    configMap:
      name: archive
      file: archive.tar

This would create two pods in the cluster, and each pod would run 1/4 of the traffic (see the sketch below). The default proportion would be 1, so this should not cause any breaking changes.
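To illustrate the arithmetic (a sketch only, assuming proportion is divided evenly across the parallelism pods and mapped onto k6 execution segments):

# parallelism: 2, proportion: 1/2
# the cluster runs half of the global load, split over two pods:
#   pod 1 -> --execution-segment=0:1/4     (first quarter of the load)
#   pod 2 -> --execution-segment=1/4:1/2   (second quarter)
# a second cluster with its own K6 resource could cover the remaining 1/2:1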

yorugac commented 10 months ago

There's an open issue for this actually: https://github.com/grafana/k6-operator/issues/95

It hasn't received much feedback from the community though :shrug: I've just added some 'fresh' thoughts on the topic there. Please feel free to upvote it and/or comment there.