kube-burner / kube-burner-ocp

OpenShift integrations and workloads for kube-burner
https://kube-burner.github.io/kube-burner-ocp/
Apache License 2.0

Indexing jobSummary for index job #81

Closed vishnuchalla closed 2 months ago

vishnuchalla commented 3 months ago

Type of change

Description

Indexing jobSummary for the index job as well. Pre-requisite PR: https://github.com/kube-burner/kube-burner-ocp/pull/81

Related Tickets & Documents

Checklist before requesting a review

Testing

Tested and verified locally.

[
  {
    "timestamp": "2024-07-02T16:38:03Z",
    "endTimestamp": "2024-07-02T17:38:03Z",
    "elapsedTime": 3600,
    "uuid": "9c7870e1-9ba1-4e84-99f6-7364a42e6a1a",
    "metricName": "jobSummary",
    "jobConfig": {
      "name": "index"
    },
    "metadata": {
      "clusterName": "ols-perf-test-v6ntn",
      "clusterType": "self-managed",
      "k8sVersion": "v1.28.9+416ecaf",
      "masterNodesCount": 3,
      "masterNodesType": "m6i.xlarge",
      "ocpMajorVersion": "4.15",
      "ocpVersion": "4.15.15",
      "platform": "AWS",
      "region": "us-west-2",
      "sdnType": "OVNKubernetes",
      "totalNodes": 6,
      "workerNodesCount": 3,
      "workerNodesType": "m6i.xlarge"
    },
    "version": "index-jobsummary@4dcd5100d911f3c63567490936c4c8cf80a0b190",
    "passed": true
  }
]
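As a quick sanity check, a jobSummary document like the one above can be validated with a short script. The required-key list below is an assumption drawn from the sample in this thread, not the authoritative kube-burner schema, and `check_job_summary` is a hypothetical helper:

```python
import json

# Top-level keys we expect in a jobSummary document, taken from the sample
# above. Illustrative only, not the authoritative kube-burner schema.
REQUIRED_KEYS = {"timestamp", "endTimestamp", "elapsedTime", "uuid",
                 "metricName", "jobConfig", "metadata", "version", "passed"}

def check_job_summary(doc: dict) -> list:
    """Return the sorted list of expected top-level keys missing from a doc."""
    return sorted(REQUIRED_KEYS - doc.keys())

# Trimmed-down version of the sample payload above.
sample = json.loads("""
[{"timestamp": "2024-07-02T16:38:03Z", "endTimestamp": "2024-07-02T17:38:03Z",
  "elapsedTime": 3600, "uuid": "9c7870e1-9ba1-4e84-99f6-7364a42e6a1a",
  "metricName": "jobSummary", "jobConfig": {"name": "index"},
  "metadata": {"platform": "AWS", "totalNodes": 6},
  "version": "index-jobsummary@4dcd5100d911f3c63567490936c4c8cf80a0b190",
  "passed": true}]
""")

assert check_job_summary(sample[0]) == []
```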
vishnuchalla commented 3 months ago

@rsevilla87 I have just tested this change locally. clusterMetadata is not getting published, which is expected, but the jobSummary looks like this:

[
  {
    "timestamp": "2024-06-24T20:35:51.20547746Z",
    "endTimestamp": "2024-06-24T20:37:51.545487311Z",
    "churnStartTimestamp": "2024-06-24T20:36:47.289255654Z",
    "churnEndTimestamp": "2024-06-24T20:37:51.545475464Z",
    "elapsedTime": 120,
    "uuid": "06c70fd5-1851-47b0-8246-6360af50a5a1",
    "metricName": "jobSummary",
    "jobConfig": {
      "jobIterations": 5,
      "name": "cluster-density-v2",
      "jobType": "create",
      "qps": 5,
      "burst": 5,
      "namespace": "cluster-density-v2",
      "maxWaitTimeout": 14400000000000,
      "waitForDeletion": true,
      "waitWhenFinished": true,
      "cleanup": true,
      "namespacedIterations": true,
      "iterationsPerNamespace": 1,
      "verifyObjects": true,
      "errorOnVerify": true,
      "preLoadImages": true,
      "preLoadPeriod": 15000000000,
      "churn": true,
      "churnPercent": 10,
      "churnDuration": 60000000000,
      "churnDelay": 5000000000,
      "churnDeletionStrategy": "default"
    },
    "metadata": {
      "cloud-bulldozer": true,
      "k8sVersion": "v1.28.9+416ecaf",
      "ocpMajorVersion": "4.15",
      "ocpVersion": "4.15.15",
      "platform": "AWS",
      "sdnType": "OVNKubernetes",
      "totalNodes": 6
    },
    "version": "index-jobsummary@957f5770f8d4d38f2e7b9c01bf927a1b2cd3e61b",
    "passed": true
  }
]

which is missing some of the critical data that we previously used to get as part of clusterMetadata, for example:

    "platform": "AWS",
    "clusterType": "self-managed",
    "ocpVersion": "4.14.3",
    "ocpMajorVersion": "4.14",
    "k8sVersion": "v1.27.6+b49f9d1",
    "masterNodesType": "m6i.2xlarge",
    "masterNodesCount": 3,
    "workerNodesCount": 3,
    "totalNodes": 3,
    "sdnType": "OVNKubernetes",
    "clusterName": "ocp-ci-hq68v",
    "region": "us-west-2",
    "uuid": "55cfcc86-ad8a-447b-ac4d-dce8870bd0a5",
    "benchmark": "pvc-density",

I am not sure if this is intentional, but it is going to impact our Grafana dashboards. Or maybe we wish to capture these fields in the ocp-wrapper itself? cc: @jtaleric @afcollins
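To make the gap concrete, here is a quick diff of the metadata keys in the two payloads shown in this thread (both key lists are copied from the samples above):

```python
# Metadata keys present in the new jobSummary payload (from the sample above).
new_metadata = {"cloud-bulldozer", "k8sVersion", "ocpMajorVersion",
                "ocpVersion", "platform", "sdnType", "totalNodes"}

# Keys we previously got as part of clusterMetadata (from the old sample above).
old_cluster_metadata = {"platform", "clusterType", "ocpVersion",
                        "ocpMajorVersion", "k8sVersion", "masterNodesType",
                        "masterNodesCount", "workerNodesCount", "totalNodes",
                        "sdnType", "clusterName", "region", "uuid", "benchmark"}

# Fields that dashboards relying on clusterMetadata would no longer see.
missing = sorted(old_cluster_metadata - new_metadata)
print(missing)
# ['benchmark', 'clusterName', 'clusterType', 'masterNodesCount',
#  'masterNodesType', 'region', 'uuid', 'workerNodesCount']
```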


paigerube14 commented 3 months ago

Are we able to add some more cluster details that we currently have as part of the e2e-benchmarking index util to this work? That would let us use this instead of duplicating the work of gathering all these details when posting to the perf_scale_ci index.

A few examples: these are fields we have found helpful/needed for different configurations of OCP: https://github.com/cloud-bulldozer/e2e-benchmarking/blob/bf5ac71356e1f128f35cb231ad67e39729837345/utils/index.sh#L183C1-L190C53

vishnuchalla commented 3 months ago

Are we able to add some more cluster details that we currently have as part of the e2e-benchmarking index util to this work? That would let us use this instead of duplicating the work of gathering all these details when posting to the perf_scale_ci index.

A few examples: these are fields we have found helpful/needed for different configurations of OCP: https://github.com/cloud-bulldozer/e2e-benchmarking/blob/bf5ac71356e1f128f35cb231ad67e39729837345/utils/index.sh#L183C1-L190C53

Good idea. Yes, we should be able to get rid of the index.sh script in e2e by running the index sub-command after the workload. Looking forward to hearing from others as well. cc: @rsevilla87 @jtaleric @afcollins @krishvoor @chentex @shashank-boyapally

rsevilla87 commented 3 months ago

Are we able to add some more cluster details that we currently have as part of the e2e-benchmarking index util to this work? That would let us use this instead of duplicating the work of gathering all these details when posting to the perf_scale_ci index. A few examples: these are fields we have found helpful/needed for different configurations of OCP: https://github.com/cloud-bulldozer/e2e-benchmarking/blob/bf5ac71356e1f128f35cb231ad67e39729837345/utils/index.sh#L183C1-L190C53

Good idea. Yes, we should be able to get rid of the index.sh script in e2e by running the index sub-command after the workload. Looking forward to hearing from others as well. cc: @rsevilla87 @jtaleric @afcollins @krishvoor @chentex @shashank-boyapally

Those fields should be scraped from the go-commons ocp-metadata package; I'll open an RFE in that repo.

vishnuchalla commented 3 months ago

Are we able to add some more cluster details that we currently have as part of the e2e-benchmarking index util to this work? That would let us use this instead of duplicating the work of gathering all these details when posting to the perf_scale_ci index. A few examples: these are fields we have found helpful/needed for different configurations of OCP: https://github.com/cloud-bulldozer/e2e-benchmarking/blob/bf5ac71356e1f128f35cb231ad67e39729837345/utils/index.sh#L183C1-L190C53

Good idea. Yes, we should be able to get rid of the index.sh script in e2e by running the index sub-command after the workload. Looking forward to hearing from others as well. cc: @rsevilla87 @jtaleric @afcollins @krishvoor @chentex @shashank-boyapally

Those fields should be scraped from the go-commons ocp-metadata package; I'll open an RFE in that repo.

So the idea is to use this sub-command instead of go-commons because, at present, go-commons is a library, not a CLI. If we made it a CLI we could use it across the tools; otherwise, we can use this option to capture additional metadata. But adding more metadata fields to the go-commons code might affect all the other tools that import go-commons as a library to publish metadata, and could lead to replication of data that we might not even use.
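The library-vs-CLI distinction above can be sketched like this. Note this is a Python analogue purely for illustration (go-commons is a Go library), and the function name, flag, and hardcoded metadata are all hypothetical:

```python
import argparse
import json

def get_cluster_metadata() -> dict:
    """Stand-in for a metadata lookup a library like go-commons would provide.

    In real tooling this would query the cluster; it is hardcoded here
    (values borrowed from the samples in this thread) to stay self-contained.
    """
    return {"platform": "AWS", "ocpVersion": "4.15.15", "totalNodes": 6}

def main(argv=None) -> int:
    # A thin CLI wrapper: once a library exposes an entry point like this,
    # any tool can shell out to it instead of importing the library, which
    # is the reuse the discussion above is after.
    parser = argparse.ArgumentParser(description="Print cluster metadata as JSON")
    parser.add_argument("--indent", type=int, default=2)  # hypothetical flag
    args = parser.parse_args(argv)
    print(json.dumps(get_cluster_metadata(), indent=args.indent))
    return 0

if __name__ == "__main__":
    main()
```

Tools that only need the data as text could then consume the sub-command's JSON output, while Go tools could keep importing the library directly.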

vishnuchalla commented 2 months ago

@paigerube14 Any concerns before I merge this PR?

vishnuchalla commented 2 months ago

Are we able to add some more cluster details that we currently have as part of the e2e-benchmarking index util to this work? That would let us use this instead of duplicating the work of gathering all these details when posting to the perf_scale_ci index. A few examples: these are fields we have found helpful/needed for different configurations of OCP: https://github.com/cloud-bulldozer/e2e-benchmarking/blob/bf5ac71356e1f128f35cb231ad67e39729837345/utils/index.sh#L183C1-L190C53

Good idea. Yes, we should be able to get rid of the index.sh script in e2e by running the index sub-command after the workload. Looking forward to hearing from others as well. cc: @rsevilla87 @jtaleric @afcollins @krishvoor @chentex @shashank-boyapally

Those fields should be scraped from the go-commons ocp-metadata package; I'll open an RFE in that repo.

So the idea is to use this sub-command instead of go-commons because, at present, go-commons is a library, not a CLI. If we made it a CLI we could use it across the tools; otherwise, we can use this option to capture additional metadata. But adding more metadata fields to the go-commons code might affect all the other tools that import go-commons as a library to publish metadata, and could lead to replication of data that we might not even use.

Will bring this up in the automation meeting to decide on next steps.