ComplianceAsCode / compliance-operator

Operator providing Kubernetes cluster compliance checks
Apache License 2.0

OCPBUGS-33067: Don't fatal error when filter cannot iterate #509

Closed: yuumasato closed this pull request 3 months ago

yuumasato commented 3 months ago

A jq filter may expect to iterate over a list of results, but it can happen that no result is returned. Let's not raise a fatal error when this happens; a minimal standalone reproduction is sketched after the logs below.

In a HyperShift environment, when no MachineConfig exists, the following error occurs:

$ oc logs pod/ocp4-pci-dss-api-checks-pod --all-containers
...
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Error while filtering: cannot iterate over: null
debug: Persisting warnings to output file
FATAL:Error fetching resources: couldn't filter '{
  "metadata": {},
  "items": null 
}': cannot iterate over: null

After creating a dummy MachineConfig, the URI fetching succeeds:

$ oc logs pod/ocp4-pci-dss-api-checks-pod --all-containers
...
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.config.storage.luks[0].clevis != null)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
Fetching URI: '/apis/machine.openshift.io/v1beta1/machinesets?limit=500'
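
A minimal standalone sketch of the failure mode, assuming the resource fetcher applies its filters with the gojq Go library (an assumption consistent with the "cannot iterate over: null" wording above, which is gojq's error string). It is illustrative only, not the operator's actual code:

```go
// Reproduce the "cannot iterate over: null" error by running the rendered-MC
// filter against an API response whose .items field is null.
package main

import (
	"encoding/json"
	"fmt"

	"github.com/itchyny/gojq"
)

func main() {
	// The response seen on HyperShift when no MachineConfig exists.
	var body interface{}
	_ = json.Unmarshal([]byte(`{"metadata": {}, "items": null}`), &body)

	query, err := gojq.Parse(`[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)`)
	if err != nil {
		panic(err)
	}

	iter := query.Run(body)
	for {
		v, ok := iter.Next()
		if !ok {
			break
		}
		// Runtime errors are yielded as values implementing error.
		if e, isErr := v.(error); isErr {
			fmt.Println("filter error:", e) // expected: cannot iterate over: null
			continue
		}
		fmt.Println("filter value:", v)
	}
}
```

Running this prints the same "cannot iterate over: null" error shown in the api-checks pod log above, because `.items[]` cannot iterate over a null `items` field.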
Vincent056 commented 3 months ago
Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"94867"},"items":[]}
': cannot iterate over: null

I wonder if we should exclude this error; it seems like we should fail here, because items should never be empty.

xiaojiey commented 3 months ago

/retest-required

xiaojiey commented 3 months ago

/hold for test

xiaojiey commented 3 months ago

Still got the same failure with and without https://github.com/ComplianceAsCode/content/pull/11906. Verified on a HyperShift hosted cluster + payload 4.16.0-0.nightly-2024-04-26-145258 + the CO code from https://github.com/ComplianceAsCode/compliance-operator/pull/509, with and without the code from https://github.com/ComplianceAsCode/content/pull/11906:

  1. Create an SSB with the upstream-ocp4-pci-dss profile (from https://github.com/ComplianceAsCode/content/pull/11906):

    % cat ssb_pci_u.yaml
    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: ocp4-pci-dss-u
      namespace: openshift-compliance
    profiles:
    - name: upstream-ocp4-pci-dss
      kind: Profile
      apiGroup: compliance.openshift.io/v1alpha1
    settingsRef:
      name: default
      kind: ScanSetting
      apiGroup: compliance.openshift.io/v1alpha1
    % oc apply -f ~/func/ssb_pci_u.yaml 
    scansettingbinding.compliance.openshift.io/ocp4-pci-dss-u created
    % oc get suite -w
    NAME             PHASE     RESULT
    ocp4-pci-dss-d   RUNNING   NOT-AVAILABLE
    ocp4-pci-dss-u   RUNNING   NOT-AVAILABLE
    ^C
    % oc get pod
    NAME                                                       READY   STATUS                  RESTARTS      AGE
    compliance-operator-6bcb4bf785-4gwmj                       1/1     Running                 0             33m
    ocp4-openshift-compliance-pp-784dc44c8c-hn5bb              1/1     Running                 0             33m
    rhcos4-openshift-compliance-pp-794d6bc5b5-4mlcm            1/1     Running                 0             33m
    upstream-ocp4-openshift-compliance-pp-578d4789f9-qd54z     1/1     Running                 0             8m29s
    upstream-ocp4-pci-dss-api-checks-pod                       0/2     Init:CrashLoopBackOff   5 (18s ago)   3m59s
    upstream-ocp4-pci-dss-rs-8698d97cf5-bj6d2                  1/1     Running                 0             3m59s
    upstream-rhcos4-openshift-compliance-pp-5ffbc9d7ff-vhgmm   1/1     Running                 0             8m28s
    % oc logs pod/upstream-ocp4-pci-dss-api-checks-pod --all-containers
    ...
    Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
    FATAL:Error fetching resources: couldn't filter '{
    "metadata": {},
    "items": null
    }': cannot iterate over: null
    Error from server (BadRequest): container "log-collector" in pod "upstream-ocp4-pci-dss-api-checks-pod" is waiting to start: PodInitializing
  2. Create an SSB with the ocp4-pci-dss profile:

    % cat ssb_pci_d.yaml
    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: ocp4-pci-dss-d
      namespace: openshift-compliance
    profiles:
    - name: ocp4-pci-dss
      kind: Profile
      apiGroup: compliance.openshift.io/v1alpha1
    settingsRef:
      name: default
      kind: ScanSetting
      apiGroup: compliance.openshift.io/v1alpha1
    % oc apply -f ssb_pci_d.yaml  
    scansettingbinding.compliance.openshift.io/ocp4-pci-dss-d created
    % oc get suite
    NAME             PHASE     RESULT
    ocp4-pci-dss-d   RUNNING   NOT-AVAILABLE
    ocp4-pci-dss-u   RUNNING   NOT-AVAILABLE
    % oc get pod
    NAME                                                       READY   STATUS                  RESTARTS        AGE
    compliance-operator-6bcb4bf785-4gwmj                       1/1     Running                 0               43m
    ocp4-openshift-compliance-pp-784dc44c8c-hn5bb              1/1     Running                 0               43m
    ocp4-pci-dss-api-checks-pod                                0/2     Init:Error              1 (16s ago)     23s
    ocp4-pci-dss-rs-5fd8b89b49-t5rw9                           1/1     Running                 0               23s
    rhcos4-openshift-compliance-pp-794d6bc5b5-4mlcm            1/1     Running                 0               43m
    upstream-ocp4-openshift-compliance-pp-578d4789f9-qd54z     1/1     Running                 0               18m
    upstream-ocp4-pci-dss-api-checks-pod                       0/2     Init:CrashLoopBackOff   7 (2m24s ago)   14m
    upstream-ocp4-pci-dss-rs-8698d97cf5-bj6d2                  1/1     Running                 0               14m
    upstream-rhcos4-openshift-compliance-pp-5ffbc9d7ff-vhgmm   1/1     Running                 0               18m
    % oc logs pod/ocp4-pci-dss-api-checks-pod --all-containers
    ...
    Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
    FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"156778"},"items":[]}
    ': cannot iterate over: null
    Error from server (BadRequest): container "log-collector" in pod "ocp4-pci-dss-api-checks-pod" is waiting to start: PodInitializing
yuumasato commented 3 months ago
> Fetching URI: '/api/v1/namespaces/-/pods?labelSelector=app%3Dkube-controller-manager'
> FATAL:Error fetching resources: couldn't filter '{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"94867"},"items":[]}
> ': cannot iterate over: null
>
> I wonder if we should exclude this error; it seems like we should fail here, because items should never be empty.

@Vincent056 But the error happens when trying to fetch MachineConfigs. On 4.16 HyperShift there are no MachineConfigs at all, neither numbered nor rendered ones.

Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'                                                                                      
debug: Applying filter '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)' to path '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Error while filtering: cannot iterate over: null                        
debug: Persisting warnings to output file                                      
FATAL:Error fetching resources: couldn't filter '{                             
  "metadata": {},            
  "items": null                                                                                                                                                
}': cannot iterate over: null          
Error from server (BadRequest): container "log-collector" in pod "upstream-ocp4-pci-dss-api-checks-pod" is waiting to start: PodInitializing

@xiaojiey Do you know whether any MachineConfig exists on 4.15 or older? Also, do you know if CO 1.4.0 works on 4.15 HyperShift?

BhargaviGudi commented 3 months ago

@yuumasato Compliance Operator v1.4.0 works as expected on a 4.15 HyperShift hosted cluster:

$ oc get csv
NAME                         DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v1.4.0   Compliance Operator   1.4.0                Succeeded
$ oc get sub
NAME                  PACKAGE               SOURCE             CHANNEL
compliance-operator   compliance-operator   redhat-operators   stable
$ oc compliance bind -N test profile/ocp4-pci-dss profile/ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get scan
NAME                       PHASE   RESULT
ocp4-pci-dss               DONE    NON-COMPLIANT
ocp4-pci-dss-node-worker   DONE    NON-COMPLIANT
$ oc get pods
NAME                                             READY   STATUS    RESTARTS   AGE
compliance-operator-df9b877bb-s4q75              1/1     Running   0          6m37s
ocp4-openshift-compliance-pp-c9b54f7fc-95zsf     1/1     Running   0          6m29s
rhcos4-openshift-compliance-pp-89dbf5867-7pj7n   1/1     Running   0          6m29s
yuumasato commented 3 months ago

@BhargaviGudi I have updated the patch.

Also, could you check if CO 1.4.0 works as expected on OCP 4.16 too? Thanks a lot.

openshift-ci-robot commented 3 months ago

@yuumasato: This pull request references Jira Issue OCPBUGS-33067, which is invalid:

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

BhargaviGudi commented 3 months ago

> @BhargaviGudi I have updated the patch.
>
> Also, could you check if CO 1.4.0 works as expected on OCP 4.16 too? Thanks a lot.

@yuumasato The issue is reproducible with 4.16.0-0.nightly-2024-05-01-111315 + compliance-operator.v1.4.0 on a HyperShift hosted cluster:

$ oc get csv
NAME                         DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v1.4.0   Compliance Operator   1.4.0                Succeeded
$ oc compliance bind -N test profile/ocp4-pci-dss profile/ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get pods
NAME                                             READY   STATUS                  RESTARTS      AGE
compliance-operator-df9b877bb-fj6df              1/1     Running                 0             7m42s
ocp4-openshift-compliance-pp-c9b54f7fc-tjt5r     1/1     Running                 0             7m33s
ocp4-pci-dss-api-checks-pod                      0/2     Init:CrashLoopBackOff   4 (30s ago)   2m24s
ocp4-pci-dss-rs-7c458655c-v26ll                  1/1     Running                 0             2m24s
rhcos4-openshift-compliance-pp-89dbf5867-ww5bz   1/1     Running                 0             7m33s

However, the issue is not observed on a normal cluster.

BhargaviGudi commented 3 months ago

Verification passed with 4.16.0-0.nightly-2024-05-01-111315 + the compliance-operator code from https://github.com/ComplianceAsCode/compliance-operator/pull/509, with and without the https://github.com/ComplianceAsCode/content/pull/11906 code. Verification was done on both a normal cluster and a HyperShift hosted cluster.

Create an SSB with the upstream-ocp4-pci-dss and upstream-ocp4-pci-dss-node profiles (from https://github.com/ComplianceAsCode/content/pull/11906):

$ oc compliance bind -N test profile/upstream-ocp4-pci-dss profile/upstream-ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get suite
NAME   PHASE   RESULT
test   DONE    NON-COMPLIANT
$ oc get scan
NAME                                PHASE   RESULT
upstream-ocp4-pci-dss               DONE    NON-COMPLIANT
upstream-ocp4-pci-dss-node-master   DONE    COMPLIANT
upstream-ocp4-pci-dss-node-worker   DONE    COMPLIANT
$ oc get pods
NAME                                                       READY   STATUS    RESTARTS      AGE
compliance-operator-6c47bf85f9-lxjfx                       1/1     Running   1 (59m ago)   59m
ocp4-openshift-compliance-pp-54fc68479-5nxtl               1/1     Running   0             59m
rhcos4-openshift-compliance-pp-b6df7b65c-cb6gs             1/1     Running   0             59m
upstream-ocp4-openshift-compliance-pp-58664766df-bghpv     1/1     Running   0             7m48s
upstream-rhcos4-openshift-compliance-pp-8576685445-xkft8   1/1     Running   0             7m46s
$ oc get ccr -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL  
NAME                                                          STATUS   SEVERITY
upstream-ocp4-pci-dss-api-server-encryption-provider-cipher   FAIL     medium
upstream-ocp4-pci-dss-audit-profile-set                       FAIL     medium

Create an SSB with the ocp4-pci-dss and ocp4-pci-dss-node profiles:

$ oc compliance bind -N test profile/ocp4-pci-dss profile/ocp4-pci-dss-node
Creating ScanSettingBinding test
$ oc get suite
NAME   PHASE   RESULT
test   DONE    NON-COMPLIANT
$ oc get pods
NAME                                             READY   STATUS    RESTARTS        AGE
compliance-operator-6c47bf85f9-lxjfx             1/1     Running   1 (2m54s ago)   2m59s
ocp4-openshift-compliance-pp-54fc68479-5nxtl     1/1     Running   0               2m52s
rhcos4-openshift-compliance-pp-b6df7b65c-cb6gs   1/1     Running   0               2m52s
$ oc get ccr -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL  
NAME                                                 STATUS   SEVERITY
ocp4-pci-dss-api-server-encryption-provider-cipher   FAIL     medium
ocp4-pci-dss-audit-profile-set                       FAIL     medium
yuumasato commented 3 months ago

Thank you for testing @BhargaviGudi

For visibility, I'm posting the warnings that are added to the scan; a sketch of this warning-instead-of-fatal handling follows at the end of this comment:

$ oc get scan -oyaml upstream-ocp4-pci-dss
apiVersion: compliance.openshift.io/v1alpha1                            
kind: ComplianceScan                                                    
metadata:                              
...
  warnings: |-
    could not fetch /api/v1/namespaces/openshift-kube-controller-manager/configmaps/config: configmaps "config" not found
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found
    couldn't filter '{
      "metadata": {},
      "items": null
    }': Skipping empty filter result from '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)': no value was returned from the filter
    couldn't filter '{
      "metadata": {},
      "items": null
    }': Skipping empty filter result from '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.config.storage.luks[0].clevis != null)': no value was returned from the filter
    could not fetch /apis/machine.openshift.io/v1beta1/machinesets?limit=500: the server could not find the requested resource

In OCP 4.16 HyperShift, no MachineConfig is available or visible:

$ oc get --raw /api/v1/namespaces/openshift-kube-apiserver/configmaps/config              
Error from server (NotFound): configmaps "config" not found
$ oc get mc
No resources found
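
A minimal sketch of the kind of handling this change aims for, using hypothetical names (`runFilter`, `errEmptyFilterResult`) rather than the operator's actual code: when a filter yields no values, return a sentinel error that the caller persists as a scan warning, mirroring the "Skipping empty filter result from '...': no value was returned from the filter" text above, instead of aborting with FATAL.

```go
// Sketch: downgrade an empty filter result to a warning instead of a fatal error.
package main

import (
	"encoding/json"
	"errors"
	"fmt"

	"github.com/itchyny/gojq"
)

// errEmptyFilterResult marks filters that produced no values for a resource.
var errEmptyFilterResult = errors.New("no value was returned from the filter")

func runFilter(obj interface{}, expr string) ([]interface{}, error) {
	query, err := gojq.Parse(expr)
	if err != nil {
		return nil, fmt.Errorf("invalid filter '%s': %w", expr, err)
	}
	var results []interface{}
	iter := query.Run(obj)
	for {
		v, ok := iter.Next()
		if !ok {
			break
		}
		if _, isErr := v.(error); isErr {
			// e.g. "cannot iterate over: null" when .items is null; treat it
			// the same as an empty result instead of failing hard.
			continue
		}
		results = append(results, v)
	}
	if len(results) == 0 {
		// The message mirrors the warning text persisted on the scan above.
		return nil, fmt.Errorf("Skipping empty filter result from '%s': %w", expr, errEmptyFilterResult)
	}
	return results, nil
}

func main() {
	var body interface{}
	_ = json.Unmarshal([]byte(`{"metadata": {}, "items": null}`), &body)

	_, err := runFilter(body, `[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$"))] | map(.spec.fips == true)`)
	if errors.Is(err, errEmptyFilterResult) {
		// A fetcher built this way would record the message as a scan warning
		// and continue, rather than exiting with FATAL.
		fmt.Println("warning:", err)
		return
	}
	if err != nil {
		panic(err)
	}
}
```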
rhmdnd commented 3 months ago

/test e2e-aws-serial

yuumasato commented 3 months ago

For comparison, on 4.15 HyperShift:

CO debug logs:

Fetching URI: '/apis/machineconfiguration.openshift.io/v1/machineconfigs'
debug: Encountered non-fatal error to be persisted in the scan: failed to list MachineConfigs: failed to get API group resources: unable to retrieve the complete list of server APIs: machineconfiguration.openshift.io/v1: the server could not find the requested resource
$ oc get scan -oyaml upstream-ocp4-pci-dss
apiVersion: compliance.openshift.io/v1alpha1                            
kind: ComplianceScan                                                    
metadata:                              
...
  warnings: |-
    could not fetch /api/v1/namespaces/openshift-kube-apiserver/configmaps/config: configmaps "config" not found                                               
$ oc get --raw /api/v1/namespaces/openshift-kube-apiserver/configmaps/config
Error from server (NotFound): configmaps "config" not found
$ oc get mc
error: the server doesn't have a resource type "mc"
$ oc create -f ~/openshift/co/objects/machineconfig.yaml
error: resource mapping not found for name: "50-infra" namespace: "" from "/home/wsato/openshift/co/objects/machineconfig.yaml": no matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
ensure CRDs are installed first
yuumasato commented 3 months ago

In conclusion, on OCP 4.16 HyperShift CO obtains a different response for the URI /apis/machineconfiguration.openshift.io/v1/machineconfigs than it did on OCP 4.15 HyperShift.

While on 4.15 CO got a response saying it failed to list the MachineConfigs, on 4.16 CO gets a list with no MachineConfigs.
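
To illustrate the two behaviours, here is a sketch using a plain client-go dynamic client; this is only an illustration of the API difference, not how the operator's URI fetcher is implemented:

```go
// Compare 4.15 vs 4.16 HyperShift behaviour when listing MachineConfigs.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "machineconfiguration.openshift.io",
		Version:  "v1",
		Resource: "machineconfigs",
	}

	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	switch {
	case err != nil:
		// 4.15 HyperShift behaviour: the machineconfiguration API group is not
		// served, so the request itself fails.
		fmt.Println("list failed:", err)
	case len(list.Items) == 0:
		// 4.16 HyperShift behaviour: the group exists but no MachineConfig is
		// visible, which surfaces as '"items": null' in the raw fetched JSON.
		fmt.Println("no MachineConfigs visible to the scan")
	default:
		fmt.Println("found", len(list.Items), "MachineConfigs")
	}
}
```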

yuumasato commented 3 months ago

/test all

rhmdnd commented 3 months ago

/test e2e-aws-parallel

openshift-ci[bot] commented 3 months ago

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rhmdnd, Vincent056, yuumasato

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
- ~~[OWNERS](https://github.com/ComplianceAsCode/compliance-operator/blob/master/OWNERS)~~ [Vincent056,rhmdnd]

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
rhmdnd commented 3 months ago

/jira refresh

openshift-ci-robot commented 3 months ago

@rhmdnd: This pull request references Jira Issue OCPBUGS-33067, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug:
* bug is open, matching expected state (open)
* bug target version (4.16.0) matches configured target version for branch (4.16.0)
* bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact: /cc @xiaojiey

openshift-ci-robot commented 3 months ago

@yuumasato: This pull request references Jira Issue OCPBUGS-33067, which is valid.

3 validation(s) were run on this bug:
* bug is open, matching expected state (open)
* bug target version (4.16.0) matches configured target version for branch (4.16.0)
* bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact: /cc @xiaojiey

rhmdnd commented 3 months ago

Adding qe-approved since @BhargaviGudi verified the change.

GroceryBoyJr commented 3 months ago

/label docs-approved

openshift-ci-robot commented 3 months ago

@yuumasato: Jira Issue OCPBUGS-33067: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-33067 has been moved to the MODIFIED state.
