yitzhak12 opened 6 months ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 30 days if no further activity occurs.
@yitzhak12 please update on the progress
Failure on ROSA HCP with m5.2xlarge machines.
We need to add a block like the following, so the expected MDS cache memory limit is adjusted for these smaller worker instance types:

if config.ENV_DATA["worker_instance_type"] in ["m5.2xlarge", "m5.xlarge", "m5.large"]:
    expected_mds_value = 1073741824
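
For illustration, here is a minimal sketch of how the expected value could be selected from the worker instance type. The helper name, the constant names, and the exact set of "small" instance types are assumptions for this sketch, not existing ocs-ci code:

# Sketch only: helper name and the set of "small" instance types are assumptions.
SMALL_WORKER_INSTANCE_TYPES = {"m5.large", "m5.xlarge", "m5.2xlarge"}

def get_expected_mds_cache_memory_limit(env_data):
    """Return the mds_cache_memory_limit (in bytes) expected for this deployment."""
    if env_data.get("worker_instance_type") in SMALL_WORKER_INSTANCE_TYPES:
        return 1073741824  # 1 GiB on lower-requirement worker instance types
    return 3221225472  # 3 GiB, the value the test currently asserts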
Logs from the failure:
[2024-08-14T19:30:42.833Z] tests/functional/z_cluster/test_ceph_default_values_check.py::TestCephDefaultValuesCheck::test_check_mds_cache_memory_limit
[2024-08-14T19:30:42.833Z] -------------------------------- live log setup --------------------------------
[2024-08-14T19:30:42.833Z] 15:30:42 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: dosypenk-OCS4-16-Downstream-OCP4-16-ROSA_HCP-MANAGED-1AZ-RHCOS-0M-3W-tier1
[2024-08-14T19:30:42.833Z] 15:30:42 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: dosypenk-OCS4-16-Downstream-OCP4-16-ROSA_HCP-MANAGED-1AZ-RHCOS-0M-3W-tier1
[2024-08-14T19:30:42.833Z] 15:30:42 - MainThread - ocs_ci.utility.utils - INFO - Retrieving the authentication config dictionary
[2024-08-14T19:30:42.833Z] 15:30:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n odf-storage get pods -o name
[2024-08-14T19:30:43.754Z] 15:30:43 - MainThread - ocs_ci.ocs.utils - INFO - pod name match found appending rook-ceph-tools-75f778d7b4-cdqtg
[2024-08-14T19:30:43.754Z] 15:30:43 - MainThread - ocs_ci.ocs.utils - INFO - Ceph toolbox already exists, skipping
[2024-08-14T19:30:43.754Z] 15:30:43 - MainThread - tests.conftest - INFO - All logs located at /home/jenkins/current-cluster-dir/logs/ocs-ci-logs-1723663710
[2024-08-14T19:30:44.012Z] 15:30:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: /home/jenkins/bin/oc version --client -o json
[2024-08-14T19:30:44.012Z] 15:30:43 - MainThread - ocs_ci.utility.utils - INFO - OpenShift Client version: None
[2024-08-14T19:30:44.012Z] 15:30:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get csv -n odf-storage -o yaml
[2024-08-14T19:30:46.536Z] 15:30:45 - MainThread - ocs_ci.ocs.version - INFO - collecting ocp version
[2024-08-14T19:30:46.536Z] 15:30:45 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get clusterversion version -o yaml
[2024-08-14T19:30:46.536Z] 15:30:46 - MainThread - ocs_ci.ocs.version - INFO - collecting ocs version
[2024-08-14T19:30:46.536Z] 15:30:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get namespace -o yaml
[2024-08-14T19:30:47.458Z] 15:30:47 - MainThread - ocs_ci.ocs.version - INFO - found storage namespaces ['openshift-cluster-storage-operator', 'openshift-kube-storage-version-migrator', 'openshift-kube-storage-version-migrator-operator']
[2024-08-14T19:30:47.458Z] 15:30:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-cluster-storage-operator get pod -n openshift-cluster-storage-operator -o yaml
[2024-08-14T19:30:48.016Z] 15:30:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-kube-storage-version-migrator get pod -n openshift-kube-storage-version-migrator -o yaml
[2024-08-14T19:30:48.576Z] 15:30:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-kube-storage-version-migrator-operator get pod -n openshift-kube-storage-version-migrator-operator -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .spec.channel: stable-4.16
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .status.desired.version: 4.16.6
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .status.desired.image: quay.io/openshift-release-dev/ocp-release@sha256:e4102eb226130117a0775a83769fe8edb029f0a17b6cbca98a682e3f1225d6b7
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-cluster-storage-operator
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-kube-storage-version-migrator
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ddf3eeefd6123e6ab0b2476da5ea86db7577b6bb09a3c535bade3e080699f8f {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ddf3eeefd6123e6ab0b2476da5ea86db7577b6bb09a3c535bade3e080699f8f'}
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-kube-storage-version-migrator-operator
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:afbe821b2d7747a3274c8a1affc9ad2aa75eb3d530563758d44da715b354f122 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d465c84ab064abb7d0ccee2f29ff1dd7a84f9c9e0c31c16f265d557a0d6bd4c'}
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - INFO - human readable ocs version info written into /home/jenkins/current-cluster-dir/openshift-cluster-dir/ocs_version.2024-08-14T15:30:48.979953
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - INFO - PagerDuty service is not created because platform from ['openshiftdedicated', 'rosa', 'fusion_aas'] is not used
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: dosypenk-OCS4-16-Downstream-OCP4-16-ROSA_HCP-MANAGED-1AZ-RHCOS-0M-3W-tier1
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - MainThread - tests.conftest - ERROR - upgrade mark does not exist
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-2 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get StorageClass -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-3 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get CephBlockPool -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-4 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get CephFileSystem -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-5 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get Pod -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-6 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get PersistentVolumeClaim -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-7 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get PersistentVolume -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-8 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get volumesnapshot -A -o yaml
[2024-08-14T19:30:49.136Z] 15:30:48 - Dummy-9 - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get Namespace -A -o yaml
[2024-08-14T19:30:57.200Z] 15:30:56 - MainThread - tests.conftest - INFO - Checking for Ceph Health OK
[2024-08-14T19:30:57.200Z] 15:30:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-08-14T19:30:57.760Z] 15:30:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-08-14T19:30:58.318Z] 15:30:58 - MainThread - ocs_ci.ocs.resources.pod - INFO - These are the ceph tool box pods: ['rook-ceph-tools-75f778d7b4-cdqtg']
[2024-08-14T19:30:58.318Z] 15:30:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-tools-75f778d7b4-cdqtg -n odf-storage
[2024-08-14T19:30:58.876Z] 15:30:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage -o yaml
[2024-08-14T19:31:03.031Z] 15:31:02 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod name: rook-ceph-tools-75f778d7b4-cdqtg
[2024-08-14T19:31:03.031Z] 15:31:02 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod status: Running
[2024-08-14T19:31:03.031Z] 15:31:02 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n odf-storage rsh rook-ceph-tools-75f778d7b4-cdqtg ceph health
[2024-08-14T19:31:04.388Z] 15:31:04 - MainThread - ocs_ci.utility.utils - INFO - Ceph cluster health is HEALTH_OK.
[2024-08-14T19:31:04.388Z] 15:31:04 - MainThread - tests.conftest - INFO - Ceph health check passed at setup
[2024-08-14T19:31:04.388Z] 15:31:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ['oc', 'login', '-u', 'cluster-admin', '-p', '*****']
[2024-08-14T19:31:07.640Z] 15:31:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-monitoring whoami --show-token
[2024-08-14T19:31:07.640Z] 15:31:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-monitoring get Route prometheus-k8s -n openshift-monitoring -o yaml
[2024-08-14T19:31:08.199Z] 15:31:07 - MainThread - tests.conftest - ERROR - There was a problem with connecting to Prometheus
[2024-08-14T19:31:08.199Z] Traceback (most recent call last):
[2024-08-14T19:31:08.199Z] File "/home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/tests/conftest.py", line 3841, in log_alerts
[2024-08-14T19:31:08.199Z] prometheus = PrometheusAPI(threading_lock=threading_lock)
[2024-08-14T19:31:08.199Z] File "/home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/ocs_ci/utility/prometheus.py", line 349, in __init__
[2024-08-14T19:31:08.199Z] self.generate_cert()
[2024-08-14T19:31:08.199Z] File "/home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/ocs_ci/utility/prometheus.py", line 391, in generate_cert
[2024-08-14T19:31:08.199Z] kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"]
[2024-08-14T19:31:08.199Z] KeyError: 'certificate-authority-data'
[2024-08-14T19:31:08.199Z] 15:31:07 - MainThread - tests.conftest - ERROR - There was a problem with collecting alerts for analysis
[2024-08-14T19:31:08.199Z] Traceback (most recent call last):
[2024-08-14T19:31:08.199Z] File "/home/jenkins/workspace/qe-deploy-ocs-cluster/ocs-ci/tests/conftest.py", line 3847, in _collect_alerts
[2024-08-14T19:31:08.199Z] alerts_response = prometheus.get(
[2024-08-14T19:31:08.199Z] AttributeError: 'NoneType' object has no attribute 'get'
[2024-08-14T19:31:08.199Z] 15:31:07 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - duration reported by tests/functional/z_cluster/test_ceph_default_values_check.py::TestCephDefaultValuesCheck::test_check_mds_cache_memory_limit immediately after test execution: 25.36
[2024-08-14T19:31:08.199Z] -------------------------------- live log call ---------------------------------
[2024-08-14T19:31:08.199Z] 15:31:07 - MainThread - tests.functional.z_cluster.test_ceph_default_values_check - INFO - Getting the mds cache memory limit
[2024-08-14T19:31:08.199Z] 15:31:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-08-14T19:31:08.758Z] 15:31:08 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-08-14T19:31:09.315Z] 15:31:09 - MainThread - ocs_ci.ocs.resources.pod - INFO - These are the ceph tool box pods: ['rook-ceph-tools-75f778d7b4-cdqtg']
[2024-08-14T19:31:09.315Z] 15:31:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-tools-75f778d7b4-cdqtg -n odf-storage
[2024-08-14T19:31:09.873Z] 15:31:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage -o yaml
[2024-08-14T19:31:13.127Z] 15:31:12 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod name: rook-ceph-tools-75f778d7b4-cdqtg
[2024-08-14T19:31:13.127Z] 15:31:12 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod status: Running
[2024-08-14T19:31:13.127Z] 15:31:12 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n odf-storage rsh rook-ceph-tools-75f778d7b4-cdqtg ceph config show mds.ocs-storagecluster-cephfilesystem-a mds_cache_memory_limit --format json-pretty
[2024-08-14T19:31:15.006Z] 15:31:14 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n odf-storage rsh rook-ceph-tools-75f778d7b4-cdqtg ceph config show mds.ocs-storagecluster-cephfilesystem-b mds_cache_memory_limit --format json-pretty
[2024-08-14T19:31:16.887Z] 15:31:16 - MainThread - tests.conftest - ERROR - 'assert 1073741824 == 3221225472' failed
[2024-08-14T19:31:17.445Z] 15:31:17 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - duration reported by tests/functional/z_cluster/test_ceph_default_values_check.py::TestCephDefaultValuesCheck::test_check_mds_cache_memory_limit immediately after test execution: 8.71
[2024-08-14T19:31:17.445Z] FAILED
[2024-08-14T19:31:17.445Z] _________ TestCephDefaultValuesCheck.test_check_mds_cache_memory_limit _________
[2024-08-14T19:31:17.445Z]
[2024-08-14T19:31:17.445Z] self = <tests.functional.z_cluster.test_ceph_default_values_check.TestCephDefaultValuesCheck object at 0x7f22ad9f7eb0>
[2024-08-14T19:31:17.445Z]
[2024-08-14T19:31:17.445Z] @post_ocs_upgrade
[2024-08-14T19:31:17.445Z] @skipif_external_mode
[2024-08-14T19:31:17.445Z] @skipif_mcg_only
[2024-08-14T19:31:17.445Z] @bugzilla("1951348")
[2024-08-14T19:31:17.445Z] @bugzilla("1944148")
[2024-08-14T19:31:17.445Z] @pytest.mark.polarion_id("OCS-2554")
[2024-08-14T19:31:17.445Z] def test_check_mds_cache_memory_limit(self):
[2024-08-14T19:31:17.445Z] """
[2024-08-14T19:31:17.445Z] Testcase to check mds cache memory limit post ocs upgrade
[2024-08-14T19:31:17.445Z]
[2024-08-14T19:31:17.445Z] """
[2024-08-14T19:31:17.445Z] pod_obj = OCP(
[2024-08-14T19:31:17.445Z] kind=constants.POD, namespace=config.ENV_DATA["cluster_namespace"]
[2024-08-14T19:31:17.445Z] )
[2024-08-14T19:31:17.445Z]
[2024-08-14T19:31:17.445Z] try:
[2024-08-14T19:31:17.445Z] log.info("Getting the mds cache memory limit")
[2024-08-14T19:31:17.445Z] mds_cache_memory_limit = get_mds_cache_memory_limit()
[2024-08-14T19:31:17.445Z] except (IOError, CommandFailed) as ex:
[2024-08-14T19:31:17.445Z] if "ENOENT" in str(ex):
[2024-08-14T19:31:17.445Z] log.info("Restarting the mds pods")
[2024-08-14T19:31:17.445Z] mds_pods = pod.get_mds_pods()
[2024-08-14T19:31:17.445Z] pod.delete_pods(mds_pods)
[2024-08-14T19:31:17.445Z] log.info("Wait for the mds pods to be running")
[2024-08-14T19:31:17.445Z] pod_obj.wait_for_resource(
[2024-08-14T19:31:17.445Z] condition=constants.STATUS_RUNNING,
[2024-08-14T19:31:17.445Z] selector=constants.MDS_APP_LABEL,
[2024-08-14T19:31:17.445Z] resource_count=len(mds_pods),
[2024-08-14T19:31:17.445Z] timeout=30,
[2024-08-14T19:31:17.445Z] )
[2024-08-14T19:31:17.445Z] log.info("Trying to get the mds cache memory limit again")
[2024-08-14T19:31:17.445Z] mds_cache_memory_limit = retry(
[2024-08-14T19:31:17.445Z] CommandFailed,
[2024-08-14T19:31:17.445Z] tries=4,
[2024-08-14T19:31:17.445Z] delay=10,
[2024-08-14T19:31:17.445Z] backoff=1,
[2024-08-14T19:31:17.445Z] )(get_mds_cache_memory_limit)()
[2024-08-14T19:31:17.445Z]
[2024-08-14T19:31:17.445Z] expected_mds_value = 3221225472
[2024-08-14T19:31:17.445Z] expected_mds_value_in_GB = int(expected_mds_value / 1073741274)
[2024-08-14T19:31:17.445Z] > assert mds_cache_memory_limit == expected_mds_value, (
[2024-08-14T19:31:17.445Z] f"mds_cache_memory_limit is not set with a value of {expected_mds_value_in_GB}GB. "
[2024-08-14T19:31:17.445Z] f"MDS cache memory limit is set : {mds_cache_memory_limit}B "
[2024-08-14T19:31:17.445Z] )
[2024-08-14T19:31:17.445Z] E AssertionError: mds_cache_memory_limit is not set with a value of 3GB. MDS cache memory limit is set : 1073741824B
[2024-08-14T19:31:17.445Z] E assert 1073741824 == 3221225472
[2024-08-14T19:31:17.445Z] E +1073741824
[2024-08-14T19:31:17.445Z] E -3221225472
[2024-08-14T19:31:17.445Z]
[2024-08-14T19:31:17.445Z] tests/functional/z_cluster/test_ceph_default_values_check.py:182: AssertionError
We need to change the expected mds value to "1073741824" when using a lower-requirement deployment. See these failed tests, for example: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/632/18323/892515/892517/log, https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/632/19054/926010/926016/log.
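
As a rough sketch of how the assertion in test_check_mds_cache_memory_limit could consume that value, assuming a helper like the one sketched above is available (also note that the existing conversion in the traceback divides by 1073741274, which looks like a typo for 1073741824):

expected_mds_value = get_expected_mds_cache_memory_limit(config.ENV_DATA)  # assumed helper, not existing ocs-ci code
expected_mds_value_in_GB = int(expected_mds_value / 1073741824)  # bytes -> GiB
assert mds_cache_memory_limit == expected_mds_value, (
    f"mds_cache_memory_limit is not set with a value of {expected_mds_value_in_GB}GB. "
    f"MDS cache memory limit is set : {mds_cache_memory_limit}B "
)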