Due to a previous execution of tests/functional/storageclass/test_replica1.py::TestReplicaOne::test_configure_replica1 on the same cluster, the deviceClasses will include the name of each failure domain, corresponding to the replica 1 pools.
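For reference, the failing check reads this list from the CephCluster status. A minimal sketch using the same ocs-ci primitives as the check itself; the values shown in the comments are illustrative (the rack names match the rack1 seen in the failure below):

from ocs_ci.framework import config
from ocs_ci.ocs.ocp import OCP

# Read status.storage.deviceClasses from the CephCluster CR, exactly as
# verify_storage_device_class() does in the traceback below.
cephcluster = OCP(
    kind="CephCluster", namespace=config.ENV_DATA["cluster_namespace"]
)
device_classes = cephcluster.get()["items"][0]["status"]["storage"]["deviceClasses"]
print(device_classes)
# Without replica-1:      [{'name': 'ssd'}]
# With replica-1 enabled: e.g. [{'name': 'ssd'}, {'name': 'rack0'},
#                               {'name': 'rack1'}, {'name': 'rack2'}]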
Comment from Travis Nielsen:
Travis Nielsen
Sep 10th at 4:19 PM
@Daniel Osypenko
Did you enable replica 1?
Daniel Osypenko
Sep 10th at 4:23 PM
we have a new test that configured replica1 if I am not mistaken
test_configure_replica1
@Aviad Polak
Travis Nielsen
Sep 10th at 4:24 PM
@Daniel Osypenko
In that case, the deviceClasses will include the name of each failure domain, corresponding to the replica 1 pools, so it is expected
We need to correct the test's expected behavior to work with both scenarios and anticipate not only "ssd" but the failure-domain device class names as well.
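A minimal sketch of one way to relax the check in verify_storage_device_class() (ocs_ci/ocs/resources/storage_cluster.py). It assumes the StorageCluster status exposes the failure domain values (failureDomainValues); the helper name and exact lookup are illustrative, not the final implementation:

from ocs_ci.framework import config
from ocs_ci.ocs.ocp import OCP


def get_allowed_device_classes(device_class):
    """
    Collect the device classes that may legitimately appear in
    CephCluster status.storage.deviceClasses: the default/expected one
    plus the failure-domain names used by the replica 1 pools.
    """
    storage_cluster = OCP(
        kind="StorageCluster", namespace=config.ENV_DATA["cluster_namespace"]
    )
    sc_status = storage_cluster.get()["items"][0].get("status", {})
    # failureDomainValues is an assumption here; fall back to an empty list
    # when the field is absent (e.g. replica-1 was never configured).
    failure_domains = sc_status.get("failureDomainValues", [])
    return {device_class, *failure_domains}


# In verify_storage_device_class(), the strict equality could then become a
# membership check, e.g.:
#     allowed = get_allowed_device_classes(device_class)
#     for each_device_class in device_classes:
#         name = each_device_class["name"]
#         assert (
#             name in allowed
#         ), f"deviceClass is set to {name} but it should be one of {allowed}"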
The same error occurred on another execution of tests/functional/z_cluster/cluster_expansion/test_add_capacity.py::TestAddCapacity::test_add_capacity_ui:
[2024-11-22T11:39:38.153Z] tests/functional/z_cluster/cluster_expansion/test_add_capacity.py::TestAddCapacity::test_add_capacity_ui
[2024-11-22T11:39:38.153Z] -------------------------------- live log setup --------------------------------
[2024-11-22T11:39:38.153Z] 06:39:38 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: dosypenk-OCS4-17-Downstream-OCP4-17-ROSA_HCP-MANAGED_CP-1AZ-RHCOS-0M-3W-tier1
[2024-11-22T11:39:38.407Z] 06:39:38 - MainThread - tests.conftest - INFO - Checking for Ceph Health OK
[2024-11-22T11:39:38.408Z] 06:39:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-11-22T11:39:38.966Z] 06:39:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-11-22T11:39:39.890Z] 06:39:39 - MainThread - ocs_ci.ocs.resources.pod - INFO - These are the ceph tool box pods: ['rook-ceph-tools-7f4f8b8c64-mjhwr']
[2024-11-22T11:39:39.890Z] 06:39:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-tools-7f4f8b8c64-mjhwr -n odf-storage
[2024-11-22T11:39:40.144Z] 06:39:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage -o yaml
[2024-11-22T11:39:44.297Z] 06:39:44 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod name: rook-ceph-tools-7f4f8b8c64-mjhwr
[2024-11-22T11:39:44.297Z] 06:39:44 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod status: Running
[2024-11-22T11:39:44.297Z] 06:39:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n odf-storage rsh rook-ceph-tools-7f4f8b8c64-mjhwr ceph health
[2024-11-22T11:39:46.802Z] 06:39:46 - MainThread - ocs_ci.utility.utils - INFO - Ceph cluster health is HEALTH_OK.
[2024-11-22T11:39:46.803Z] 06:39:46 - MainThread - tests.conftest - INFO - Ceph health check passed at setup
[2024-11-22T11:39:46.803Z] 06:39:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ['oc', 'login', '-u', 'cluster-admin', '-p', '*****']
[2024-11-22T11:39:50.057Z] 06:39:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-monitoring whoami --show-token
[2024-11-22T11:39:50.057Z] 06:39:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-monitoring get Route prometheus-k8s -n openshift-monitoring -o yaml
[2024-11-22T11:39:50.980Z] 06:39:50 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - duration reported by tests/functional/z_cluster/cluster_expansion/test_add_capacity.py::TestAddCapacity::test_add_capacity_ui immediately after test execution: 12.65
[2024-11-22T11:39:50.980Z] -------------------------------- live log call ---------------------------------
[2024-11-22T11:39:50.980Z] 06:39:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get storagecluster -n odf-storage -o yaml
[2024-11-22T11:39:51.544Z] 06:39:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-osd -o yaml
[2024-11-22T11:39:52.904Z] 06:39:52 - MainThread - ocs_ci.ocs.ui.helpers_ui - INFO - Add capacity via UI is not supported on platform rosa_hcp
[2024-11-22T11:39:52.905Z] 06:39:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get storagecluster -n odf-storage -o yaml
[2024-11-22T11:39:53.466Z] 06:39:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get storagecluster -n odf-storage -o yaml
[2024-11-22T11:39:54.025Z] 06:39:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get storagecluster -n odf-storage -o yaml
[2024-11-22T11:39:54.948Z] 06:39:54 - MainThread - ocs_ci.ocs.ocp - INFO - Command: patch storagecluster ocs-storagecluster -n odf-storage -p '[{ "op": "replace", "path": "/spec/storageDeviceSets/0/count",
[2024-11-22T11:39:54.948Z] "value": 2}]' --type json
[2024-11-22T11:39:54.948Z] 06:39:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n odf-storage patch storagecluster ocs-storagecluster -n odf-storage -p '[{ "op": "replace", "path": "/spec/storageDeviceSets/0/count",
[2024-11-22T11:39:54.948Z] "value": 2}]' --type json
[2024-11-22T11:39:55.506Z] 06:39:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-osd -o yaml
[2024-11-22T11:39:56.866Z] 06:39:56 - MainThread - tests.functional.z_cluster.cluster_expansion.test_add_capacity - INFO - Checking if existing OSD pods were restarted (deleted) post adding capacity (bug 1931601)
[2024-11-22T11:39:56.866Z] 06:39:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get storagecluster -n odf-storage -o yaml
[2024-11-22T11:39:57.789Z] 06:39:57 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name '' using selector app=rook-ceph-osd at column name STATUS to reach desired condition Running
[2024-11-22T11:39:57.789Z] 06:39:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-osd -o yaml
[2024-11-22T11:39:59.148Z] 06:39:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-osd-0-8d76bb45d-9gktg -n odf-storage
[2024-11-22T11:39:59.739Z] 06:39:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage -o yaml
[2024-11-22T11:40:03.895Z] 06:40:03 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-osd-1-599cf9b8b8-sgqhj -n odf-storage
[2024-11-22T11:40:04.454Z] 06:40:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-osd-2-c6fbd68cd-cdc2c -n odf-storage
[2024-11-22T11:40:05.013Z] 06:40:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-osd-3-75f97478f9-rjbvt -n odf-storage
[2024-11-22T11:40:05.938Z] 06:40:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-osd-4-655858cc68-wkcln -n odf-storage
[2024-11-22T11:40:06.497Z] 06:40:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-osd-5-58cc485497-q8xfn -n odf-storage
[2024-11-22T11:40:07.057Z] 06:40:06 - MainThread - ocs_ci.ocs.ocp - INFO - 6 resources already reached condition!
[2024-11-22T11:40:07.057Z] 06:40:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig get operator odf-operator.odf-storage -o yaml
[2024-11-22T11:40:07.615Z] 06:40:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-marketplace get CatalogSource -n openshift-marketplace --selector=ocs-operator-internal=true -o yaml
[2024-11-22T11:40:08.173Z] 06:40:08 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-marketplace get OperatorSource ocs-operatorsource -n openshift-marketplace -o yaml
[2024-11-22T11:40:09.096Z] 06:40:08 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: error: the server doesn't have a resource type "OperatorSource"
[2024-11-22T11:40:09.096Z]
[2024-11-22T11:40:09.096Z] 06:40:08 - MainThread - ocs_ci.ocs.ocp - WARNING - Failed to get resource: ocs-operatorsource of kind: OperatorSource, selector: None, Error: Error during execution of command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-marketplace get OperatorSource ocs-operatorsource -n openshift-marketplace -o yaml.
[2024-11-22T11:40:09.096Z] Error is error: the server doesn't have a resource type "OperatorSource"
[2024-11-22T11:40:09.096Z]
[2024-11-22T11:40:09.096Z] 06:40:08 - MainThread - ocs_ci.ocs.ocp - WARNING - Number of attempts to get resource reached!
[2024-11-22T11:40:09.096Z] 06:40:08 - MainThread - ocs_ci.ocs.resources.packagemanifest - INFO - Catalog source not found!
[2024-11-22T11:40:09.096Z] 06:40:08 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n openshift-marketplace get packagemanifest ocs-operator -n openshift-marketplace -o yaml
[2024-11-22T11:40:09.655Z] 06:40:09 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Check if OCS operator: ocs-operator.v4.17.0-rhodf is in Succeeded phase.
[2024-11-22T11:40:09.655Z] 06:40:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get csv ocs-operator.v4.17.0-rhodf -n odf-storage -o yaml
[2024-11-22T11:40:10.578Z] 06:40:10 - MainThread - ocs_ci.ocs.ocp - INFO - Resource ocs-operator.v4.17.0-rhodf is in phase: Succeeded!
[2024-11-22T11:40:10.578Z] 06:40:10 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get csv ocs-operator.v4.17.0-rhodf -n odf-storage -o yaml
[2024-11-22T11:40:11.501Z] 06:40:11 - MainThread - ocs_ci.utility.utils - INFO - No previous version detected in cluster
[2024-11-22T11:40:11.501Z] 06:40:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get StorageCluster ocs-storagecluster -n odf-storage -o yaml
[2024-11-22T11:40:12.059Z] 06:40:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-11-22T11:40:12.617Z] 06:40:12 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage --selector=app=rook-ceph-tools -o yaml
[2024-11-22T11:40:13.177Z] 06:40:13 - MainThread - ocs_ci.ocs.resources.pod - INFO - These are the ceph tool box pods: ['rook-ceph-tools-7f4f8b8c64-mjhwr']
[2024-11-22T11:40:13.177Z] 06:40:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod rook-ceph-tools-7f4f8b8c64-mjhwr -n odf-storage
[2024-11-22T11:40:14.104Z] 06:40:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get Pod -n odf-storage -o yaml
[2024-11-22T11:40:18.258Z] 06:40:18 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod name: rook-ceph-tools-7f4f8b8c64-mjhwr
[2024-11-22T11:40:18.258Z] 06:40:18 - MainThread - ocs_ci.ocs.resources.pod - INFO - Pod status: Running
[2024-11-22T11:40:18.258Z] 06:40:18 - MainThread - ocs_ci.ocs.resources.storage_cluster - INFO - Verifying crushDeviceClass for storageClassDeviceSets
[2024-11-22T11:40:18.258Z] 06:40:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc --kubeconfig /home/jenkins/current-cluster-dir/openshift-cluster-dir/auth/kubeconfig -n odf-storage get CephCluster -n odf-storage -o yaml
[2024-11-22T11:40:19.181Z] 06:40:18 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - duration reported by tests/functional/z_cluster/cluster_expansion/test_add_capacity.py::TestAddCapacity::test_add_capacity_ui immediately after test execution: 28.08
[2024-11-22T11:40:19.435Z] FAILED
[2024-11-22T11:40:19.435Z] _____________________ TestAddCapacity.test_add_capacity_ui _____________________
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] self = <tests.functional.z_cluster.cluster_expansion.test_add_capacity.TestAddCapacity object at 0x7f6253ee7820>
[2024-11-22T11:40:19.435Z] reduce_and_resume_cluster_load = None
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] @tier1
[2024-11-22T11:40:19.435Z] @black_squad
[2024-11-22T11:40:19.435Z] def test_add_capacity_ui(self, reduce_and_resume_cluster_load):
[2024-11-22T11:40:19.435Z] """
[2024-11-22T11:40:19.435Z] Add capacity on non-lso cluster via UI on tier1 suite
[2024-11-22T11:40:19.435Z] """
[2024-11-22T11:40:19.435Z] > add_capacity_test(ui_flag=True)
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] tests/functional/z_cluster/cluster_expansion/test_add_capacity.py:150:
[2024-11-22T11:40:19.435Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[2024-11-22T11:40:19.435Z] tests/functional/z_cluster/cluster_expansion/test_add_capacity.py:113: in add_capacity_test
[2024-11-22T11:40:19.435Z] verify_storage_device_class(device_class)
[2024-11-22T11:40:19.435Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] device_class = 'ssd'
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] def verify_storage_device_class(device_class):
[2024-11-22T11:40:19.435Z] """
[2024-11-22T11:40:19.435Z] Verifies the parameters of storageClassDeviceSets in CephCluster.
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] For internal deployments, if user is not specified any DeviceClass in the StorageDeviceSet, then
[2024-11-22T11:40:19.435Z] tunefastDeviceClass will be true and
[2024-11-22T11:40:19.435Z] crushDeviceClass will set to "ssd"
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] Args:
[2024-11-22T11:40:19.435Z] device_class (str): Name of the device class
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] """
[2024-11-22T11:40:19.435Z] # If the user has not provided any specific DeviceClass in the StorageDeviceSet for internal deployment then
[2024-11-22T11:40:19.435Z] # tunefastDeviceClass will be true and crushDeviceClass will set to "ssd"
[2024-11-22T11:40:19.435Z] log.info("Verifying crushDeviceClass for storageClassDeviceSets")
[2024-11-22T11:40:19.435Z] cephcluster = OCP(
[2024-11-22T11:40:19.435Z] kind="CephCluster", namespace=config.ENV_DATA["cluster_namespace"]
[2024-11-22T11:40:19.435Z] )
[2024-11-22T11:40:19.435Z] cephcluster_data = cephcluster.get()
[2024-11-22T11:40:19.435Z] storage_class_device_sets = cephcluster_data["items"][0]["spec"]["storage"][
[2024-11-22T11:40:19.435Z] "storageClassDeviceSets"
[2024-11-22T11:40:19.435Z] ]
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] for each_devise_set in storage_class_device_sets:
[2024-11-22T11:40:19.435Z] # check tuneFastDeviceClass
[2024-11-22T11:40:19.435Z] device_set_name = each_devise_set["name"]
[2024-11-22T11:40:19.435Z] if config.ENV_DATA.get("tune_fast_device_class"):
[2024-11-22T11:40:19.435Z] tune_fast_device_class = each_devise_set["tuneFastDeviceClass"]
[2024-11-22T11:40:19.435Z] msg = f"tuneFastDeviceClass for {device_set_name} is set to {tune_fast_device_class}"
[2024-11-22T11:40:19.435Z] log.debug(msg)
[2024-11-22T11:40:19.435Z] assert (
[2024-11-22T11:40:19.435Z] tune_fast_device_class
[2024-11-22T11:40:19.435Z] ), f"{msg} when {constants.DEVICECLASS} is not selected explicitly"
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] # check crushDeviceClass
[2024-11-22T11:40:19.435Z] crush_device_class = each_devise_set["volumeClaimTemplates"][0]["metadata"][
[2024-11-22T11:40:19.435Z] "annotations"
[2024-11-22T11:40:19.435Z] ]["crushDeviceClass"]
[2024-11-22T11:40:19.435Z] crush_device_class_msg = (
[2024-11-22T11:40:19.435Z] f"crushDeviceClass for {device_set_name} is set to {crush_device_class}"
[2024-11-22T11:40:19.435Z] )
[2024-11-22T11:40:19.435Z] log.debug(crush_device_class_msg)
[2024-11-22T11:40:19.435Z] assert (
[2024-11-22T11:40:19.435Z] crush_device_class == device_class
[2024-11-22T11:40:19.435Z] ), f"{crush_device_class_msg} but it should be set to {device_class}"
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] # get deviceClasses for overall storage
[2024-11-22T11:40:19.435Z] device_classes = cephcluster_data["items"][0]["status"]["storage"]["deviceClasses"]
[2024-11-22T11:40:19.435Z] log.debug(f"deviceClasses are {device_classes}")
[2024-11-22T11:40:19.435Z] for each_device_class in device_classes:
[2024-11-22T11:40:19.435Z] device_class_name = each_device_class["name"]
[2024-11-22T11:40:19.435Z] > assert (
[2024-11-22T11:40:19.435Z] device_class_name == device_class
[2024-11-22T11:40:19.435Z] ), f"deviceClass is set to {device_class_name} but it should be set to {device_class}"
[2024-11-22T11:40:19.435Z] E       AssertionError: deviceClass is set to rack1 but it should be set to ssd
[2024-11-22T11:40:19.435Z]
[2024-11-22T11:40:19.435Z] ocs_ci/ocs/resources/storage_cluster.py:1104: AssertionError
[2024-11-22T11:40:19.435Z]
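For triage on an affected cluster, the CRUSH device classes that Ceph actually reports can be listed from the toolbox pod. A short sketch, assuming the usual ocs-ci toolbox helpers (get_ceph_tools_pod / exec_ceph_cmd); the output values are illustrative:

from ocs_ci.ocs.resources.pod import get_ceph_tools_pod

# "ceph osd crush class ls" lists the device classes known to CRUSH; on a
# cluster where test_configure_replica1 ran, the failure-domain names are
# expected to show up next to "ssd".
toolbox = get_ceph_tools_pod()
crush_classes = toolbox.exec_ceph_cmd("ceph osd crush class ls")
print(crush_classes)  # e.g. ["ssd", "rack0", "rack1", "rack2"]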