Closed: jgato closed this issue 2 years ago
I have created a second LVMCluster because I thought this would collect the new disks. Now I understand that only one LVMCluster is supported; could this have interfered?
It should not. Do you see any errors in the controller-manager or vg-manager log files?
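For reference, something like the following should pull both logs (the namespace and the vg-manager daemonset name are taken from this thread; the controller-manager pod name is an assumption, so list the pods first and adjust to your install):

$ oc -n lvm-operator-system get pods
$ oc -n lvm-operator-system logs <controller-manager-pod>   # operator controller-manager logs
$ oc -n lvm-operator-system logs ds/vg-manager               # vg-manager daemonset logs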
From the controller-manager:
{
"level": "error",
"ts": 1644229480.7411144,
"logger": "controller.lvmcluster.lvmcluster-controller",
"msg": "failed to create or update vgManager daemonset",
"reconciler group": "lvm.topolvm.io",
"reconciler kind": "LVMCluster",
"name": "vg-manager",
"namespace": "lvm-operator-system",
"error": "failed to update controller reference on vgManager daemonset \"vg-manager\". Object lvm-operator-system/vg-manager is already owned by another LVMCluster controller lvmcluster-sample",
"stacktrace": "github.com/red-hat-storage/lvm-operator/controllers.(*LVMClusterReconciler).Reconcile\n\t/workspace/controllers/lvmcluster_controller.go:104\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:227"
}
{
"level": "error",
"ts": 1644229480.7504883,
"logger": "controller.lvmcluster",
"msg": "Reconciler error",
"reconciler group": "lvm.topolvm.io",
"reconciler kind": "LVMCluster",
"name": "lvmcluster-sample-2",
"namespace": "lvm-operator-system",
"error": "failed reconciling: vg-manager failed to update controller reference on vgManager daemonset \"vg-manager\". Object lvm-operator-system/vg-manager is already owned by another LVMCluster controller lvmcluster-sample",
"stacktrace": "sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.2/pkg/internal/controller/controller.go:227"
}
No errors in the vg-manager.
So it could be that creating the second LVMCluster is causing the problem. Do you want me to delete it?
Yes, please delete the second LVMCluster.
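Something like this should do it (the lvmcluster-sample-2 name comes from the error above, and the namespace from this thread; adjust as needed):

$ oc -n lvm-operator-system get lvmcluster
$ oc -n lvm-operator-system delete lvmcluster lvmcluster-sample-2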
OK, I have cleaned everything up. The second LVMCluster was causing problems, and having more than one is not supported yet. I have repeated the process:
Phase 1)
Device Class Statuses:
Name: vg1
Node Status:
Devices:
/dev/nvme0n1
/dev/nvme1n1
/dev/sda
/dev/sdc
/dev/sdd
/dev/sdg
Node: master-0.apollo2.hpecloud.org
[root@master-0 core]# vgs
VG #PV #LV #SN Attr VSize VFree
vg1 6 6 0 wz--n- 6.91t 6.90t
[root@master-0 core]# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 vg1 lvm2 a-- 745.21g 745.21g
/dev/nvme1n1 vg1 lvm2 a-- 745.21g 745.21g
/dev/sda vg1 lvm2 a-- <2.73t <2.72t
/dev/sdc vg1 lvm2 a-- 931.48g 931.48g
/dev/sdd vg1 lvm2 a-- 931.48g 931.48g
/dev/sdg vg1 lvm2 a-- 931.48g 931.48g
Phase 2)
# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 vg1 lvm2 a-- 745.21g 745.21g
/dev/nvme1n1 vg1 lvm2 a-- 745.21g 745.21g
/dev/sda vg1 lvm2 a-- <2.73t <2.72t
/dev/sdc vg1 lvm2 a-- 931.48g 931.48g
/dev/sdd vg1 lvm2 a-- 931.48g 931.48g
/dev/sde vg1 lvm2 a-- 931.48g 931.48g
/dev/sdf vg1 lvm2 a-- 931.48g 931.48g
/dev/sdg vg1 lvm2 a-- 931.48g 931.48g
* But the new disks (/dev/sde and /dev/sdf) are still not recognized by the LVMCluster:
Device Class Statuses:
Name: vg1
Node Status:
Devices:
/dev/nvme0n1
/dev/nvme1n1
/dev/sda
/dev/sdc
/dev/sdd
/dev/sdg
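For completeness, these are the queries I use to see what the operator has recorded (the same ones shown in the report below; the namespace is assumed to be lvm-operator-system):

$ oc get lvmvolumegroupnodestatuses -o yaml
$ oc -n lvm-operator-system get lvmcluster lvmcluster-sample -o yaml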
I had one SNO with an LVMCluster created to manage the VG vg1 (/dev/nvme0n1, /dev/nvme1n1, /dev/sda). I wanted to test the addition of new disks, so I rebooted and created some more disks from the RAID I have in the server.
After creating the new disks, the SNO is rebooted and:
$ oc get lvmvolumegroupnodestatuses -o yaml
apiVersion: v1
items:
$ oc get lvmcluster lvmcluster-sample -o yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"lvm.topolvm.io/v1alpha1","kind":"LVMCluster","metadata":{"annotations":{},"name":"lvmcluster-sample","namespace":"lvm-operator-system"},"spec":{"deviceClasses":[{"name":"vg1"}]}}
  creationTimestamp: "2022-02-05T18:56:54Z"
  finalizers: