Closed: wibed closed this issue 1 year ago
I increased the verbosity level to 8:
kubectl --kubeconfig ../kubeconfig -v=8 -n mayastor get MayastorPool
I0726 09:35:56.224834 13072 loader.go:373] Config loaded from file: ../kubeconfig
I0726 09:35:56.243062 13072 discovery.go:214] Invalidating discovery information
I0726 09:35:56.243561 13072 round_trippers.go:463] GET https://192.168.1.93:6443/api?timeout=32s
I0726 09:35:56.243573 13072 round_trippers.go:469] Request Headers:
I0726 09:35:56.243585 13072 round_trippers.go:473] Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json
I0726 09:35:56.243596 13072 round_trippers.go:473] User-Agent: kubectl1.27.3/v1.27.3 (darwin/amd64) kubernetes/25b4e43
I0726 09:35:56.276489 13072 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
I0726 09:35:56.276527 13072 round_trippers.go:577] Response Headers:
I0726 09:35:56.276542 13072 round_trippers.go:580] Vary: Accept
I0726 09:35:56.276554 13072 round_trippers.go:580] X-From-Cache: 1
I0726 09:35:56.276561 13072 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 5261d514-7ec0-4ca2-ad61-b839c5bb23d5
I0726 09:35:56.276568 13072 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 97ee87bd-f9c5-4d90-b48a-b0f60b73bdae
I0726 09:35:56.276579 13072 round_trippers.go:580] Audit-Id: 87b7257f-80ab-4ec6-b946-d766d4664a29
I0726 09:35:56.276586 13072 round_trippers.go:580] Content-Type: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList
I0726 09:35:56.276593 13072 round_trippers.go:580] X-Varied-Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json
I0726 09:35:56.276600 13072 round_trippers.go:580] Cache-Control: public
I0726 09:35:56.276606 13072 round_trippers.go:580] Date: Wed, 26 Jul 2023 07:35:56 GMT
I0726 09:35:56.276621 13072 round_trippers.go:580] Etag: "7E2E4FBD8CDC884130EBFEF64757F6BC507A8EA6A2EA8F94E05F1B6F207B4E183C63D6D6EA5513FDDC10FC16233248F933DC787C288493D52745A3FAFD68CCD8"
I0726 09:35:56.278345 13072 request.go:1188] Response Body: {"kind":"APIGroupDiscoveryList","apiVersion":"apidiscovery.k8s.io/v2beta1","metadata":{},"items":[{"metadata":{"creationTimestamp":null},"versions":[{"version":"v1","resources":[{"resource":"bindings","responseKind":{"group":"","version":"","kind":"Binding"},"scope":"Namespaced","singularResource":"binding","verbs":["create"]},{"resource":"componentstatuses","responseKind":{"group":"","version":"","kind":"ComponentStatus"},"scope":"Cluster","singularResource":"componentstatus","verbs":["get","list"],"shortNames":["cs"]},{"resource":"configmaps","responseKind":{"group":"","version":"","kind":"ConfigMap"},"scope":"Namespaced","singularResource":"configmap","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["cm"]},{"resource":"endpoints","responseKind":{"group":"","version":"","kind":"Endpoints"},"scope":"Namespaced","singularResource":"endpoints","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ep"]},{"resource":" [truncated 5945 chars]
I0726 09:35:56.279365 13072 round_trippers.go:463] GET https://192.168.1.93:6443/apis?timeout=32s
I0726 09:35:56.279374 13072 round_trippers.go:469] Request Headers:
I0726 09:35:56.279383 13072 round_trippers.go:473] Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json
I0726 09:35:56.279391 13072 round_trippers.go:473] User-Agent: kubectl1.27.3/v1.27.3 (darwin/amd64) kubernetes/25b4e43
I0726 09:35:56.284455 13072 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0726 09:35:56.284470 13072 round_trippers.go:577] Response Headers:
I0726 09:35:56.284477 13072 round_trippers.go:580] Cache-Control: public
I0726 09:35:56.284486 13072 round_trippers.go:580] Content-Type: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList
I0726 09:35:56.284493 13072 round_trippers.go:580] Vary: Accept
I0726 09:35:56.284504 13072 round_trippers.go:580] X-From-Cache: 1
I0726 09:35:56.284510 13072 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 5261d514-7ec0-4ca2-ad61-b839c5bb23d5
I0726 09:35:56.284517 13072 round_trippers.go:580] Audit-Id: 0cec11e9-2854-4c6d-becf-8bd55637040d
I0726 09:35:56.284523 13072 round_trippers.go:580] Date: Wed, 26 Jul 2023 07:35:56 GMT
I0726 09:35:56.284531 13072 round_trippers.go:580] Etag: "E21324106EC1A64BCD5904B036F3CEBD43A93F85C2210DDC210EEC4DCD61D0A84AE5CFE8D6DA408B6FE48E7652BD93A193139BD05EFB00C0F9D60BBC92B834DB"
I0726 09:35:56.284541 13072 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 97ee87bd-f9c5-4d90-b48a-b0f60b73bdae
I0726 09:35:56.284548 13072 round_trippers.go:580] X-Varied-Accept: application/json;g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList,application/json
I0726 09:35:56.284972 13072 request.go:1188] Response Body: {"kind":"APIGroupDiscoveryList","apiVersion":"apidiscovery.k8s.io/v2beta1","metadata":{},"items":[{"metadata":{"name":"apiregistration.k8s.io","creationTimestamp":null},"versions":[{"version":"v1","resources":[{"resource":"apiservices","responseKind":{"group":"","version":"","kind":"APIService"},"scope":"Cluster","singularResource":"apiservice","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"categories":["api-extensions"],"subresources":[{"subresource":"status","responseKind":{"group":"","version":"","kind":"APIService"},"verbs":["get","patch","update"]}]}],"freshness":"Current"}]},{"metadata":{"name":"apps","creationTimestamp":null},"versions":[{"version":"v1","resources":[{"resource":"controllerrevisions","responseKind":{"group":"","version":"","kind":"ControllerRevision"},"scope":"Namespaced","singularResource":"controllerrevision","verbs":["create","delete","deletecollection","get","list","patch","update","watch"]},{"resource":"daemonsets","responseKind":{"group":""," [truncated 16883 chars]
It tells me nothing, except that it queries the Kubernetes API and can't find MayastorPool in it.
Your etcd pods are all pending, could you check why? If you don't have a default storage class then you'd have to specify one or use manual.
I resorted to:
kubectl --kubeconfig ./kubeconfig patch storageclass mayastor -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
This led me to reports of tainted nodes. Let's see if I can reproduce it for you.
Hmm, if I understand that line correctly, that won't help: you can't have Mayastor provide etcd storage for Mayastor itself.
It either has to come from another storage class (for example, if you're on the cloud), or "manual", or even openebs localpv. Example of how to set the storage class: helm install ... --set="etcd.persistence.storageClass=manual,loki-stack.loki.persistence.storageClassName=manual"
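Putting those --set flags into a full install invocation might look like this. This is a sketch only: it assumes a StorageClass named "manual" already exists in the cluster, and reuses the chart repo and name that appear elsewhere in this thread.

```shell
# Sketch: install Mayastor while pointing its etcd and Loki dependencies
# at a pre-existing StorageClass named "manual" (an assumption; adjust to taste).
helm repo add mayastor https://openebs.github.io/mayastor-extensions/
helm install mayastor mayastor/mayastor -n mayastor --create-namespace \
  --set "etcd.persistence.storageClass=manual" \
  --set "loki-stack.loki.persistence.storageClassName=manual"
```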
The line picks the mayastor StorageClass and adds the is-default-class annotation on top of it.
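For reference, making a StorageClass the cluster default (and undoing it later) is just a matter of toggling that annotation; a sketch:

```shell
# Mark the "mayastor" StorageClass as the cluster default.
kubectl patch storageclass mayastor -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Undo it by flipping the annotation back to "false".
kubectl patch storageclass mayastor -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
```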
The cluster itself runs on Proxmox. See the following:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor
parameters:
  repl: '1'
  protocol: 'nvmf'
provisioner: io.openebs.csi-mayastor
volumeBindingMode: WaitForFirstConsumer
EDIT:
To be honest, I don't understand the correlation between DiskPool and StorageClass. Is it mandatory to provision a DiskPool for etcd to start up?
I'd like to set up storage manually and add my disks later.
Yeah so that's not what we want to do here, please undo that change for now.
For Mayastor volumes there's no correlation at all.
But Mayastor itself makes use of its own etcd cluster, as well as a Loki instance for log collection (useful for generating support bundles), and these two things need storage. We use third-party helm charts for this, which consume storage via a StorageClass!
And this is the storage class we need to give our helm chart when installing Mayastor, as by default it uses the default storage class, IIRC.
@Abhinandan-Purkait @avishnu I think we probably need to clarify this in the docs, if it's not already.
I come from Talos Linux and don't have a storage class defined by default. Referring to the official documentation, there is no such thing as a "default" storage class:
https://kubernetes.io/docs/concepts/storage/storage-classes
There might be a storage class included in most releases. Could you point out to me which storage class you are referring to?
@wibed The reason for the missing-CRD warning is that mayastor-operator-diskpool-5955fcd645-nr67v is not up and running, because it is waiting for the mayastor-etcd pods to come up. The DiskPool CRD is not part of the helm chart; it gets applied to the cluster by mayastor-operator-diskpool after startup.
Now, the reason the mayastor-etcd pods are pending is that etcd needs a storage provisioner other than Mayastor. Mayastor is dependent on mayastor-etcd and cannot provide storage to it on its own. You would need some other provisioner, as @tiagolobocastro pointed out. If you don't have one, you can install one, for example https://openebs.github.io/dynamic-localpv-provisioner/, and use that as storage for etcd.
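As a sketch of what such a localpv hostpath StorageClass can look like (the class name and host path here are illustrative; the linked provisioner docs are authoritative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath            # illustrative name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local" # illustrative host path
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```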
I have assigned the "meta partition" so the CSI driver recognizes the disk as a candidate for the disk pool. I named it "test-device", as defined in the StorageClass:
Name: openebs-device-sc
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"openebs-device-sc"},"parameters":{"devname":"test-device"},"provisioner":"device.csi.openebs.io","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: device.csi.openebs.io
Parameters: devname=test-device
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
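For readability, the last-applied-configuration annotation above corresponds to this manifest:

```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
parameters:
  devname: test-device
provisioner: device.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
```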
A describe on the PVC results in a message saying it is waiting for etcd to be scheduled:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForPodScheduled 59s (x3123 over 13h) persistentvolume-controller waiting for pod mayastor-etcd-0 to be scheduled
Yet etcd's events complain about not enough free storage:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m23s (x159 over 13h) default-scheduler 0/4 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) did not have enough free storage. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling..
From my perspective, the claim waits for etcd to be scheduled, and etcd waits for available storage, which is predetermined by the PVC.
EDIT 1: @Abhinandan-Purkait
@wibed The reason for the CRD's missing warning is that the mayastor-operator-diskpool-5955fcd645-nr67v is not up and running because its waiting for the mayastor-etcd pods to come up. The DiskPool CRD is not a part of the helm chart, it gets applied to cluster by mayastor-operator-diskpool after startup.
I first jumped to the conclusion that the diskpool operator was at fault for not recognizing the available storage, but you mentioned it waits for etcd to come up. I must be missing an essential step somewhere in the midst of it.
EDIT 2: After resetting the whole node, all the services came up as expected, yet I do not have any of the CRDs installed.
I do have openebs.device-localpv running as the provisioner, yet I am missing the mayastor StorageClass, the DiskPool, and the CRDs of the named resources.
I tried creating and using the mayastor StorageClass as per the documentation, but could not bind any volume to storage. After redirecting the StorageClass to the openebs provisioner, it worked fine.
Can you please send the output of kubectl get pods -n mayastor && kubectl get pvc -n mayastor?
kubectl --kubeconfig ./kubeconfig get pods -n mayastor
NAME READY STATUS RESTARTS AGE
mayastor-agent-core-7c45b7b6c4-sqcg4 2/2 Running 0 20h
mayastor-api-rest-754644d4cb-7vdtm 1/1 Running 0 20h
mayastor-etcd-0 1/1 Running 0 20h
mayastor-etcd-1 1/1 Running 0 20h
mayastor-etcd-2 1/1 Running 0 20h
mayastor-loki-0 1/1 Running 0 20h
mayastor-obs-callhome-c76f65bd9-dcvqx 2/2 Running 0 20h
mayastor-operator-diskpool-5955fcd645-hxdf2 1/1 Running 0 20h
kubectl --kubeconfig ./kubeconfig get pvc -n mayastor
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-mayastor-etcd-0 Bound pvc-627b9a2d-852e-44d9-9b53-23c83292d181 2Gi RWO openebs-device-sc 20h
data-mayastor-etcd-1 Bound pvc-24efe7a7-e95c-4349-bc19-cbc0927f0ec1 2Gi RWO openebs-device-sc 20h
data-mayastor-etcd-2 Bound pvc-2f297562-2fc9-4201-a8ac-1c4214868ca4 2Gi RWO openebs-device-sc 20h
storage-mayastor-loki-0 Bound pvc-e77e08c0-3987-455c-b107-185a5cb03b85 10Gi RWO openebs-device-sc 20h
I don't see the Daemonset Pods? Are they not running?
kubectl get ds -n mayastor
Nope. After querying it, I get:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 45m daemonset-controller Error creating: pods "mayastor-agent-ha-node-m2v9x" is forbidden: violates PodSecurity "baseline:latest": host namespaces (hostNetwork=true), hostPath volumes (volumes "device", "sys", "run-udev", "plugin-dir"), hostPort (container "agent-ha-node" uses hostPort 50053), privileged (container "agent-ha-node" must not set securityContext.privileged=true)
I believe you need some configuration on talos for running privileged pods.
https://github.com/openebs/mayastor/issues/1152
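One way to express that on any cluster is to label the namespace for the Pod Security admission controller, matching the violations listed in the event above; a sketch:

```yaml
# Relax Pod Security admission for the mayastor namespace so the privileged
# daemonset pods (hostNetwork, hostPath volumes, privileged containers) are admitted.
apiVersion: v1
kind: Namespace
metadata:
  name: mayastor
  labels:
    pod-security.kubernetes.io/enforce: privileged
```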
I had to set pod-security to privileged:
kubectl --kubeconfig ./kubeconfig patch namespace mayastor -p '{"metadata": {"labels":{"pod-security.kubernetes.io/enforce":"privileged"}}}'
mayastor mayastor-agent-core-7c45b7b6c4-n5nzg 2/2 Running 0 18m
mayastor mayastor-agent-ha-node-5j589 1/1 Running 0 60s
mayastor mayastor-agent-ha-node-cchxj 1/1 Running 0 60s
mayastor mayastor-agent-ha-node-tpmmm 1/1 Running 0 60s
mayastor mayastor-api-rest-754644d4cb-fzmbp 1/1 Running 0 18m
mayastor mayastor-csi-node-q4lgj 2/2 Running 0 77s
mayastor mayastor-csi-node-vnn48 2/2 Running 0 77s
mayastor mayastor-csi-node-x4lc9 2/2 Running 0 77s
mayastor mayastor-etcd-0 1/1 Running 0 18m
mayastor mayastor-etcd-1 1/1 Running 0 18m
mayastor mayastor-etcd-2 1/1 Running 0 18m
mayastor mayastor-io-engine-2d49m 0/2 Pending 0 86s
mayastor mayastor-io-engine-d55w4 0/2 Pending 0 86s
mayastor mayastor-io-engine-wqhbh 0/2 Pending 0 86s
mayastor mayastor-loki-0 1/1 Running 0 18m
mayastor mayastor-obs-callhome-c76f65bd9-xqd5l 2/2 Running 0 18m
mayastor mayastor-operator-diskpool-5955fcd645-h94w5 1/1 Running 0 18m
mayastor mayastor-promtail-2v4jf 1/1 Running 0 101s
mayastor mayastor-promtail-7wb8h 1/1 Running 0 101s
mayastor mayastor-promtail-kc5qp 1/1 Running 0 101s
If there's a Mayastor CSI, does it allocate storage itself? Because after a fresh install I have:
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mayastor data-mayastor-etcd-0 Bound pvc-b943cce9-1ac6-496e-afb8-b50fa607c85a 2Gi RWO openebs-device-sc 19m
mayastor data-mayastor-etcd-1 Bound pvc-75ea4948-d062-48e7-9183-eb0b387d9999 2Gi RWO openebs-device-sc 19m
mayastor data-mayastor-etcd-2 Bound pvc-f53e6d91-1ea3-412d-bb78-2067d4006bcc 2Gi RWO openebs-device-sc 19m
mayastor storage-mayastor-loki-0 Bound pvc-7a19d296-e2e0-4087-a3b5-83e579e674fe 10Gi RWO openebs-device-sc 19m
Mayastor's dependent components are managed by the openebs CSI driver.
No, Mayastor cannot provide storage to its own components like etcd and loki. For that you would need a different provisioner, such as openebs-local-device.
Can you please describe one of those mayastor-io-engine- pods to see why they are pending?
They didn't have enough resources.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-d779cc7ff-b8hrj 0/1 Completed 0 22h
kube-system coredns-d779cc7ff-fqdwx 1/1 Running 0 16m
kube-system coredns-d779cc7ff-fvd8f 0/1 Completed 0 22h
kube-system coredns-d779cc7ff-p65lx 1/1 Running 0 15m
kube-system kube-apiserver-talos-j8w-2d2 1/1 Running 0 22h
kube-system kube-controller-manager-talos-j8w-2d2 1/1 Running 1 (2d13h ago) 22h
kube-system kube-flannel-78kkl 1/1 Running 1 (15m ago) 22h
kube-system kube-flannel-9mbvg 1/1 Running 1 (15m ago) 22h
kube-system kube-flannel-j6fd9 1/1 Running 0 22h
kube-system kube-flannel-pzlsl 1/1 Running 1 (16m ago) 22h
kube-system kube-proxy-48758 1/1 Running 0 16m
kube-system kube-proxy-968v5 1/1 Running 0 22h
kube-system kube-proxy-wntqw 1/1 Running 0 15m
kube-system kube-proxy-xjzxm 1/1 Running 0 16m
kube-system kube-scheduler-talos-j8w-2d2 1/1 Running 2 (22s ago) 22h
kube-system openebs-device-controller-0 2/2 Running 3 (24s ago) 54m
kube-system openebs-device-node-2fx5h 2/2 Running 2 (15m ago) 54m
kube-system openebs-device-node-l98rp 2/2 Running 2 (15m ago) 54m
kube-system openebs-device-node-zmr49 2/2 Running 2 (16m ago) 54m
mayastor mayastor-agent-core-7c45b7b6c4-67xj4 2/2 Running 0 2m16s
mayastor mayastor-agent-ha-node-4hlbn 1/1 Running 0 2m15s
mayastor mayastor-agent-ha-node-l5v5p 1/1 Running 0 2m15s
mayastor mayastor-agent-ha-node-rmfq6 1/1 Running 0 2m15s
mayastor mayastor-api-rest-754644d4cb-9zsjm 1/1 Running 0 2m16s
mayastor mayastor-csi-controller-5bbb99bf6-k2f4m 5/5 Running 0 2m16s
mayastor mayastor-csi-node-c928c 2/2 Running 0 2m15s
mayastor mayastor-csi-node-pm7r5 2/2 Running 0 2m15s
mayastor mayastor-csi-node-qrbft 2/2 Running 0 2m15s
mayastor mayastor-etcd-0 1/1 Running 0 2m14s
mayastor mayastor-etcd-1 1/1 Running 0 2m8s
mayastor mayastor-etcd-2 1/1 Running 0 2m14s
mayastor mayastor-io-engine-2vmvw 2/2 Running 0 2m14s
mayastor mayastor-io-engine-t7lf5 2/2 Running 0 2m15s
mayastor mayastor-io-engine-vfwsf 2/2 Running 0 2m15s
mayastor mayastor-loki-0 1/1 Running 0 2m13s
mayastor mayastor-obs-callhome-c76f65bd9-qvx76 2/2 Running 0 2m15s
mayastor mayastor-operator-diskpool-5955fcd645-wpfr6 1/1 Running 0 2m15s
mayastor mayastor-promtail-dhvpz 1/1 Running 0 105s
mayastor mayastor-promtail-gkfch 1/1 Running 0 105s
mayastor mayastor-promtail-kgnds 1/1 Running 0 104s
metallb-system controller-595f88d88f-2lfgp 0/1 Completed 0 41m
metallb-system controller-595f88d88f-6qrnd 1/1 Running 0 16m
metallb-system controller-595f88d88f-9q62m 0/1 Completed 0 16m
metallb-system speaker-8lhn8 1/1 Running 1 (14m ago) 14m
metallb-system speaker-kzfdg 1/1 Running 1 (14m ago) 14m
metallb-system speaker-llhnq 1/1 Running 0 41m
metallb-system speaker-zqk4b 1/1 Running 1 (14m ago) 14m
Now they're running.
Great. Now your pools should have been created?
kubectl get dsp -n mayastor
sadly not
error: the server doesn't have a resource type "dsp"
Can you send the logs for mayastor-operator-diskpool-5955fcd645-wpfr6?
I reset everything again, yet there is still no resource 'dsp' to be found.
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m40s default-scheduler Successfully assigned mayastor/mayastor-operator-diskpool-5955fcd645-qtptm to talos-ubo-3cm
Normal Pulling 2m39s kubelet Pulling image "busybox:latest"
Normal Pulled 2m38s kubelet Successfully pulled image "busybox:latest" in 1.15706901s (1.157097898s including waiting)
Normal Created 2m38s kubelet Created container agent-core-grpc-probe
Normal Started 2m38s kubelet Started container agent-core-grpc-probe
Normal Pulling 85s kubelet Pulling image "busybox:latest"
Normal Pulled 84s kubelet Successfully pulled image "busybox:latest" in 1.123108102s (1.123152462s including waiting)
Normal Created 84s kubelet Created container etcd-probe
Normal Started 84s kubelet Started container etcd-probe
Normal Pulling 83s kubelet Pulling image "docker.io/openebs/mayastor-operator-diskpool:v2.3.0"
Normal Pulled 77s kubelet Successfully pulled image "docker.io/openebs/mayastor-operator-diskpool:v2.3.0" in 5.988949979s (5.988972523s including waiting)
Normal Created 77s kubelet Created container operator-diskpool
Normal Started 77s kubelet Started container operator-diskpool
These are the events. Can you send kubectl logs mayastor-operator-diskpool-5955fcd645-wpfr6 -n mayastor?
Defaulted container "operator-diskpool" out of: operator-diskpool, agent-core-grpc-probe (init), etcd-probe (init)
K8S Operator (operator-diskpool) revision 89e839315f62 (v2.3.0+0)
2023-07-31T07:21:47.157486Z INFO operator_diskpool::diskpool::client: Replacing CRD: {
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": {
"name": "diskpools.openebs.io",
"resourceVersion": "435542"
},
"spec": {
"group": "openebs.io",
"names": {
"categories": [],
"kind": "DiskPool",
"plural": "diskpools",
"shortNames": [
"dsp"
],
"singular": "diskpool"
},
"scope": "Namespaced",
"versions": [
{
"additionalPrinterColumns": [
{
"description": "node the pool is on",
"jsonPath": ".spec.node",
"name": "node",
"type": "string"
},
{
"description": "dsp cr state",
"jsonPath": ".status.state",
"name": "state",
"type": "string"
},
{
"description": "Control plane pool status",
"jsonPath": ".status.pool_status",
"name": "pool_status",
"type": "string"
},
{
"description": "total bytes",
"format": "int64",
"jsonPath": ".status.capacity",
"name": "capacity",
"type": "integer"
},
{
"description": "used bytes",
"format": "int64",
"jsonPath": ".status.used",
"name": "used",
"type": "integer"
},
{
"description": "available bytes",
"format": "int64",
"jsonPath": ".status.available",
"name": "available",
"type": "integer"
}
],
"name": "v1alpha1",
"schema": {
"openAPIV3Schema": {
"description": "Auto-generated derived type for DiskPoolSpec via `CustomResource`",
"properties": {
"spec": {
"description": "The pool spec which contains the parameters we use when creating the pool",
"properties": {
"disks": {
"description": "The disk device the pool is located on",
"items": {
"type": "string"
},
"type": "array"
},
"node": {
"description": "The node the pool is placed on",
"type": "string"
}
},
"required": [
"disks",
"node"
],
"type": "object"
},
"status": {
"description": "Status of the pool which is driven and changed by the controller loop.",
"nullable": true,
"properties": {
"available": {
"description": "Available number of bytes.",
"format": "uint64",
"minimum": 0.0,
"type": "integer"
},
"capacity": {
"description": "Capacity as number of bytes.",
"format": "uint64",
"minimum": 0.0,
"type": "integer"
},
"cr_state": {
"default": "Creating",
"description": "The state of the pool.",
"enum": [
"Creating",
"Created",
"Terminating"
],
"type": "string"
},
"pool_status": {
"description": "Pool status from respective control plane object.",
"enum": [
"Unknown",
"Online",
"Degraded",
"Faulted"
],
"nullable": true,
"type": "string"
},
"state": {
"enum": [
"Creating",
"Created",
"Online",
"Unknown",
"Error"
],
"type": "string"
},
"used": {
"description": "Used number of bytes.",
"format": "uint64",
"minimum": 0.0,
"type": "integer"
}
},
"required": [
"available",
"capacity",
"state",
"used"
],
"type": "object"
}
},
"required": [
"spec"
],
"title": "DiskPool",
"type": "object"
}
},
"served": true,
"storage": true,
"subresources": {
"status": {}
}
}
]
}
}
at k8s/operators/src/pool/diskpool/client.rs:49
2023-07-31T07:21:47.170782Z INFO operator_diskpool: Created, crd: "diskpools.openebs.io"
at k8s/operators/src/pool/main.rs:655
2023-07-31T07:21:52.178275Z INFO operator_diskpool: Migration and Cleanup of CRs from MayastorPool to DiskPool complete
at k8s/operators/src/pool/main.rs:843
2023-07-31T07:21:52.178983Z INFO operator_diskpool: Starting DiskPool Operator (dsp) in namespace mayastor
at k8s/operators/src/pool/main.rs:708
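For reference, the CRD registered in the log above accepts DiskPool resources of roughly this shape (the pool name, node name, and disk path are illustrative):

```yaml
apiVersion: openebs.io/v1alpha1
kind: DiskPool
metadata:
  name: pool-on-node-1     # illustrative
  namespace: mayastor
spec:
  node: worker-node-1      # illustrative node name
  disks: ["/dev/sdb"]      # illustrative disk device
```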
Can you send kubectl get crd?
Nope, they're created! I fetched the ones from the laptop's Rancher Desktop instead of the remote cluster.
It certainly was the missing privileged flag.
Thank you for the effort! Awesome, we managed to resolve it!
Maybe you'll find something odd; for the record:
addresspools.metallb.io 2023-07-30T16:26:42Z
bfdprofiles.metallb.io 2023-07-30T16:26:42Z
bgpadvertisements.metallb.io 2023-07-30T16:26:42Z
bgppeers.metallb.io 2023-07-30T16:26:42Z
blockdeviceclaims.openebs.io 2023-07-28T10:07:30Z
blockdevices.openebs.io 2023-07-28T10:07:30Z
certificaterequests.cert-manager.io 2023-07-31T07:36:26Z
certificates.cert-manager.io 2023-07-31T07:36:26Z
challenges.acme.cert-manager.io 2023-07-31T07:36:26Z
clusterissuers.cert-manager.io 2023-07-31T07:36:26Z
communities.metallb.io 2023-07-30T16:26:42Z
devicenodes.local.openebs.io 2023-07-31T06:02:14Z
devicevolumes.local.openebs.io 2023-07-31T06:02:14Z
diskpools.openebs.io 2023-07-30T08:23:58Z
ingressroutes.traefik.containo.us 2023-07-31T07:36:26Z
ingressroutes.traefik.io 2023-07-31T07:36:26Z
ingressroutetcps.traefik.containo.us 2023-07-31T07:36:26Z
ingressroutetcps.traefik.io 2023-07-31T07:36:27Z
ingressrouteudps.traefik.containo.us 2023-07-31T07:36:27Z
ingressrouteudps.traefik.io 2023-07-31T07:36:27Z
ipaddresspools.metallb.io 2023-07-30T16:26:42Z
issuers.cert-manager.io 2023-07-31T07:36:27Z
jaegers.jaegertracing.io 2023-07-28T17:18:24Z
l2advertisements.metallb.io 2023-07-30T16:26:42Z
mayastorpools.openebs.io 2023-07-28T17:54:27Z
middlewares.traefik.containo.us 2023-07-31T07:36:27Z
middlewares.traefik.io 2023-07-31T07:36:27Z
middlewaretcps.traefik.containo.us 2023-07-31T07:36:27Z
middlewaretcps.traefik.io 2023-07-31T07:36:27Z
orders.acme.cert-manager.io 2023-07-31T07:36:27Z
serverstransports.traefik.containo.us 2023-07-31T07:36:27Z
serverstransports.traefik.io 2023-07-31T07:36:27Z
serverstransporttcps.traefik.io 2023-07-31T07:36:27Z
tlsoptions.traefik.containo.us 2023-07-31T07:36:27Z
tlsoptions.traefik.io 2023-07-31T07:36:27Z
tlsstores.traefik.containo.us 2023-07-31T07:36:27Z
tlsstores.traefik.io 2023-07-31T07:36:27Z
traefikservices.traefik.containo.us 2023-07-31T07:36:27Z
traefikservices.traefik.io 2023-07-31T07:36:27Z
volumesnapshotclasses.snapshot.storage.k8s.io 2023-07-28T17:18:23Z
volumesnapshotcontents.snapshot.storage.k8s.io 2023-07-28T17:18:23Z
volumesnapshots.snapshot.storage.k8s.io 2023-07-28T17:18:24Z
Great. Thanks for trying it out.
On a freshly set up cluster:
helm repo add mayastor https://openebs.github.io/mayastor-extensions/
helm install mayastor mayastor/mayastor -n mayastor --create-namespace --version 2.3.0
Output:
kubectl --kubeconfig ../kubeconfig get pods -n mayastor
Output:
kubectl --kubeconfig ../kubeconfig get dsp -n mayastor
Output:
kubectl --kubeconfig ../kubeconfig -n mayastor get msp
Output: