LINBIT / linstor-server

High Performance Software-Defined Block Storage for container, cloud and virtualisation. Fully integrated with Docker, Kubernetes, Openstack, Proxmox etc.
https://docs.linbit.com/docs/linstor-guide/
GNU General Public License v3.0

Some devices created as Inconsistent #268

Open kvaps opened 2 years ago

kvaps commented 2 years ago

Hi, I just hit this bug and I think I know how to reproduce it.

I have LINSTOR on three nodes, with 2x NVMe drives on each of them.

OS: Ubuntu 20.04.3 LTS
Kernel: 5.13.0-27-generic
DRBD version: 9.1.4 (api:2/proto:110-121)
LINSTOR version: 1.17.0

LINSTOR was installed using piraeus-operator.

I created three LVM pools:

linstor ps cdp lvm hf-kubevirt-01 /dev/nvme{0,1}n1 --pool-name data --storage-pool lvm
linstor ps cdp lvm hf-kubevirt-02 /dev/nvme{0,1}n1 --pool-name data --storage-pool lvm
linstor ps cdp lvm hf-kubevirt-03 /dev/nvme{0,1}n1 --pool-name data --storage-pool lvm
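
To double-check that the pools were registered on all three nodes, the plain storage-pool listing is enough (output layout may differ between client versions):

linstor storage-pool list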

Then I created 51 pods, each with a 10 GB volume.

48 volumes were provisioned without problems, but three of them got stuck in the Inconsistent state:

┊ pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 ┊ hf-kubevirt-01 ┊ 7009 ┊ Unused ┊ Ok    ┊ Inconsistent ┊ 2022-01-26 19:17:41 ┊
┊ pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 ┊ hf-kubevirt-02 ┊ 7009 ┊ Unused ┊ Ok    ┊ Inconsistent ┊ 2022-01-26 19:17:17 ┊
┊ pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 ┊ hf-kubevirt-03 ┊ 7009 ┊ Unused ┊ Ok    ┊   TieBreaker ┊ 2022-01-26 19:17:50 ┊
pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 role:Secondary
  disk:Inconsistent
  hf-kubevirt-02 role:Secondary
    peer-disk:Inconsistent
  hf-kubevirt-03 role:Secondary
    peer-disk:Diskless
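
For reference, the two snippets above are what the usual status commands print; with the resource name filled in they correspond roughly to:

linstor resource list -r pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
drbdadm status pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42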

I fixed these devices by running the following for each affected resource:

drbdadm primary --force pvc-a457b50f-4ad8-44b1-a014-25ff5d90f7b1
drbdadm secondary pvc-a457b50f-4ad8-44b1-a014-25ff5d90f7b1
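
A rough sketch for doing the same for every stuck replica in one pass (the awk pattern is only an assumption about the drbdadm status layout shown above, and forcing primary is only safe on freshly created, still-empty volumes like these):

#!/bin/sh
# Promote-and-demote every local DRBD device that is still Inconsistent,
# so it gets a current UUID and the peers start syncing from it.
for RES in $(drbdadm status | awk '$2 ~ /^role:/ {res=$1} /^  disk:Inconsistent/ {print res}'); do
  drbdadm primary --force "$RES"
  drbdadm secondary "$RES"
done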

The linstor-controller log is full of these messages, even for other successfully created resources:

19:16:58.125 [grizzly-http-server-0] INFO  LINSTOR/Controller - SYSTEM - New volume definition with number '0' of resource definition 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created.
19:18:11.853 [MainWorkerPool-1] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F188C3-00000-000008]
19:18:13.415 [MainWorkerPool-1] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F188C3-00000-000044]
linstor err show 61F188C3-00000-000008
ERROR REPORT 61F188C3-00000-000008

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            1.17.0
Build ID:                           7e646d83dbbadf1ec066e1bc8b29ae018aff1f66
Build time:                         2021-12-09T07:27:52+00:00
Error time:                         2022-01-26 19:18:11
Node:                               piraeus-op-cs-controller-554c4d7fc8-bqskt
Peer:                               RestClient(10.111.2.166; 'linstor-csi/')

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         ApiRcException
Class canonical name:               com.linbit.linstor.core.apicallhandler.response.ApiRcException
Generated at:                       Method 'autoPlaceInTransaction', Source file 'CtrlRscAutoPlaceApiCallHandler.java', Line #235

Error message:                      The resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count.

Error context:
    The resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count.

Asynchronous stage backtrace:

    Error has been observed at the following site(s):
        |_ checkpoint ⇢ Auto-place resource
    Stack trace:

Call backtrace:

    Method                                   Native Class:Line number
    autoPlaceInTransaction                   N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscAutoPlaceApiCallHandler:235

Suppressed exception 1 of 1:
===============
Category:                           RuntimeException
Class name:                         OnAssemblyException
Class canonical name:               reactor.core.publisher.FluxOnAssembly.OnAssemblyException
Generated at:                       Method 'autoPlaceInTransaction', Source file 'CtrlRscAutoPlaceApiCallHandler.java', Line #235

Error message:
Error has been observed at the following site(s):
    |_ checkpoint ⇢ Auto-place resource
Stack trace:

Error context:
    The resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count.

Call backtrace:

    Method                                   Native Class:Line number
    autoPlaceInTransaction                   N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscAutoPlaceApiCallHandler:235
    lambda$autoPlace$0                       N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscAutoPlaceApiCallHandler:146
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:147
    lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:75
    call                                     N      reactor.core.publisher.MonoCallable:91
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:126
    subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:8343
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2344
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:134
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.Flux:8357
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
    onNext                                   N      reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber:121
    complete                                 N      reactor.core.publisher.Operators$MonoSubscriber:1782
    onComplete                               N      reactor.core.publisher.MonoCollect$CollectSubscriber:152
    onComplete                               N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:252
    checkTerminated                          N      reactor.core.publisher.FluxFlatMap$FlatMapMain:838
    drainLoop                                N      reactor.core.publisher.FluxFlatMap$FlatMapMain:600
    drain                                    N      reactor.core.publisher.FluxFlatMap$FlatMapMain:580
    onComplete                               N      reactor.core.publisher.FluxFlatMap$FlatMapMain:457
    checkTerminated                          N      reactor.core.publisher.FluxFlatMap$FlatMapMain:838
    drainLoop                                N      reactor.core.publisher.FluxFlatMap$FlatMapMain:600
    innerComplete                            N      reactor.core.publisher.FluxFlatMap$FlatMapMain:909
    onComplete                               N      reactor.core.publisher.FluxFlatMap$FlatMapInner:1013
    onComplete                               N      reactor.core.publisher.FluxMap$MapSubscriber:136
    onComplete                               N      reactor.core.publisher.Operators$MultiSubscriptionSubscriber:2016
    onComplete                               N      reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber:78
    complete                                 N      reactor.core.publisher.FluxCreate$BaseSink:438
    drain                                    N      reactor.core.publisher.FluxCreate$BufferAsyncSink:784
    complete                                 N      reactor.core.publisher.FluxCreate$BufferAsyncSink:732
    drainLoop                                N      reactor.core.publisher.FluxCreate$SerializedSink:239
    drain                                    N      reactor.core.publisher.FluxCreate$SerializedSink:205
    complete                                 N      reactor.core.publisher.FluxCreate$SerializedSink:196
    apiCallComplete                          N      com.linbit.linstor.netcom.TcpConnectorPeer:455
    handleComplete                           N      com.linbit.linstor.proto.CommonMessageProcessor:363
    handleDataMessage                        N      com.linbit.linstor.proto.CommonMessageProcessor:287
    doProcessInOrderMessage                  N      com.linbit.linstor.proto.CommonMessageProcessor:235
    lambda$doProcessMessage$3                N      com.linbit.linstor.proto.CommonMessageProcessor:220
    subscribe                                N      reactor.core.publisher.FluxDefer:46
    subscribe                                N      reactor.core.publisher.Flux:8357
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:418
    drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:414
    drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:679
    onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:243
    drainFused                               N      reactor.core.publisher.UnicastProcessor:286
    drain                                    N      reactor.core.publisher.UnicastProcessor:329
    onNext                                   N      reactor.core.publisher.UnicastProcessor:408
    next                                     N      reactor.core.publisher.FluxCreate$IgnoreSink:618
    next                                     N      reactor.core.publisher.FluxCreate$SerializedSink:153
    processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:373
    doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:218
    lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:164
    onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:177
    runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:439
    run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:526
    call                                     N      reactor.core.scheduler.WorkerTask:84
    call                                     N      reactor.core.scheduler.WorkerTask:37
    run                                      N      java.util.concurrent.FutureTask:264
    run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
    runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1128
    run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:628
    run                                      N      java.lang.Thread:829

END OF ERROR REPORT.

csi-controller log:

csi-provisioner
I0126 19:16:56.005108       1 controller.go:1279] provision "default/my-pvc10" class "piraeus-ssd": started
I0126 19:16:56.005225       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-pvc10", UID:"b442c3dc-d7b4-4ba3-9755-d47351bf4d42", APIVersion:"v1", ResourceVersion:"7196056", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/my-pvc10"
W0126 19:17:56.005654       1 controller.go:933] Retrying syncing claim "b442c3dc-d7b4-4ba3-9755-d47351bf4d42", failure 0
E0126 19:17:56.005678       1 controller.go:956] error syncing claim "b442c3dc-d7b4-4ba3-9755-d47351bf4d42": failed to provision volume with StorageClass "piraeus-ssd": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I0126 19:17:56.005705       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-pvc10", UID:"b442c3dc-d7b4-4ba3-9755-d47351bf4d42", APIVersion:"v1", ResourceVersion:"7196056", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "piraeus-ssd": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I0126 19:17:57.006240       1 controller.go:1279] provision "default/my-pvc10" class "piraeus-ssd": started
I0126 19:17:57.006347       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-pvc10", UID:"b442c3dc-d7b4-4ba3-9755-d47351bf4d42", APIVersion:"v1", ResourceVersion:"7196056", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/my-pvc10"
I0126 19:18:25.116045       1 controller.go:1384] provision "default/my-pvc10" class "piraeus-ssd": volume "pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42" provisioned
I0126 19:18:25.116068       1 controller.go:1397] provision "default/my-pvc10" class "piraeus-ssd": succeeded
I0126 19:18:25.118726       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-pvc10", UID:"b442c3dc-d7b4-4ba3-9755-d47351bf4d42", APIVersion:"v1", ResourceVersion:"7196056", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
linstor-csi-plugin
time="2022-01-26T19:16:56Z" level=info msg="determined volume id for volume named 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42'" linstorCSIComponent=driver nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com version= volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:16:56Z" level=info msg="reconcile resource definition for volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:16:57Z" level=info msg="reconcile volume definition for volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:16:58Z" level=info msg="reconcile resource placement for volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:17:56Z" level=info msg="deleting volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:17:56Z" level=error msg="failed to clean up volume" error="context canceled" linstorCSIComponent=driver nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com version= volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:17:56Z" level=error msg="method failed" error="rpc error: code = Internal desc = CreateVolume failed for pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: failed to autoplace constraint replicas: context canceled" linstorCSIComponent=driver method=/csi.v1.Controller/CreateVolume nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com req="name:\"pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42\" capacity_range: volume_capabilities: access_mode: > parameters: parameters: parameters: parameters: parameters: accessibility_requirements: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: > preferred: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: segments: > > " resp="" version=
time="2022-01-26T19:17:57Z" level=info msg="determined volume id for volume named 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42'" linstorCSIComponent=driver nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com version= volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:17:57Z" level=info msg="reconcile resource definition for volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:17:57Z" level=info msg="reconcile volume definition for volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:17:57Z" level=info msg="reconcile resource placement for volume" linstorCSIComponent=client volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
time="2022-01-26T19:18:26Z" level=error msg="method failed" error="rpc error: code = NotFound desc = ControllerPublishVolume failed for pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 on node hf-kubevirt-02: node CONNECTED" linstorCSIComponent=driver method=/csi.v1.Controller/ControllerPublishVolume nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com req="volume_id:\"pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42\" node_id:\"hf-kubevirt-02\" volume_capability: access_mode: > volume_context: volume_context: volume_context: volume_context: volume_context: " resp="" version=
time="2022-01-26T19:18:27Z" level=error msg="method failed" error="rpc error: code = NotFound desc = ControllerPublishVolume failed for pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 on node hf-kubevirt-02: node CONNECTED" linstorCSIComponent=driver method=/csi.v1.Controller/ControllerPublishVolume nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com req="volume_id:\"pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42\" node_id:\"hf-kubevirt-02\" volume_capability: access_mode: > volume_context: volume_context: volume_context: volume_context: volume_context: " resp="" version=
time="2022-01-26T19:18:27Z" level=error msg="method failed" error="rpc error: code = NotFound desc = ControllerPublishVolume failed for pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 on node hf-kubevirt-02: node CONNECTED" linstorCSIComponent=driver method=/csi.v1.Controller/ControllerPublishVolume nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com req="volume_id:\"pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42\" node_id:\"hf-kubevirt-02\" volume_capability: access_mode: > volume_context: volume_context: volume_context: volume_context: volume_context: " resp="" version=
time="2022-01-26T19:18:28Z" level=error msg="method failed" error="rpc error: code = NotFound desc = ControllerPublishVolume failed for pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 on node hf-kubevirt-02: node CONNECTED" linstorCSIComponent=driver method=/csi.v1.Controller/ControllerPublishVolume nodeID=hf-kubevirt-03 provisioner=linstor.csi.linbit.com req="volume_id:\"pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42\" node_id:\"hf-kubevirt-02\" volume_capability: access_mode: > volume_context: volume_context: volume_context: volume_context: volume_context: " resp="" version=
time="2022-01-26T19:18:49Z" level=info msg="attaching volume" linstorCSIComponent=client targetNode=hf-kubevirt-02 volume=pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42

linstor-satellite logs:

hf-kubevirt-01
19:17:38.451 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-01'.
19:17:38.451 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-02'.
19:17:38.451 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-03'.
19:18:18.376 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:18.376 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:18.376 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
19:18:41.460 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:41.460 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:41.460 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
19:18:54.624 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:54.624 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:54.624 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
hf-kubevirt-02
19:17:09.606 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-02'.
19:17:17.226 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Primary Resource pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
19:17:17.226 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Primary bool set on Resource pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42
19:17:25.754 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-02'.
19:17:38.443 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-01'.
19:17:38.443 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-03'.
19:17:38.443 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:24.054 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-01'.
19:18:24.054 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-02'.
19:18:24.054 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-03'.
19:18:29.412 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:29.412 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:29.412 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
19:18:50.083 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:50.083 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:50.083 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
hf-kubevirt-03
19:17:40.738 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-01'.
19:17:40.738 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-02'.
19:17:40.738 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' created for node 'hf-kubevirt-03'.
19:18:18.942 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:18.942 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:18.942 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
19:18:39.199 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:39.199 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:39.199 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.
19:18:50.855 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-01'.
19:18:50.855 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-02'.
19:18:50.855 [MainWorkerPool-1] INFO  LINSTOR/Satellite - SYSTEM - Resource 'pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42' updated for node 'hf-kubevirt-03'.

dmesg from nodes:

hf-kubevirt-01
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Starting worker thread (from drbdsetup [874872])
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Starting sender thread (from drbdsetup [874876])
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Starting sender thread (from drbdsetup [874879])
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: meta-data IO uses: blk-bio
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: disk( Diskless -> Attaching )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: Maximum number of peer devices = 7
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Method to ensure write ordering: flush
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: discard_zeroes_data=0 and discard_zeroes_if_aligned=no: disabling discards
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: drbd_bm_resize called with capacity == 20975152
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: resync bitmap: bits=2621894 words=286776 pages=561
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: size = 10 GB (10487576 KB)
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: recounting of set bits took additional 4ms
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: disk( Attaching -> Inconsistent ) quorum( no -> yes )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: attached to current UUID: 0000000000000004
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: conn( StandAlone -> Unconnected )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Starting receiver thread (from drbd_w_pvc-b442 [874873])
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: conn( Unconnected -> Connecting )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: conn( StandAlone -> Unconnected )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Starting receiver thread (from drbd_w_pvc-b442 [874873])
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: conn( Unconnected -> Connecting )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Handshake to peer 0 successful: Agreed network protocol version 121
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES.
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Peer authenticated using 20 bytes HMAC
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Starting ack_recv thread (from drbd_r_pvc-b442 [874889])
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Preparing remote state change 560714177
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: discard_zeroes_data=0 and discard_zeroes_if_aligned=no: disabling discards
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: drbd_sync_handshake:
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:24
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:24
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: uuid_compare()=no-sync by rule=just-created-both
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Committing remote state change 560714177 (primary_nodes=0)
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jan 26 20:17:40 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: pdsk( DUnknown -> Inconsistent ) repl( Off -> Established )
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Handshake to peer 2 successful: Agreed network protocol version 121
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES.
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Peer authenticated using 20 bytes HMAC
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Starting ack_recv thread (from drbd_r_pvc-b442 [874891])
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Preparing cluster-wide state change 3520286370 (1->2 499/146)
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: discard_zeroes_data=0 and discard_zeroes_if_aligned=no: disabling discards
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: State change 3520286370: primary_nodes=0, weak_nodes=0
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Committing cluster-wide state change 3520286370 (48ms)
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-03: pdsk( DUnknown -> Diskless ) repl( Off -> Established )
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Preparing remote state change 3368284658
Jan 26 20:17:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Committing remote state change 3368284658 (primary_nodes=0)
Jan 26 20:18:18 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:18:42 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:18:55 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Preparing cluster-wide state change 1673987859 (1->-1 7683/4609)
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: State change 1673987859: primary_nodes=2, weak_nodes=FFFFFFFFFFFFFFF8
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Committing cluster-wide state change 1673987859 (0ms)
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: role( Secondary -> Primary )
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: disk( Inconsistent -> UpToDate )
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: size = 10 GB (10487576 KB)
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Forced to consider local data as UpToDate!
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: new current UUID: 651675A9709469C3 weak: FFFFFFFFFFFFFFFC
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: drbd_sync_handshake:
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: self 651675A9709469C3:182F376DB5887040:0000000000000000:0000000000000000 bits:0 flags:20
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: peer 0000000000000004:0000000000000000:182F376DB5887040:0000000000000000 bits:2621894 flags:4
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: uuid_compare()=source-set-bitmap by rule=just-created-peer
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: Setting and writing one bitmap slot, after drbd_sync_handshake
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: repl( Established -> WFBitMapS )
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: repl( WFBitMapS -> SyncSource )
Jan 26 20:33:25 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: Began resync as SyncSource (will sync 10487576 KB [2621894 bits set]).
Jan 26 20:33:31 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: role( Primary -> Secondary )
Jan 26 20:33:31 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: FIXME drbd_a_pvc-b442[875018] op clear, bitmap locked for 'demote' by [874873]
Jan 26 20:33:51 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Preparing remote state change 3281855738
Jan 26 20:33:51 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Committing remote state change 3281855738 (primary_nodes=1)
Jan 26 20:33:51 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: peer( Secondary -> Primary )
Jan 26 20:35:23 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: updated UUIDs 651675A9709469C2:0000000000000000:0000000000000000:0000000000000000
Jan 26 20:35:23 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: Resync done (total 117 sec; paused 0 sec; 89636 K/sec)
Jan 26 20:35:23 hf-kubevirt-01 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: pdsk( Inconsistent -> UpToDate ) repl( SyncSource -> Established )
hf-kubevirt-02
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Starting worker thread (from drbdsetup [441422])
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: meta-data IO uses: blk-bio
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: disk( Diskless -> Attaching )
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: Maximum number of peer devices = 7
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Method to ensure write ordering: flush
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: discard_zeroes_data=0 and discard_zeroes_if_aligned=no: disabling discards
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: Adjusting my ra_pages to backing device's (32 -> 64)
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: drbd_bm_resize called with capacity == 20975152
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: resync bitmap: bits=2621894 words=286776 pages=561
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: size = 10 GB (10487576 KB)
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: recounting of set bits took additional 0ms
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: disk( Attaching -> Inconsistent )
Jan 26 20:17:10 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: attached to current UUID: 0000000000000004
Jan 26 20:17:18 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:17:27 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:17:38 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Starting sender thread (from drbdsetup [444875])
Jan 26 20:17:38 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Starting sender thread (from drbdsetup [444877])
Jan 26 20:17:38 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:17:39 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: conn( StandAlone -> Unconnected )
Jan 26 20:17:39 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Starting receiver thread (from drbd_w_pvc-b442 [441423])
Jan 26 20:17:39 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: conn( Unconnected -> Connecting )
Jan 26 20:17:39 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: conn( StandAlone -> Unconnected )
Jan 26 20:17:39 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Starting receiver thread (from drbd_w_pvc-b442 [441423])
Jan 26 20:17:39 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: conn( Unconnected -> Connecting )
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Handshake to peer 1 successful: Agreed network protocol version 121
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES.
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Peer authenticated using 20 bytes HMAC
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Starting ack_recv thread (from drbd_r_pvc-b442 [444884])
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Preparing cluster-wide state change 560714177 (0->1 499/146)
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: discard_zeroes_data=0 and discard_zeroes_if_aligned=no: disabling discards
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: drbd_sync_handshake:
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:24
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:24
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: uuid_compare()=no-sync by rule=just-created-both
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: State change 560714177: primary_nodes=0, weak_nodes=0
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Committing cluster-wide state change 560714177 (36ms)
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jan 26 20:17:40 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: pdsk( DUnknown -> Inconsistent ) repl( Off -> Established )
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Handshake to peer 2 successful: Agreed network protocol version 121
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES.
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Peer authenticated using 20 bytes HMAC
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: Starting ack_recv thread (from drbd_r_pvc-b442 [444886])
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Preparing remote state change 3520286370
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Committing remote state change 3520286370 (primary_nodes=0)
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Preparing cluster-wide state change 3368284658 (0->2 499/146)
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: discard_zeroes_data=0 and discard_zeroes_if_aligned=no: disabling discards
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: State change 3368284658: primary_nodes=0, weak_nodes=0
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Committing cluster-wide state change 3368284658 (32ms)
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-03: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jan 26 20:17:42 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-03: pdsk( DUnknown -> Diskless ) repl( Off -> Established )
Jan 26 20:18:26 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:18:30 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:18:51 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: rs_discard_granularity feature disabled
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Preparing remote state change 1673987859
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Committing remote state change 1673987859 (primary_nodes=2)
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: peer( Secondary -> Primary )
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: pdsk( Inconsistent -> UpToDate )
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: size = 10 GB (10487576 KB)
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: drbd_sync_handshake:
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: self 0000000000000004:0000000000000000:182F376DB5887040:0000000000000000 bits:0 flags:4
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: peer 651675A9709469C3:182F376DB5887040:0000000000000000:0000000000000000 bits:0 flags:1020
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: uuid_compare()=target-set-bitmap by rule=just-created-self
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: Setting and writing the whole bitmap, fresh node
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: repl( Established -> WFBitMapT )
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: setting UUIDs to 182F376DB5887040:0000000000000000:182F376DB5887040:0000000000000000
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-03: resync-susp( no -> connection dependency )
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: repl( WFBitMapT -> SyncTarget )
Jan 26 20:33:25 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: Began resync as SyncTarget (will sync 10487576 KB [2621894 bits set]).
Jan 26 20:33:31 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: peer( Primary -> Secondary )
Jan 26 20:33:51 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Preparing cluster-wide state change 3281855738 (0->-1 3/1)
Jan 26 20:33:51 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: State change 3281855738: primary_nodes=1, weak_nodes=FFFFFFFFFFFFFFF8
Jan 26 20:33:51 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Committing cluster-wide state change 3281855738 (0ms)
Jan 26 20:33:51 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: role( Secondary -> Primary )
Jan 26 20:35:23 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: Resync done (total 117 sec; paused 0 sec; 89636 K/sec)
Jan 26 20:35:23 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: updated UUIDs 651675A9709469C3:0000000000000000:182F376DB5887040:0000000000000000
Jan 26 20:35:23 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: disk( Inconsistent -> UpToDate )
Jan 26 20:35:23 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-03: resync-susp( connection dependency -> no )
Jan 26 20:35:23 hf-kubevirt-02 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: repl( SyncTarget -> Established )
hf-kubevirt-03
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42: Starting worker thread (from drbdsetup [498614])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Starting sender thread (from drbdsetup [498618])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Starting sender thread (from drbdsetup [498621])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: conn( StandAlone -> Unconnected )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Starting receiver thread (from drbd_w_pvc-b442 [498615])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: conn( Unconnected -> Connecting )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: conn( StandAlone -> Unconnected )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Starting receiver thread (from drbd_w_pvc-b442 [498615])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: conn( Unconnected -> Connecting )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Handshake to peer 0 successful: Agreed network protocol version 121
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES.
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Peer authenticated using 20 bytes HMAC
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Starting ack_recv thread (from drbd_r_pvc-b442 [498628])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Handshake to peer 1 successful: Agreed network protocol version 121
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Feature flags enabled on protocol level: 0xf TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES.
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Peer authenticated using 20 bytes HMAC
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Starting ack_recv thread (from drbd_r_pvc-b442 [498626])
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Preparing remote state change 3520286370
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: size = 10 GB (10487576 KB)
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Committing remote state change 3520286370 (primary_nodes=0)
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: pdsk( DUnknown -> Inconsistent ) repl( Off -> Established )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Preparing remote state change 3368284658
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Committing remote state change 3368284658 (primary_nodes=0)
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: conn( Connecting -> Connected ) peer( Unknown -> Secondary )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: quorum( no -> yes )
Jan 26 20:17:42 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: pdsk( DUnknown -> Inconsistent ) repl( Off -> Established )
Jan 26 20:33:25 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Preparing remote state change 1673987859
Jan 26 20:33:25 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: Committing remote state change 1673987859 (primary_nodes=2)
Jan 26 20:33:25 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: peer( Secondary -> Primary )
Jan 26 20:33:25 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-01: pdsk( Inconsistent -> UpToDate )
Jan 26 20:33:25 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009: receiver updated UUIDs to exposed data uuid: 651675A9709469C3
Jan 26 20:33:25 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: resync-susp( no -> peer )
Jan 26 20:33:31 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-01: peer( Primary -> Secondary )
Jan 26 20:33:51 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Preparing remote state change 3281855738
Jan 26 20:33:51 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: Committing remote state change 3281855738 (primary_nodes=1)
Jan 26 20:33:51 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42 hf-kubevirt-02: peer( Secondary -> Primary )
Jan 26 20:35:23 hf-kubevirt-03 kernel: drbd pvc-b442c3dc-d7b4-4ba3-9755-d47351bf4d42/0 drbd1009 hf-kubevirt-02: pdsk( Inconsistent -> UpToDate ) resync-susp( peer -> no )
kvaps commented 2 years ago

To reproduce:

#!/bin/sh
kubectl delete sc piraeus-ssd

for INSTANCE in $(seq 1 100); do
kubectl create -f- <<EOT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc${INSTANCE}
  labels:
    app: "test"
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: piraeus-ssd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod${INSTANCE}
  labels:
    app: "test"
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test
          topologyKey: "kubernetes.io/hostname"
  containers:
    - name: my-container
      image: alpine:3.14
      imagePullPolicy: IfNotPresent
      command:
        - sleep
        - infinity
      volumeDevices:
        - devicePath: /dev/xvda
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc${INSTANCE}
  terminationGracePeriodSeconds: 0
EOT
done

kubectl create -f- <<EOT
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-ssd
parameters:
  autoPlace: "2"
  storagePool: lvm
provisioner: linstor.csi.linbit.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOT
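
After the run, a quick way to spot replicas stuck in the Inconsistent state (a minimal check, assuming the linstor client is reachable from the current shell):

linstor resource list | grep -i inconsistent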
kvaps commented 2 years ago

Or even:

#!/bin/sh
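# Annotation: same approach without pods - create the PVCs first, then recreate the
# StorageClass with Immediate binding so all volumes provision at once.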
kubectl delete sc piraeus-ssd

for INSTANCE in $(seq 1 100); do
kubectl create -f- <<EOT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc${INSTANCE}
  labels:
    app: "test"
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: piraeus-ssd
  resources:
    requests:
      storage: 10Gi
EOT
done

kubectl create -f- <<EOT
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-ssd
parameters:
  autoPlace: "2"
  storagePool: lvm
provisioner: linstor.csi.linbit.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOT

This should be enough.
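
To reset between attempts, the test objects can be deleted by the app=test label used above (a small cleanup sketch):

kubectl delete pod,pvc -l app=test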

kvaps commented 2 years ago

Another problem, this time with a different device:

linstor r l -r pvc-fbdd98f5-492a-4971-a72f-998bbe95d027

╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName                             ┊ Node           ┊ Port ┊ Usage  ┊ Conns ┊    State ┊ CreatedOn           ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ pvc-fbdd98f5-492a-4971-a72f-998bbe95d027 ┊ hf-kubevirt-01 ┊ 7013 ┊        ┊       ┊  Unknown ┊                     ┊
┊ pvc-fbdd98f5-492a-4971-a72f-998bbe95d027 ┊ hf-kubevirt-02 ┊ 7013 ┊ Unused ┊ Ok    ┊ UpToDate ┊                     ┊
┊ pvc-fbdd98f5-492a-4971-a72f-998bbe95d027 ┊ hf-kubevirt-03 ┊ 7013 ┊ Unused ┊       ┊  Unknown ┊ 2022-01-27 13:34:27 ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
13:29:58.763 [grizzly-http-server-1] INFO  LINSTOR/Controller - SYSTEM - New volume definition with number '0' of resource definition 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' created.
13:31:59.226 [grizzly-http-server-1] INFO  LINSTOR/Controller - SYSTEM - New volume definition with number '0' of resource definition 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' created.
13:32:25.466 [MainWorkerPool-1] ERROR LINSTOR/Controller - SYSTEM - Resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' on node 'hf-kubevirt-02' not found. [Report number 61F298C8-00000-000905]
13:32:49.266 [MainWorkerPool-1] WARN  LINSTOR/Controller - SYSTEM - RetryTask: Failed resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' of node 'hf-kubevirt-02' added for retry.
13:34:15.677 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001356]
13:34:26.769 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001399]
13:34:40.911 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001464]
13:34:43.725 [MainWorkerPool-1] WARN  LINSTOR/Controller - SYSTEM - RetryTask: Failed resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' of node 'hf-kubevirt-01' added for retry.
13:34:58.850 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001527]
13:35:30.208 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001592]
13:35:38.179 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001646]
13:35:41.782 [MainWorkerPool-1] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001712]
13:35:55.992 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-001830]
13:36:42.806 [TaskScheduleService] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-002025]
13:36:49.593 [MainWorkerPool-1] ERROR LINSTOR/Controller - SYSTEM - The resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' was already deployed on 3 nodes: 'hf-kubevirt-01', 'hf-kubevirt-02', 'hf-kubevirt-03'. The resource would have to be deleted from nodes to reach the placement count. [Report number 61F298C8-00000-002052]
13:37:19.282 [MainWorkerPool-1] ERROR LINSTOR/Controller - SYSTEM - (Node: 'hf-kubevirt-02') Generated resource file for resource 'pvc-fbdd98f5-492a-4971-a72f-998bbe95d027' is invalid. [Report number 61F298C8-00000-002118]
# diff /var/lib/linstor.d/pvc-fbdd98f5-492a-4971-a72f-998bbe95d027.res /var/lib/linstor.d/pvc-fbdd98f5-492a-4971-a72f-998bbe95d027.res_tmp
10c10,11
<         quorum off;
---
>         on-no-quorum io-error;
>         quorum majority;
16c17
<         shared-secret     "n8fEwi3XXZRtMKzhsoWn";
---
>         shared-secret     "9EgIXgOal126vPZmJCFT";
29c30
<             device      minor 1020;
---
>             device      minor 1026;
31a33,74
>     }
>
>     on hf-kubevirt-01
>     {
>         volume 0
>         {
>             disk        /dev/drbd/this/is/not/used;
>             disk
>             {
>                 discard-zeroes-if-aligned yes;
>             }
>             meta-disk   internal;
>             device      minor 1026;
>         }
>         node-id    1;
>     }
>
>     on hf-kubevirt-03
>     {
>         volume 0
>         {
>             disk        none;
>             disk
>             {
>                 discard-zeroes-if-aligned yes;
>             }
>             meta-disk   internal;
>             device      minor 1026;
>         }
>         node-id    2;
>     }
>
>     connection
>     {
>         host hf-kubevirt-02 address ipv4 192.168.242.38:7013;
>         host hf-kubevirt-01 address ipv4 192.168.242.35:7013;
>     }
>
>     connection
>     {
>         host hf-kubevirt-02 address ipv4 192.168.242.38:7013;
>         host hf-kubevirt-03 address ipv4 192.168.242.37:7013;

What am I doing wrong?
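
To see whether the configuration the kernel is actually running matches what LINSTOR generated on disk, something like this can help (a hedged sketch, assuming drbd-utils is installed on the satellite; the resource name and file path are the ones from the diff above):

RES=pvc-fbdd98f5-492a-4971-a72f-998bbe95d027
drbdsetup show "$RES"                      # configuration currently loaded in the kernel
cat /var/lib/linstor.d/"$RES".res          # resource file LINSTOR generated on disk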

kvaps commented 1 year ago

Just reproduced this bug on a clean cluster:

https://asciinema.org/a/RKgx4fV1BdVTkcAYJXZ7GU0AX?t=80

No errors on either the controller or the satellite, only in dmesg:

[Thu Dec 22 13:04:02 2022] drbd pvc-8e7c653f-7458-4d0a-a373-aec594215561: State change failed: Can not start OV/resync since it is already active
[Thu Dec 22 13:04:02 2022] drbd pvc-8e7c653f-7458-4d0a-a373-aec594215561/0 drbd1000 gpnvkc-w3: Failed: resync-susp( connection dependency -> no )
[Thu Dec 22 13:04:02 2022] drbd pvc-8e7c653f-7458-4d0a-a373-aec594215561/0 drbd1000 gpnvkc-w1: Failed: repl( SyncTarget -> WFBitMapT )
[Thu Dec 22 13:04:02 2022] drbd pvc-8e7c653f-7458-4d0a-a373-aec594215561/0 drbd1000 gpnvkc-s2: Failed: resync-susp( connection dependency -> no )

linstor 1.20.0; drbd 9.2.0

JoelColledge commented 1 year ago

The problems in this issue look like they have different underlying causes to me.

https://github.com/LINBIT/linstor-server/issues/268#issue-1115464857 (initial issue) - looks like LINSTOR is failing to promote. That's an old version now; it may already be fixed.

https://github.com/LINBIT/linstor-server/issues/268#issuecomment-1023226704 ("Another problem, this time with a different device") - looks like something at the LINSTOR level too. It may also already be fixed.

https://github.com/LINBIT/linstor-server/issues/268#issuecomment-1362824963 ("Just reproduced this bug on a clean cluster") - a stuck resync at the DRBD level. The part you quoted is a recoverable problem: a state change fails and is postponed ("...postponing this until current resync finished"). The reason your device is stuck in the Inconsistent state is that gpnvkc-w2 is SyncTarget towards gpnvkc-w1 and isn't making any progress. Not sure why. Try DRBD 9.1.12. If you can reproduce this reliably, we might be able to fix it.
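
If it can be reproduced again, capturing the resync state while it is stuck would be useful (a minimal sketch, assuming drbd-utils is installed on the affected node; resource name taken from the dmesg output above):

RES=pvc-8e7c653f-7458-4d0a-a373-aec594215561
drbdsetup status "$RES" --verbose --statistics   # replication state and out-of-sync counters per peer
dmesg | grep "$RES" | tail -n 50                 # most recent kernel messages for this resource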