alibaba / open-local

cloud-native local storage management system for stateful workloads; low latency with simplicity
Apache License 2.0
466 stars · 81 forks

Unable to install open-local on Minikube #47

Closed alex-arica closed 3 years ago

alex-arica commented 3 years ago

Hello,

I followed the installation guide here

When I typed kubectl get po -nkube-system -l app=open-local the output was:

NAME                                              READY   STATUS      RESTARTS   AGE
open-local-agent-p2xdq                            3/3     Running     0          13m
open-local-csi-provisioner-59cd8644ff-n52xc       1/1     Running     0          13m
open-local-csi-resizer-554f54b5b4-xkw97           1/1     Running     0          13m
open-local-csi-snapshotter-64dff4b689-9g9wl       1/1     Running     0          13m
open-local-init-job--1-f9vzz                      0/1     Completed   0          13m
open-local-init-job--1-j7j8b                      0/1     Completed   0          13m
open-local-init-job--1-lmvqd                      0/1     Completed   0          13m
open-local-scheduler-extender-5dc8d8bb49-n44pn    1/1     Running     0          13m
open-local-snapshot-controller-846c8f6578-2bfhx   1/1     Running     0          13m

However, when I typed kubectl get nodelocalstorage, I got this output:

NAME       STATE   PHASE   AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
minikube                                                       

According to the installation guide, the STATE column should display DiskReady.

And when I typed kubectl get nls -o yaml, it output:

apiVersion: v1
items:
- apiVersion: csi.aliyun.com/v1alpha1
  kind: NodeLocalStorage
  metadata:
    creationTimestamp: "2021-09-20T13:37:09Z"
    generation: 1
    name: minikube
    resourceVersion: "615"
    uid: 6f193362-e2b2-4053-a6e6-81de35c96eaf
  spec:
    listConfig:
      devices: {}
      mountPoints:
        include:
        - /mnt/open-local/disk-[0-9]+
      vgs:
        include:
        - open-local-pool-[0-9]+
    nodeName: minikube
    resourceToBeInited:
      vgs:
      - devices:
        - /dev/sdb
        name: open-local-pool-0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

I am running Minikube on my desktop computer, which has an SSD drive.

Thank you for your help.

TheBeatles1994 commented 3 years ago

Is the SSD mounted?

Is the SSD already formatted, e.g. with ext4?

Run minikube ssh to log into the minikube environment, and type:

alex-arica commented 3 years ago

Yes, Minikube is installed on the SSD. I have Debian installed on the drive and it is formatted with ext4.

On the host Debian (not Minikube ssh), the command df -Th outputs:

Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs   16G     0   16G   0% /dev
tmpfs          tmpfs     3.2G  1.8M  3.2G   1% /run
/dev/nvme0n1p5 ext4      114G   46G   64G  42% /
tmpfs          tmpfs      16G   70M   16G   1% /dev/shm
tmpfs          tmpfs     5.0M  8.0K  5.0M   1% /run/lock
/dev/nvme0n1p1 vfat      256M   31M  226M  12% /boot/efi
overlay        overlay   114G   46G   64G  42% /var/lib/docker/overlay2/a5830fa52d606ae0be105e59846226b8f7b80f67fe7de0b2da2d72c0d63ac9e9/merged
tmpfs          tmpfs     3.2G  844K  3.2G   1% /run/user/1000
overlay        overlay   114G   46G   64G  42% /var/lib/docker/overlay2/7a8d0332bd5a8819311816d8e4920190238c24875ac0e2d635d2ac2f955c165e/merged

With Minikube ssh, the command lsblk outputs:

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 232.9G  0 disk 
|-nvme0n1p1 259:1    0   260M  0 part 
|-nvme0n1p2 259:2    0   128M  0 part 
|-nvme0n1p3 259:3    0   500M  0 part 
|-nvme0n1p4 259:4    0 114.9G  0 part 
|-nvme0n1p5 259:5    0 116.2G  0 part /tmp/hostpath-provisioner
`-nvme0n1p6 259:6    0   977M  0 part [SWAP]

With Minikube ssh, the command blkid does not output anything.

With Minikube ssh, the command df -Th outputs:

Filesystem     Type     Size  Used Avail Use% Mounted on
overlay        overlay  114G   46G   64G  42% /
tmpfs          tmpfs     64M     0   64M   0% /dev
shm            tmpfs     64M     0   64M   0% /dev/shm
/dev/nvme0n1p5 ext4     114G   46G   64G  42% /var
tmpfs          tmpfs     16G  9.2M   16G   1% /run
tmpfs          tmpfs     16G  8.0K   16G   1% /tmp
tmpfs          tmpfs    5.0M     0  5.0M   0% /run/lock
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/2ef006ab347a51392285dc69e1271a074c6d7609726b1541eae122fc14e51b1f/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/d6242beb42b9474ef84113342f23637328368f19725608a780aa1a53a0f33946/merged
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/5c37879474399e83afac887210f4d73bc139feff5d64f3ba3dbe7423de64acc6/mounts/shm
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/d18554183ab58e4c6c017015bc379f66f7e6e90d6e4d7ca99472235c2a9bc59a/mounts/shm
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/2ab3c4a9b32be74c4311f8d0190cfbdaafcf818585f404c5b854082775dc90e4/merged
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/da233120e374d342d7093961eea9d67cd757c04769df56ee9b9171484ec574cb/mounts/shm
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/c645459d5adc87e52470beb2fcf129f88fd64b4a557b44684349774b2530c854/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/82a3486e521be6494ce61cd9562c63d45a7410a8d6a834ccd7d938c75ff026fc/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/738d3cc97bf9aa9517f5b849d18fbb2c30101d20e9e9db77248751f818286777/merged
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/1c539aeb33039e336a181797a954eafd901d10ccb918434a695ed9bfd884a28a/mounts/shm
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/bc2480b0d8f5d382e56c47b940af7f1540308fb2af2e92789829c275b7f98d34/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/39cf7309fc17d7e49e0d360684c5deb9b83c0101c3f903d3153eeb0dfacbe32f/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/485090b4fb2cf4b50bc5e31f38735eb30d9ed0dd5045f2c405b569ff644319ef/merged
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/85cbf3b3f4219d51977515626adad7bdb6326abbf915fb12d1a24dd191d9de3b/mounts/shm
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/c7f8263eece5f3ce14ac42ebf896c677ab134597a51828f58f27d1bf02217efd/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/98d8109a0392c29726fab627d0277eea2bd4c5152b7d679f47bb0db05c6650a8/merged
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/e6554bedc94a4bb91dac1c7859001004ebffda4cbcc1e70dc3babdebb2237a6e/mounts/shm
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/c33e1c9c2cd14bdd873fd1d9f0fd34f1782f20676f542067e07a1d52761de9f8/merged
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/f87faa6b022060a0f9e7306946d9f6628eb3af737fa6fc3139b6bd50fb179880/merged
shm            tmpfs     64M     0   64M   0% /var/lib/docker/containers/2f62051bc65bed894a28d9419f635718802aed2791f8900bd56ca94f5abdaf71/mounts/shm
overlay        overlay  114G   46G   64G  42% /var/lib/docker/overlay2/8c9fe55880a1e1b00f755f75d2c019d80474501dc593c65dc263d0243bb89f14/merged
TheBeatles1994 commented 3 years ago
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 232.9G  0 disk 
|-nvme0n1p1 259:1    0   260M  0 part 
|-nvme0n1p2 259:2    0   128M  0 part 
|-nvme0n1p3 259:3    0   500M  0 part 
|-nvme0n1p4 259:4    0 114.9G  0 part 
|-nvme0n1p5 259:5    0 116.2G  0 part /tmp/hostpath-provisioner
`-nvme0n1p6 259:6    0   977M  0 part [SWAP]

Open-Local needs a partition to create a VG; you need to edit the spec of the nodelocalstorage resource.

...
    resourceToBeInited:
      vgs:
      - devices:
        - /dev/nvme0n1p4  # choose a partition that you wish to create VG with, not /dev/sdb
        name: open-local-pool-0
...
alex-arica commented 3 years ago

Thank you.

What you are suggesting is not in the installation guide here

Is there a reason for that?

I made the following changes as you suggested:

alex@debian:~/source/open-local$ kubectl get nodelocalstorage -o yaml
apiVersion: v1
items:
- apiVersion: csi.aliyun.com/v1alpha1
  kind: NodeLocalStorage
  metadata:
    creationTimestamp: "2021-09-21T09:07:04Z"
    generation: 2
    name: minikube
    resourceVersion: "4139"
    uid: e0432fc8-63e2-443e-b724-8b6b10c36354
  spec:
    listConfig:
      devices: {}
      mountPoints:
        include:
        - /mnt/open-local/disk-[0-9]+
      vgs:
        include:
        - open-local-pool-[0-9]+
    nodeName: minikube
    resourceToBeInited:
      vgs:
      - devices:
        - /dev/nvme0n1p4
        name: open-local-pool-0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

But the output is the same for:

kubectl get nodelocalstorage 
NAME       STATE   PHASE   AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
minikube                                                       

And there are no events providing details when running kubectl describe nodelocalstorage.

TheBeatles1994 commented 3 years ago

The user documentation is not detailed enough yet; I will add more details later.

Try outputting the logs of the open-local agent:

kubectl logs -nkube-system [name of open-local agent pod] -c agent
alex-arica commented 3 years ago

Thank you.

kubectl logs -nkube-system open-local-agent-xwlqt -c agent
time="2021-09-21T17:49:13+08:00" level=info msg="Version: v0.2.2, Commit: 314b005"
W0921 17:49:13.662051    4613 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2021-09-21T17:49:13+08:00" level=info msg="starting open-local agent"
time="2021-09-21T17:49:13+08:00" level=info msg="Started open-local agent"
time="2021-09-21T17:49:13+08:00" level=error msg="ListVolumeGroupNames error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:13+08:00" level=error msg="[getAllLocalSnapshotLV]List volume group names error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:13+08:00" level=error msg="[ExpandSnapshotLVIfNeeded]get open-local snapshot lv failed: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:13+08:00" level=error msg="get node local storage minikube failed: nodelocalstorages.csi.aliyun.com \"minikube\" not found"
time="2021-09-21T17:49:13+08:00" level=info msg="creating node local storage minikube"
time="2021-09-21T17:49:43+08:00" level=error msg="ListVolumeGroupNames error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:43+08:00" level=error msg="[getAllLocalSnapshotLV]List volume group names error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:43+08:00" level=error msg="[ExpandSnapshotLVIfNeeded]get open-local snapshot lv failed: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:43+08:00" level=error msg="LookupVolumeGroup error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:43+08:00" level=error msg="ListVolumeGroupNames error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:49:43+08:00" level=error msg="discover VG error: List volume group error: nsenter: failed to execute vgs: No such file or directory"
# AFTER I EDITED nodelocalstorage:
time="2021-09-21T17:50:13+08:00" level=error msg="ListVolumeGroupNames error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:50:13+08:00" level=error msg="[getAllLocalSnapshotLV]List volume group names error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:50:13+08:00" level=error msg="[ExpandSnapshotLVIfNeeded]get open-local snapshot lv failed: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:50:13+08:00" level=error msg="LookupVolumeGroup error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:50:13+08:00" level=error msg="ListVolumeGroupNames error: nsenter: failed to execute vgs: No such file or directory"
time="2021-09-21T17:50:13+08:00" level=error msg="discover VG error: List volume group error: nsenter: failed to execute vgs: No such file or directory"

Above, the log entries from "17:50:13" onward were produced after I edited nodelocalstorage to add:

  resourceToBeInited:
    vgs:
    - devices:
      - /dev/nvme0n1p4
      name: open-local-pool-0
TheBeatles1994 commented 3 years ago

Did you install lvm2 in the minikube VM?

alex-arica commented 3 years ago

With Minikube ssh, I ran:

sudo apt-get update
sudo apt-get install lvm2

And now when I run kubectl get nls, the output is:

NAME       STATE       PHASE     AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
minikube   DiskReady   Running   7s              7s                  

Nice one! Thank you for your help. I am going to test it further; this is great progress thanks to you.

TheBeatles1994 commented 3 years ago

And thanks for your issue! I will add more details to the user guide docs and provide more events to report errors.

alex-arica commented 3 years ago

Thank you!

Now that lvm2 is installed, when I check the logs of the open-local agent, the output is:

time="2021-09-21T18:19:42+08:00" level=warning msg="WARNING: ntfs signature detected on /dev/nvme0n1p4 at offset 3. Wipe it? [y/n]: [n]"
time="2021-09-21T18:19:42+08:00" level=error msg="CreatePhysicalVolume error: Aborted wiping of ntfs.\n1 existing signature left on the device."
time="2021-09-21T18:19:42+08:00" level=error msg="create physical volume /dev/nvme0n1p4 error: lvm: CreatePhysicalVolume: Aborted wiping of ntfs.\n1 existing signature left on the device."
time="2021-09-21T18:19:42+08:00" level=error msg="create vg open-local-pool-0 failed: lvm: CreatePhysicalVolume: Aborted wiping of ntfs.\n1 existing signature left on the device."

On my desktop computer I also have an NTFS partition. I suspect '/dev/nvme0n1p4' is NTFS.
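As an aside, the "ntfs signature detected on /dev/nvme0n1p4 at offset 3" message in the log above matches the NTFS boot-sector layout: the OEM ID string "NTFS" sits at byte offset 3. A minimal sketch of roughly that check, using a throwaway image file rather than the real /dev/nvme0n1p4:

```shell
# Fabricate a tiny "partition" image carrying an NTFS-style boot sector:
# jump instruction (EB 52 90) followed by the OEM ID "NTFS    " at offset 3.
IMG=$(mktemp)
printf '\353\122\220NTFS    ' > "$IMG"
# Read 4 bytes starting at offset 3, the way a signature scanner would:
dd if="$IMG" bs=1 skip=3 count=4 2>/dev/null   # prints: NTFS
rm -f "$IMG"
```

This is why lvm refuses to run pvcreate on the partition without confirmation: the signature indicates an existing filesystem that wiping would destroy.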

On my host Debian (not Minikube ssh), the partitions are as follows:

lsblk -f
NAME        FSTYPE FSVER LABEL    UUID                                 FSAVAIL FSUSE% MOUNTPOINT
nvme0n1                                                                               
├─nvme0n1p1 vfat   FAT32 SYSTEM   82EF-CBE2                             225.9M    12% /boot/efi
├─nvme0n1p2                                                                           
├─nvme0n1p3 ntfs         Recovery EC52F02F52EFFFDE                                    
├─nvme0n1p4 ntfs         Windows  D4ECF4F5ECF4D32A                                    
├─nvme0n1p5 ext4   1.0            84b070e0-b7da-4b28-b494-36041b0f0245   62.3G    40% /
└─nvme0n1p6 swap   1              a9169257-e93f-40c5-8125-5cba8d6b8bfc                [SWAP]
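One quick way to see which partitions already carry a filesystem signature (and would therefore trip Open-Local's wipe check) is to filter this output. The sketch below replays the capture above as a here-doc rather than querying a live disk; on a real system you could pipe `lsblk -rno NAME,FSTYPE` into the same awk filter:

```shell
# Print "/dev/<name> has filesystem <type>" for each partition that reports
# a filesystem type; partitions with an empty FSTYPE field are skipped.
awk 'NF >= 2 { printf "/dev/%s has filesystem %s\n", $1, $2 }' <<'EOF'
nvme0n1p1 vfat
nvme0n1p2
nvme0n1p3 ntfs
nvme0n1p4 ntfs
nvme0n1p5 ext4
nvme0n1p6 swap
EOF
```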

And with Minikube ssh:

docker@minikube:~$ lsblk -f
NAME        FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
nvme0n1                                      
|-nvme0n1p1                                  
|-nvme0n1p2                                  
|-nvme0n1p3                                  
|-nvme0n1p4                                  
|-nvme0n1p5                     62.3G    40% /tmp/hostpath-provisioner
`-nvme0n1p6                                  [SWAP]

From the above, I know that '/dev/nvme0n1p5' is ext4. Should I use '/dev/nvme0n1p5' when I edit nodelocalstorage, as follows?

  resourceToBeInited:
    vgs:
    - devices:
      - /dev/nvme0n1p5
      name: open-local-pool-0
alex-arica commented 3 years ago

Sorry, I closed it by mistake.

TheBeatles1994 commented 3 years ago

Your partition has a file system (ntfs).

Set the env "Force_Create_VG" to "true" in the container "agent" of the open-local-agent DaemonSet.
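For reference, that edit would look roughly like the fragment below in the DaemonSet pod template. The env name Force_Create_VG is taken from the reply above; the surrounding field names are standard Kubernetes container spec, so treat this as a sketch rather than the project's exact manifest:

```yaml
# Hypothetical excerpt of the open-local-agent DaemonSet pod template
spec:
  containers:
  - name: agent
    env:
    - name: Force_Create_VG
      value: "true"
```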

alex-arica commented 3 years ago

You mentioned earlier that Open-Local needs a partition to create a VG.

Since the NTFS partition '/dev/nvme0n1p4' already has data on it, will it be formatted?

If so, I may lose data used by my Windows partition.

TheBeatles1994 commented 3 years ago

Yes, the partition will be formatted and data will be lost.

So you need to use a partition which has no user data.

alex-arica commented 3 years ago

Ok thank you.

TheBeatles1994 commented 3 years ago

Looking forward to your feedback!

alex-arica commented 3 years ago

I created a new partition, nvme0n1p7, on my SSD drive, as follows:

docker@minikube:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 232.9G  0 disk 
|-nvme0n1p1 259:1    0   260M  0 part 
|-nvme0n1p2 259:2    0   128M  0 part 
|-nvme0n1p3 259:3    0   500M  0 part 
|-nvme0n1p4 259:4    0   113G  0 part 
|-nvme0n1p5 259:5    0 116.2G  0 part /tmp/hostpath-provisioner
|-nvme0n1p6 259:6    0   977M  0 part [SWAP]
`-nvme0n1p7 259:7    0   1.9G  0 part 

I modified the NLS by editing helm/templates/nlsc.yaml, as follows:

    resourceToBeInited:
      vgs:
      - devices:
        - /dev/nvme0n1p7
...

I added a new env variable by editing helm/templates/agent.yaml, as follows:

- name: Force_Create_VG
  value: "true"

I can create the pods successfully. The NLS outputs:

kubectl get nls 
NAME       STATE       PHASE     AGENTUPDATEAT   SCHEDULERUPDATEAT   SCHEDULERUPDATESTATUS
minikube   DiskReady   Running   1s              22m                 accepted

The agent's logs:

kubectl logs open-local-agent-ct566 -nkube-system  -c agent
time="2021-09-21T21:32:18+08:00" level=info msg="Version: v0.2.2, Commit: 314b005"
W0921 21:32:18.387656    6349 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2021-09-21T21:32:18+08:00" level=info msg="starting open-local agent"
time="2021-09-21T21:32:18+08:00" level=info msg="Started open-local agent"
time="2021-09-21T21:32:18+08:00" level=error msg="get node local storage minikube failed: nodelocalstorages.csi.aliyun.com \"minikube\" not found"
time="2021-09-21T21:32:18+08:00" level=info msg="creating node local storage minikube"

However, when I run kubectl apply -f ./example/lvm/sts-lvm.yaml, the PV is not created. The PVC has the following events (see Warning):

kubectl describe pvc
Name:          html-nginx-lvm-0
Namespace:     default
StorageClass:  open-local-lvm
Status:        Pending
Volume:        
Labels:        app=nginx-lvm
Annotations:   volume.beta.kubernetes.io/storage-provisioner: local.csi.aliyun.com
               volume.kubernetes.io/selected-node: minikube
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       nginx-lvm-0
Events:
  Type     Reason                Age               From                                                                Message
  ----     ------                ----              ----                                                                -------
  Normal   WaitForFirstConsumer  37s               persistentvolume-controller                                         waiting for first consumer to be created before binding
  Normal   Provisioning          5s (x6 over 37s)  local.csi.aliyun.com_minikube_68d22d59-49f0-4b8f-97d9-842a2560fae5  External provisioner is provisioning volume for claim "default/html-nginx-lvm-0"
  Warning  ProvisioningFailed    5s (x6 over 37s)  local.csi.aliyun.com_minikube_68d22d59-49f0-4b8f-97d9-842a2560fae5  failed to provision volume with StorageClass "open-local-lvm": rpc error: code = Unknown desc = Create Lvm with error rpc error: code = Internal desc = failed to create lv: Failed to run cmd: /bin/nsenter --mount=/proc/1/ns/mnt --ipc=/proc/1/ns/ipc --net=/proc/1/ns/net --uts=/proc/1/ns/uts  lvcreate -n local-d7e9c7ce-ba73-4bd9-86a7-9fe833102de0 -L 1073741824b -W y -y open-local-pool-0, with out:   /dev/open-local-pool-0/local-d7e9c7ce-ba73-4bd9-86a7-9fe833102de0: not found: device not cleared
  Aborting. Failed to wipe start of new LV.
, with error: exit status 5
  Normal  ExternalProvisioning  1s (x4 over 37s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "local.csi.aliyun.com" or manually created by system administrator

Moreover, the following command from the documentation does not work:

kubectl patch nls minikube --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/resourceToBeInited/vgs/0\", \"value\": {\"devices\": [\"/dev/vdb\"], \"name\": \"open-local-pool-0\" } }]'

It outputs:

The request is invalid
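A likely cause, for what it's worth: inside a single-quoted shell string, the backslashes before the double quotes are passed through literally, so the -p payload is no longer valid JSON and the API server rejects it. A hedged sketch of a corrected invocation, validating the payload locally before sending it (the kubectl line is commented out since it needs a live cluster):

```shell
# Single quotes already protect the double quotes from the shell,
# so no backslash escaping is needed inside the JSON patch payload.
PATCH='[{"op": "add", "path": "/spec/resourceToBeInited/vgs/0", "value": {"devices": ["/dev/vdb"], "name": "open-local-pool-0"}}]'

# Sanity-check the payload parses as JSON before handing it to kubectl:
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"

# Then, against the cluster:
# kubectl patch nls minikube --type='json' -p="$PATCH"
```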
TheBeatles1994 commented 3 years ago

I found that this is a known problem: Debian is 'known' for its own rewrite of the lvm2 udev rules, and those rules are not acked by upstream and are not correct.

Do you use Debian in your production environment? There is no such problem on CentOS.

TheBeatles1994 commented 3 years ago

I see that using lvcreate -Zn ... can create the logical volume successfully, but it is not recommended, because trying to mount an unzeroed logical volume can cause the system to hang.

alex-arica commented 3 years ago

Thank you for highlighting the LVM issue with Debian. I am using Debian in dev and production environments. Unfortunately, it would be difficult for me to move to another distro at this stage.

TheBeatles1994 commented 3 years ago

So this is an LVM2 issue in Debian now.

If the command lvcreate -n test-lv-in-debian -L 1G -W y -y open-local-pool-0 returns success, that means you can use Open-Local.

TheBeatles1994 commented 3 years ago

If there are no other questions, I will close the issue.

alex-arica commented 3 years ago

Thank you for your help so far. No more questions, but as it stands, it does not work locally for me. It is worth testing on Debian on your side.

TheBeatles1994 commented 3 years ago

What version of Debian are you using? I will try to reproduce it.

alex-arica commented 3 years ago

Thank you. I am using Debian 11 Bullseye.

TheBeatles1994 commented 3 years ago

I can create a logical volume successfully on my Debian system.

Kernel version(uname -a): Linux *** 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 (2021-07-18) x86_64 GNU/Linux

LVM version(lvm version): LVM version: 2.03.02(2) (2018-12-18) Library version: 1.02.155 (2018-12-18) Driver version: 4.39.0

My commands are as follows:

You can try to execute those commands manually to see if there are still problems.

alex-arica commented 3 years ago

Thank you. And does open-local work on your Debian?

TheBeatles1994 commented 3 years ago

Yes, I ran an open-local container:

docker run --rm -it --entrypoint="" -v /sys:/sys -v /dev:/dev --privileged --pid=host thebeatles1994/open-local:v0.2.2 sh 

sh-4.2# /bin/nsenter --mount=/proc/1/ns/mnt --ipc=/proc/1/ns/ipc --net=/proc/1/ns/net --uts=/proc/1/ns/uts  lvcreate -n test-lv-in-debian-new -L 1G -W y -y open-local-pool-0
  Logical volume "test-lv-in-debian-new" created.
alex-arica commented 3 years ago

Thank you.

As I mentioned in my message two days ago, with Minikube I couldn't get open-local to work when I followed the installation guide.

Have you tried the same approach with Minikube on Debian?

If it worked for you, I will be interested in knowing what you did specifically that I haven't done.

TheBeatles1994 commented 3 years ago

Now I have installed Open-Local in minikube on my Debian OS, and it worked when I created a Pod using an Open-Local PV.

I followed the installation guide too, and I edited the spec of nls minikube (changed /dev/sdb to /dev/vdb).

alex-arica commented 3 years ago

Thank you for trying with Minikube and Debian. I will try it again.

TheBeatles1994 commented 3 years ago

image

alex-arica commented 3 years ago

It works for me up to there as well.

However, when I run kubectl apply -f example/lvm/sts-lvm-snap.yaml the PV is not created. I will try again and hopefully it will work this time.

TheBeatles1994 commented 3 years ago

image

TheBeatles1994 commented 3 years ago

You must first create the original PV by running kubectl apply -f example/lvm/sts-lvm.yaml, then create a VolumeSnapshot by running kubectl apply -f example/lvm/snapshot.yaml, and finally run kubectl apply -f example/lvm/sts-lvm-snap.yaml to create a Pod that uses a new PV whose data source is the original PV.

TheBeatles1994 commented 3 years ago

When you want to expand a PV, you must first delete all snapshots of that PV. Otherwise you will get this warning:

  Warning  VolumeResizeFailed     8s (x5 over 12s)   external-resizer local.csi.aliyun.com                                              resize volume "local-faa3ee32-0304-4906-92e9-7f642e87fca5" by resizer "local.csi.aliyun.com" failed: rpc error: code = Unknown desc = Create Lvm with error rpc error: code = Internal desc = failed to expand lv: Failed to run cmd: /bin/nsenter --mount=/proc/1/ns/mnt --ipc=/proc/1/ns/ipc --net=/proc/1/ns/net --uts=/proc/1/ns/uts  lvextend -L21474836480B open-local-pool-0/local-faa3ee32-0304-4906-92e9-7f642e87fca5, with out:   Snapshot origin volumes can be resized only while inactive: try lvchange -an.
TheBeatles1994 commented 3 years ago

> It works for me up to there as well.
>
> However, when I run kubectl apply -f example/lvm/sts-lvm-snap.yaml the PV is not created. I will try again and hopefully it will work this time.

What did you do to make it work?

alex-arica commented 3 years ago

I have not made it work yet, because the PV is not created for some reason. I will try again tomorrow and let you know (UK time).

TheBeatles1994 commented 3 years ago

I am jealous of you, bro, because Chinese programmers may work until after 9 p.m. (T⌓T)

alex-arica commented 3 years ago

You guys work too hard!

Here in the UK we work between 9am and 5pm.

Thank you for your help so far. I appreciate it!

TheBeatles1994 commented 3 years ago

I have improved the user guide documentation in #48; suggestions are welcome.

TheBeatles1994 commented 3 years ago

Support reporting error events in nls when creating a VG fails: #49