sbogomolov opened 9 months ago
Hello,
Shared storage is not properly tested, because I do not have any of it myself. There were some bugs; I do not remember when we fixed them, but I have heard it works now.
Please try the latest release (or edge). We plan to refactor and add more tests for shared storage in future releases.
PS. The scheduler is responsible for migrating pods. If a migration happens, the PVC already has the right affinity, so the issue is probably in another component of Proxmox/Kubernetes. Try checking the logs.
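For example, roughly like this (the namespace and labels are assumptions based on the default release manifest, so adjust to your install):

```sh
# Controller logs: provisioning and attach/detach decisions
# (namespace/labels are assumptions from the default manifests -- adjust to your deployment)
kubectl -n csi-proxmox logs -l app=proxmox-csi-plugin-controller --all-containers --tail=100

# Node plugin logs on the worker where the pod was scheduled
kubectl -n csi-proxmox logs -l app=proxmox-csi-plugin-node --all-containers --tail=100
```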
When a pod migrates to a different node, we would need to detach the virtual disk from one VM and attach it to another. Are you saying that this logic is already there?
@sbogomolov I just tested this in my homelab and can verify that this CSI driver will detach the volume and re-attach it on the appropriate node when the scheduler migrates the pod to a different node. This is fairly straightforward to test by just cordoning the node and restarting the pod.
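For example, roughly like this (pod and node names are placeholders):

```sh
# Cordon the node currently running the pod so the scheduler has to pick another one
kubectl cordon k8s-worker-1

# Delete the pod; its Deployment/StatefulSet recreates it on a schedulable node,
# and the CSI driver should detach the volume and re-attach it there
kubectl delete pod my-app-0

# Watch where the new pod lands and whether the volume gets attached
kubectl get pod my-app-0 -o wide
kubectl get volumeattachments

# Uncordon when done
kubectl uncordon k8s-worker-1
```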
This is great news! I'll try to test this on my cluster.
Can confirm this works, at least with my iSCSI volume. Pods can be created on all workers spread across the Proxmox cluster.
@taylor-madeak @christiaangoossens Could you please provide more details on how you got volume migration to work?
I've tried both v0.6.1 and edge and can't get volume migration to work across zones/hypervisor host machines. This is the kustomization I'm deploying with:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/sergelogvinov/proxmox-csi-plugin/v0.6.1/docs/deploy/proxmox-csi-plugin-release.yml
  - proxmox-csi-secret.yaml
  - sc.yaml

images:
  - name: ghcr.io/sergelogvinov/proxmox-csi-node
    newTag: edge
  - name: ghcr.io/sergelogvinov/proxmox-csi-controller
    newTag: edge
```
together with the following StorageClass (sc.yaml):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: proxmox-csi
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: ext4
  storage: local-zfs
  cache: writethrough
  ssd: "true"
mountOptions:
  - noatime
provisioner: csi.proxmox.sinextra.dev
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
I've tried both a StatefulSet:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stateful
  namespace: pve-csi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stateful-pv
  template:
    metadata:
      labels:
        app: stateful-pv
    spec:
      containers:
        - name: alpine
          image: alpine
          command: [ "sleep", "1d" ]
          volumeMounts:
            - name: stateful
              mountPath: /mnt
  volumeClaimTemplates:
    - metadata:
        name: stateful
      spec:
        storageClassName: proxmox-csi
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi
  serviceName: stateful
```
and a Deployment with an ephemeral volume (this alone works, but the data is of course lost on each restart of the pod) and a PVC:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pv-deploy
  namespace: pve-csi
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
  selector:
    matchLabels:
      app: pv-deploy
  template:
    metadata:
      labels:
        app: pv-deploy
    spec:
      containers:
        - name: alpine
          image: alpine
          command: [ "sleep", "1d" ]
          volumeMounts:
            - name: deploy
              mountPath: /mnt
            - name: pvc
              mountPath: /tmp
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: pvc
        - name: deploy
          ephemeral:
            volumeClaimTemplate:
              spec:
                storageClassName: proxmox-csi
                accessModes: [ "ReadWriteOnce" ]
                resources:
                  requests:
                    storage: 1.5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
  namespace: pve-csi
spec:
  storageClassName: proxmox-csi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```
I've tried both changing nodeAffinity and cordoning the node the pods are running on before restarting them.
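(For reference, this is how the topology constraint on the PV can be inspected; the claim name matches the PVC manifest above:)

```sh
# Look up the PV bound to the claim and print the node affinity the CSI driver wrote on it
PV=$(kubectl -n pve-csi get pvc pvc -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.nodeAffinity}'; echo
```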
I'm running a three-node Proxmox cluster (homelab, with machines abel, cantor, and euclid) running Kubernetes in two separate VMs, each on their own physical node (k8s-ctrl-01 on abel and k8s-ctrl-02 on euclid).
The k8s nodes are manually labelled:

```sh
kubectl label node k8s-ctrl-01 topology.kubernetes.io/region=homelab
kubectl label node k8s-ctrl-01 topology.kubernetes.io/zone=abel
kubectl label node k8s-ctrl-02 topology.kubernetes.io/region=homelab
kubectl label node k8s-ctrl-02 topology.kubernetes.io/zone=euclid
```
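(To double-check what the scheduler and the CSI driver actually see, something like this works:)

```sh
# Show the region/zone labels on the Kubernetes nodes
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone

# Show the topology keys the CSI node plugin registered on each node
kubectl get csinodes -o yaml
```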
After migrating k8s-ctrl-02 to abel (so that both VMs are on the same physical host/zone) and relabelling it with

```sh
kubectl label node k8s-ctrl-02 topology.kubernetes.io/zone=abel --overwrite
```

the PVs migrate flawlessly from one node to the other and back again.
I see in the README.md that "The Pod cannot migrate to another zone (another Proxmox node)", but the above comments led me to believe that a pod is able to migrate to another zone/hypervisor host machine.
Am I doing something wrong, or is PV migration to a different zone not supported yet? If not, is it a planned feature?
I'm nevertheless impressed by this plugin and I'm going to make good use of it in my homelab!
Hi, local storage cannot migrate to another Proxmox node. It works only with shared storage like Ceph.
But you can migrate a PV/PVC to another node manually with pvecsictl: https://github.com/sergelogvinov/proxmox-csi-plugin/blob/main/docs/pvecsictl.md. The brew version has a bug, so try the edge version:

```sh
docker pull ghcr.io/sergelogvinov/pvecsictl:edge
```
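A rough sketch of running it from that image; the mount path and KUBECONFIG handling here are assumptions, and the real subcommands and flags are documented in the linked pvecsictl.md:

```sh
# Run the CLI from the container with a kubeconfig mounted in; start with --help
# to list the available subcommands (e.g. the manual PV/PVC migration described above)
docker run --rm \
  -v "$HOME/.kube/config:/kubeconfig:ro" \
  -e KUBECONFIG=/kubeconfig \
  ghcr.io/sergelogvinov/pvecsictl:edge --help
```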
@sergelogvinov Awesome! I'll have to try it.
Would it be possible to port this functionality to proxmox-csi-plugin?
It is not easy to implement this; there are many limitations on the Kubernetes side, which is why I created this CLI tool. We cannot communicate to the Kubernetes scheduler the cost of launching a pod on a non-local Proxmox node.
I'm trying to use NFS storage, at least bound to a certain node. However, it seems that the CSI driver disallows any kind of NFS storage?
Hi @veebkolm, I haven't tested many shared storage options. I know that CIFS works well, but Samba (CIFS) isn't fully reliable. NFS is designed as a network file system, not as a block device backend. If you can test storing raw/qcow disks on NFS storage and it works fine, we can remove this limitation.
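For example, on the Proxmox side an NFS image store for such a test could be added roughly like this (storage ID, server, and export path are placeholders):

```sh
# Add an NFS storage that is allowed to hold VM disk images (raw/qcow2); all values are placeholders
pvesm add nfs nas-nfs --server 192.168.1.10 --export /tank/pve --content images

# Verify Proxmox can mount and see it
pvesm status --storage nas-nfs
```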
thanks.
Hi @sergelogvinov,
We hold most of our VM disks on a separate machine, served as raw images over NFS; it works very well.
@veebkolm Can you test the edge version? I've removed nfs from the list.
Thank you for your contribution to the project!
@sergelogvinov I tested it out, everything seems to work, including rescheduling and mounting on another node :+1:
Thanks!
@veebkolm Did you also test scenarios that NFS is normally known to have issues with? I'm not an expert on how raw disks are written back to storage in Proxmox, but I had some issues with VM boot disks on NFS (TrueNAS). I didn't have any with iSCSI, although it's much more annoying to set up.
It might be good to test some heavy random read/write workloads and databases.
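Something like fio run inside the pod against the CSI-mounted path would cover the random read/write part; the parameters here are just a starting point:

```sh
# Mixed 70/30 random read/write with 4k blocks and direct I/O against the mounted volume
fio --name=randrw --directory=/mnt --rw=randrw --rwmixread=70 \
    --bs=4k --size=1G --ioengine=libaio --direct=1 \
    --numjobs=4 --runtime=60 --time_based --group_reporting
```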
Good point - I haven't tried any real, heavy workloads yet. I will try to post an update when we do.
As for background, the NAS we're using is a vanilla Ubuntu machine with a ZFS pool, so ZFS/NFS-related caveats apply just as with TrueNAS. We haven't had problems with our VM boot disks, but I suppose there haven't been many storage-intensive workloads.
I'm working on getting set up for the first time with the Proxmox Cluster API provider and, later, the Proxmox CSI plugin with Ceph storage. The CSI homepage on GitHub doesn't mention Ceph or that it works when moving pods around. After reading this issue it sounds like it is going to work. I'll update here once I've had a chance to test; if so, it might be time to update the main page.
We already support shared storage. However, a PV backed by shared storage is bound to the node it was created on. If the pod is killed and recreated on a different node, it cannot use that PV. Has anyone already looked into making this possible?