Closed sandaymin123 closed 1 year ago
@sandaymin123 what was your workaround?
Edited the daemonset.
@sandaymin123 I see that you edited the daemonset to work around this issue. Do you have a generic solution for this problem that can be used for both regular k8s and custom k8s? And can you provide more details on the custom k8s deployment you are talking about and why the default /var/lib/kubelet path does not work for it?
@SandeepPissay The motivation for using a separate path is as follows:
We allow customers to order infrastructure with a larger secondary disk so that the container orchestration runtime can use it for pulling images and for ephemeral storage, and so that the disk that could house sensitive customer data can be encrypted. Customers running very large workloads (e.g. Spark or Data and A&AI) often run out of space when the root partition and the /var/lib/kubelet path are co-located on a single disk.
Various solutions, like RH OCS, let you set a property on a ConfigMap to change the path. The Azure File CSI driver also exposes that option in its Helm chart (example: https://github.com/kubernetes-sigs/azurefile-csi-driver/tree/master/charts). I don't want to be prescriptive about how the driver should take this input, as there are different ways of getting it. cc: @dims
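For illustration only, the kind of knob being asked for might look like the sketch below; the kubeletPath value and the template excerpt are hypothetical and not part of the vSphere CSI manifests (or of the Azure chart linked above), they just show how a single value could replace every hard-coded /var/lib/kubelet:

# values.yaml (hypothetical)
kubeletPath: /var/data/kubelet

# node daemonset template excerpt (hypothetical)
volumeMounts:
- mountPath: {{ .Values.kubeletPath | default "/var/lib/kubelet" }}
  mountPropagation: Bidirectional
  name: pods-mount-dir
volumes:
- hostPath:
    path: {{ .Values.kubeletPath | default "/var/lib/kubelet" }}
    type: Directory
  name: pods-mount-dir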
Thanks @sandaymin123 for the context. I'll add this feature request to our internal backlog. I think we can take care of it when we have an operator for the vSphere CSI.
@SandeepPissay Hi. I'm not sure of the time frame for the operator, but if it's still a ways out, would you be open to providing something simple like a patch file or kustomization?
@tejohnson123 I would prefer to have the operator provide this option, and to have it tested in the context of the operator. Feel free to customize our YAMLs for the time being if that works for you.
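Until the operator exists, one way to customize the upstream YAMLs without forking them is a kustomization. The sketch below is illustrative only: it assumes the node DaemonSet is named vsphere-csi-node, and the JSON-patch indices are placeholders that must be matched to the container/volume order in the manifest version actually deployed:

# kustomization.yaml -- a minimal sketch, not an officially supported layout
resources:
- vsphere-csi-driver.yaml            # the upstream manifest, saved locally
patches:
- target:
    kind: DaemonSet
    name: vsphere-csi-node
  patch: |-
    # placeholder indices: point these at the pods-mount-dir entries
    - op: replace
      path: /spec/template/spec/containers/1/volumeMounts/1/mountPath
      value: /var/data/kubelet
    - op: replace
      path: /spec/template/spec/volumes/2/hostPath/path
      value: /var/data/kubelet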
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
We would still like this option to be part of the operator installation when that is available. Keeping this issue open.
/remove-lifecycle rotten
@sandaymin123 If you don't mind, could you answer a few queries: which values in the daemonset did you modify? Also, was this a custom kubelet or the one from upstream? And is this case only hit when using a cloud provider other than the vSphere cloud provider, or would it apply regardless of cloud provider?
@jvrahav
We modified the pods-mount-dir volume mount:
volumeMounts:
- mountPath: /csi
  name: plugin-dir
- mountPath: /var/data/kubelet          # <-- changed from the default /var/lib/kubelet
  mountPropagation: Bidirectional
  name: pods-mount-dir
- mountPath: /dev
  name: device-dir
- mountPath: /sys/block
  name: blocks-dir
- mountPath: /sys/devices
  name: sys-devices-dir
and
volumes:
- hostPath:
    path: /var/lib/kubelet/plugins_registry
    type: Directory
  name: registration-dir
- hostPath:
    path: /var/lib/kubelet/plugins/csi.vsphere.vmware.com
    type: DirectoryOrCreate
  name: plugin-dir
- hostPath:
    path: /var/data/kubelet             # <-- changed from the default /var/lib/kubelet
    type: Directory
  name: pods-mount-dir
- hostPath:
    path: /dev
    type: ""
  name: device-dir
- hostPath:
    path: /sys/block
    type: Directory
  name: blocks-dir
- hostPath:
    path: /sys/devices
    type: Directory
  name: sys-devices-dir
We use upstream kubelet. This would be the case regardless of cloud provider.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
/lifecycle stale
/remove-lifecycle stale
This is really a pain. It seems that the VDO operator has this ability. A supported Helm chart for vsphere-csi-driver would be helpful.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Is this a BUG REPORT or FEATURE REQUEST?: FEATURE REQUEST
What happened: On custom K8s deployments on VMware, a kubelet path other than the default /var/lib/kubelet is used, namely /var/data/kubelet. As a result, the default /var/lib/kubelet value is passed to the CSI node driver for performing the mounts, and the mounts fail because the driver does not have access to /var/data...
What you expected to happen: By way of a ConfigMap or other overriding means (env var, etc.), the ability to inject the kubelet path directory.
Describe alternatives you've considered: editing the daemonset.
How to reproduce it (as minimally and precisely as possible): Configure a K8s cluster with a kubelet path other than the default /var/lib/kubelet.
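For example, on a kubeadm-managed node the kubelet root directory can be relocated with the kubelet's --root-dir flag; a minimal sketch follows (kubeadm is just one way to pass the flag, the original report concerns a custom deployment):

# JoinConfiguration excerpt -- a sketch assuming a kubeadm-managed worker node;
# other installers pass the kubelet's --root-dir flag through their own config
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    root-dir: /var/data/kubelet      # relocate the kubelet root off /var/lib/kubelet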
Anything else we need to know?:
Environment:
Kernel (e.g. uname -a):
cc: @dims @jsafrane and @Mike Fedosin