Did you launch the bootstrapper with the argument to your configmap name?
I just created the bootstrapper.yaml. I'm assuming this is what passes the config name argument in https://github.com/kubernetes-incubator/external-storage/blob/master/local-volume/bootstrapper/deployment/kubernetes/bootstrapper.yaml#L23. Is that correct?
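For reference, the argument in question is the bootstrapper container's args in that file. A minimal sketch of what it looks like (the flag and image names here are assumptions for illustration, not quoted from the repo):

containers:
- name: bootstrapper
  # hypothetical image path/tag, for illustration only
  image: quay.io/external_storage/local-volume-provisioner-bootstrap:v1.0.1
  args:
  # hypothetical flag name: the configmap holding the discovery config
  - "--volume-config=local-volume-config"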
Oh, are you using the latest version of the provisioner? We recently made a change to convert the configmap from JSON to YAML. So if you're using the latest configmap yaml, the v1.0.x provisioner is going to fail parsing the configmap and fall back to the defaults.
So I would use the yaml examples from this tag: https://github.com/kubernetes-incubator/external-storage/tree/local-volume-provisioner-v1.0.1
Regardless, we should improve this by failing the provisioner bootup if we fail to parse the configmap, instead of falling back to defaults. Only fall back to defaults if the configmap doesn't exist.
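To make the format change concrete: a v1.0.x-era configmap carries JSON inside the data value, while current master expects YAML. A rough sketch of the two forms (the keys follow the local-storage example used elsewhere in this thread; the exact YAML schema on master is an assumption):

# v1.0.x style: JSON inside the configmap value
data:
  local-storage: |
    {
        "hostDir": "/mnt/sda1/disks"
    }

# master style: the same settings expressed as YAML
data:
  local-storage: |
    hostDir: /mnt/sda1/disks

A v1.0.x provisioner that reads the YAML form fails to parse it and silently falls back to the built-in defaults.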
/area local-volume
/assign
Ah OK, I'll try it with the JSON configmap. Do you think #463 is also related to the configmap data not being correct? Thanks @msau42
Yeah, I think they're related.
Hi @msau42, the bootstrapper is still not using the values in the configmap. I deployed with bootstrapper v1.0.0 and v1.0.1 to no avail.
Here is my configmap, named local-volume-config:
$ kubectl -n kube-system get configmaps local-volume-config -o yaml
apiVersion: v1
data:
  local-storage: |
    {
        "hostDir": "/mnt/sda1/disks"
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2017-11-16T19:22:35Z
  name: local-volume-config
  namespace: kube-system
  resourceVersion: "1963"
  selfLink: /api/v1/namespaces/kube-system/configmaps/local-volume-config
  uid: 80801d33-cb03-11e7-8585-080027adef5b
Logs from the bootstrapper pod:
$ kubectl -n kube-system logs local-volume-provisioner-bootstrap -f
I1116 19:26:31.358395 1 bootstrapper.go:356] Running bootstrap pod with config local-volume-config: {StorageClassConfig:map[] NodeLabelsForPV:[]}
I1116 19:26:31.367735 1 bootstrapper.go:368] Successfully created local volume provisioner
Logs from the provisioner:
$ kubectl -n kube-system logs local-volume-provisioner-9pftj -f
I1116 19:26:32.102249 1 main.go:58] Starting controller
I1116 19:26:32.102303 1 main.go:76] Could not get config map due to: resource name may not be empty, using default configmap
I1116 19:26:32.102309 1 main.go:79] Running provisioner with config map[local-storage:{HostDir:/mnt/disks MountDir:/local-disks}]
I1116 19:26:32.102318 1 controller.go:37] Initializing volume cache
I1116 19:26:32.102329 1 populator.go:85] Starting Informer controller
I1116 19:26:32.102660 1 populator.go:89] Waiting for Informer initial sync
I1116 19:26:33.103315 1 controller.go:58] Controller started
E1116 19:26:33.103352 1 discovery.go:91] Error reading directory: open /local-disks: no such file or directory
From the provisioner log, it seems the provisioner is falling back to the default HostDir (/mnt/disks) instead of the value from the configmap.
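Since the provisioner log says the configmap resource name is empty, another thing worth inspecting is what the bootstrapper actually generated for the provisioner DaemonSet, for example (assuming the DaemonSet is named local-volume-provisioner, as suggested by the pod name above):

$ kubectl -n kube-system get daemonset local-volume-provisioner -o yaml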
I am using this to create the bootstrapper: https://github.com/kubernetes-incubator/external-storage/blob/local-volume-provisioner-v1.0.1/local-volume/bootstrapper/deployment/kubernetes/bootstrapper.yaml
Thanks.
You should also use bootstrapper:v1.0.1
Thanks @msau42. Using v1.0.1 instead of latest for both the bootstrapper and the provisioner did it for me. Feel free to close this issue.
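For anyone else landing here, pinning both images rather than relying on latest looks roughly like this in bootstrapper.yaml (the image repositories and the flag for the provisioner image are assumptions for illustration):

# bootstrapper container image, pinned
image: quay.io/external_storage/local-volume-provisioner-bootstrap:v1.0.1
args:
# hypothetical flag telling the bootstrapper which provisioner image to deploy
- "--image=quay.io/external_storage/local-volume-provisioner:v1.0.1"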
Btw, any plans for when local-volume will be promoted to beta? Currently I have to enable it via feature gates and restart the kube-apiserver, which is a bit of a hassle.
I'll keep this issue open to come up with some better way to maintain the yamls in terms of versioning, and also deal with compatibility with K8s versions too.
cc @dhirajh
Regarding local storage going to beta, I'm hoping 1.10 if I can get the scheduler work merged in 1.9.
Also, regarding feature gates: can you not directly pass in the feature gate when you initially bring up the cluster?
To summarize, some action items from this issue: fail provisioner startup when the configmap can't be parsed (only fall back to defaults if it doesn't exist), and come up with a better way to version the example yamls and handle compatibility with K8s versions.
@msau42 I could pass the feature gate when bringing up the cluster. But I'm more concerned with the case where k8s is already set up and running, such as GKE, Azure Container Engine, CoreOS Tectonic, etc.
I am trying to integrate local-volume with Rook, and ideally I would like Rook users to not have to do anything to their running k8s cluster to deploy Rook. It should just work like any other app :)
At least for GKE, you use the "--kubernetes-alpha" flag to bring up alpha clusters. And because you don't have access to the master in GKE, the users can't enable specific alpha features after the fact.
Also for local volume, you need to enable the feature gate on all kubelets too, not just the master components.
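For completeness, enabling the gate at cluster bring-up looks roughly like the following; the PersistentLocalVolumes gate name is from the local volume alpha docs, and the minikube invocation is a sketch since that's the environment used here:

# pass to kube-apiserver, controller-manager, scheduler, and every kubelet
--feature-gates=PersistentLocalVolumes=true

# with minikube, at cluster creation time
$ minikube start --feature-gates=PersistentLocalVolumes=true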
@dhirajh: GitHub didn't allow me to assign the following users: dhirajh.
Note that only kubernetes-incubator members can be assigned.
/assign
/close
I have set up three directories on the node to be used as local storage, at /mnt/sda1/disks/vol<n>. When I launch the bootstrapper, it detects the directories and creates a PV for each, but it incorrectly assigns the local path /mnt/disks/vol<n>; for some reason it is not using /mnt/sda1/disks/vol<n>. This can be seen in the provisioner logs, and the local-volume configmap is shown above. I am using minikube.
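A quick way to see the mismatch described above is to print the local path each generated PV points at, e.g. with a jsonpath query (a sketch using standard kubectl; it assumes the PVs use the local volume source):

$ kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.local.path}{"\n"}{end}'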