kubernetes-retired / external-storage

[EOL] External storage plugins, provisioners, and helper libraries
Apache License 2.0
2.7k stars · 1.6k forks

[local-volume] Bootstrapper uses default path for PV local path #462

Closed kokhang closed 6 years ago

kokhang commented 6 years ago

I have set up three directories on the node to be used as local storage. The directories are /mnt/sda1/disks/vol&lt;n&gt;. When I launch the bootstrapper, it detects the directories and creates a PV for each, but it incorrectly assigns the local path as /mnt/disks/vol&lt;n&gt;. For some reason it is not using the path /mnt/sda1/disks/vol&lt;n&gt;.

This is seen in the provisioner logs:

I1115 23:40:23.180737       1 discovery.go:141] Found new volume at host path "/mnt/disks/vol1" with capacity 1539346432, creating Local PV "local-pv-d0592836"
I1115 23:40:23.185020       1 discovery.go:156] Created PV "local-pv-d0592836" for volume at "/mnt/disks/vol1"
I1115 23:40:23.185088       1 cache.go:55] Added pv "local-pv-d0592836" to cache

This is what the local-volume configmap looks like:

$ kubectl -n kube-system get configmaps local-volume-config -o yaml
apiVersion: v1
data:
  storageClassMap: |
    local-storage:
      hostDir: "/mnt/sda1/disks"
      mountDir: "/local-disks"
kind: ConfigMap
metadata:
  creationTimestamp: 2017-11-15T23:40:02Z
  name: local-volume-config
  namespace: kube-system
  resourceVersion: "495"
  selfLink: /api/v1/namespaces/kube-system/configmaps/local-volume-config
  uid: 4d067a97-ca5e-11e7-9c4a-080027f80648

I am using minikube.

msau42 commented 6 years ago

Did you launch the bootstrapper with the argument pointing at your configmap name?

kokhang commented 6 years ago

I just created the bootstrapper.yaml. I'm assuming this passes the configmap name argument in https://github.com/kubernetes-incubator/external-storage/blob/master/local-volume/bootstrapper/deployment/kubernetes/bootstrapper.yaml#L23. Is that correct?

msau42 commented 6 years ago

Oh, are you using the latest version of the provisioner? We recently made a change to convert the configmap from JSON to YAML. So if you're using the latest configmap yaml, the v1.0.x provisioner is going to fail to parse it and fall back to the defaults.
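For reference, the older JSON-style configmap the v1.0.x provisioner expects keys each storage class with a JSON blob, rather than a single YAML `storageClassMap`. A sketch, with field values taken from the examples in this thread:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
  namespace: kube-system
data:
  local-storage: |
    {
      "hostDir": "/mnt/sda1/disks",
      "mountDir": "/local-disks"
    }
```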

msau42 commented 6 years ago

So I would use the yaml examples from this tag: https://github.com/kubernetes-incubator/external-storage/tree/local-volume-provisioner-v1.0.1

msau42 commented 6 years ago

Regardless, we should improve this by failing provisioner bootup if we fail to parse the configmap, instead of falling back to defaults. The defaults should only be used when the configmap doesn't exist.
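A minimal sketch of that fail-fast behavior (the struct and function names here are illustrative, not the actual provisioner code): defaults apply only when the configmap data is absent, and a configmap that exists but fails to parse becomes a hard error.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StorageClassConfig mirrors the per-class settings read from the
// configmap; the field tags match the JSON format in this thread.
type StorageClassConfig struct {
	HostDir  string `json:"hostDir"`
	MountDir string `json:"mountDir"`
}

// defaultConfig matches the defaults seen in the provisioner logs above.
var defaultConfig = map[string]StorageClassConfig{
	"local-storage": {HostDir: "/mnt/disks", MountDir: "/local-disks"},
}

// loadConfig falls back to defaults only when the configmap is absent
// (empty data); a present-but-unparseable entry is surfaced as an error
// instead of being silently replaced by defaults.
func loadConfig(data map[string]string) (map[string]StorageClassConfig, error) {
	if len(data) == 0 {
		return defaultConfig, nil // configmap not found: defaults are fine
	}
	cfg := map[string]StorageClassConfig{}
	for class, raw := range data {
		var scc StorageClassConfig
		if err := json.Unmarshal([]byte(raw), &scc); err != nil {
			// Fail bootup rather than fall back to defaults.
			return nil, fmt.Errorf("parsing config for class %q: %v", class, err)
		}
		cfg[class] = scc
	}
	return cfg, nil
}

func main() {
	_, err := loadConfig(map[string]string{"local-storage": "not json"})
	fmt.Println(err != nil) // prints "true": the parse failure is surfaced
}
```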

/area local-volume
/assign

kokhang commented 6 years ago

Ah OK, I'll try it with the JSON configmap. Do you think #463 is also related to the configmap data not being correct? Thanks @msau42

msau42 commented 6 years ago

Yeah, I think they're related.

kokhang commented 6 years ago

Hi @msau42, the bootstrapper is still not using the values in the configmap. I deployed with bootstrapper v1.0.0 and v1.0.1 to no avail.

Here is my configmap named local-volume-config:

$ kubectl -n kube-system get configmaps local-volume-config -o yaml
apiVersion: v1
data:
  local-storage: |
    {
      "hostDir": "/mnt/sda1/disks"
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2017-11-16T19:22:35Z
  name: local-volume-config
  namespace: kube-system
  resourceVersion: "1963"
  selfLink: /api/v1/namespaces/kube-system/configmaps/local-volume-config
  uid: 80801d33-cb03-11e7-8585-080027adef5b

Logs from the bootstrapper pod:

$ kubectl -n kube-system logs local-volume-provisioner-bootstrap -f
I1116 19:26:31.358395       1 bootstrapper.go:356] Running bootstrap pod with config local-volume-config: {StorageClassConfig:map[] NodeLabelsForPV:[]}
I1116 19:26:31.367735       1 bootstrapper.go:368] Successfully created local volume provisioner

Logs from the provisioner:

$ kubectl -n kube-system logs local-volume-provisioner-9pftj -f
I1116 19:26:32.102249       1 main.go:58] Starting controller
I1116 19:26:32.102303       1 main.go:76] Could not get config map due to: resource name may not be empty, using default configmap
I1116 19:26:32.102309       1 main.go:79] Running provisioner with config map[local-storage:{HostDir:/mnt/disks MountDir:/local-disks}]
I1116 19:26:32.102318       1 controller.go:37] Initializing volume cache
I1116 19:26:32.102329       1 populator.go:85] Starting Informer controller
I1116 19:26:32.102660       1 populator.go:89] Waiting for Informer initial sync
I1116 19:26:33.103315       1 controller.go:58] Controller started
E1116 19:26:33.103352       1 discovery.go:91] Error reading directory: open /local-disks: no such file or directory

From the provisioner log, it seems that it is using the default HostDir.

I am using this to create the bootstrapper: https://github.com/kubernetes-incubator/external-storage/blob/local-volume-provisioner-v1.0.1/local-volume/bootstrapper/deployment/kubernetes/bootstrapper.yaml

Thanks.

msau42 commented 6 years ago

You should also use bootstrapper:v1.0.1

kokhang commented 6 years ago

Thanks @msau42. Using v1.0.1 instead of latest for both the bootstrapper and the provisioner did it for me. Feel free to close this issue.
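For anyone hitting the same mismatch: the fix is pinning both images to the same release tag instead of `latest`. Something like the following in the deployment specs (the image paths here are illustrative; use whatever paths the v1.0.1 tagged yamls actually reference):

```yaml
# in bootstrapper.yaml
image: "quay.io/external_storage/local-volume-provisioner-bootstrap:v1.0.1"
# and for the provisioner image the bootstrapper deploys
image: "quay.io/external_storage/local-volume-provisioner:v1.0.1"
```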

kokhang commented 6 years ago

Btw, any plans for when local-volume will be promoted to beta? Currently I have to enable it via feature gates and restart kube-apiserver, which is a bit of a hassle.

msau42 commented 6 years ago

I'll keep this issue open to come up with a better way to maintain the yamls in terms of versioning, and also to deal with compatibility across K8s versions.

cc @dhirajh

msau42 commented 6 years ago

Regarding local storage going to beta, I'm hoping 1.10 if I can get the scheduler work merged in 1.9.

msau42 commented 6 years ago

Also, regarding feature gates: can't you directly pass in the feature gate when you initially bring up the cluster?

msau42 commented 6 years ago

To summarize, some action items from this issue:

- Fail provisioner startup if the configmap exists but cannot be parsed, instead of falling back to defaults.
- Come up with a better way to version the deployment yamls and keep them compatible with the images and K8s versions they target.

kokhang commented 6 years ago

@msau42 I could pass the feature gate when bringing up the cluster. But I'm more concerned with the case where K8s is already set up and running, such as GKE, Azure Container Engine, CoreOS Tectonic, etc.

I am trying to integrate local-volume with Rook, and ideally I would like Rook users to not have to do anything to their running K8s cluster to deploy Rook. It should just work like any other app :)

msau42 commented 6 years ago

At least for GKE, you use the `--kubernetes-alpha` flag to bring up alpha clusters. And because you don't have access to the master in GKE, users can't enable specific alpha features after the fact.

msau42 commented 6 years ago

Also for local volume, you need to enable the feature gate on all kubelets too, not just the master components.
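So enabling the alpha feature means passing the gate to the control plane components and to every kubelet. A sketch, assuming the `PersistentLocalVolumes` gate name used by the alpha feature in this era (check the release notes for your K8s version):

```shell
# control plane
kube-apiserver --feature-gates=PersistentLocalVolumes=true ...
kube-scheduler --feature-gates=PersistentLocalVolumes=true ...
# every node
kubelet --feature-gates=PersistentLocalVolumes=true ...
# or, on minikube:
minikube start --feature-gates=PersistentLocalVolumes=true
```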

k8s-ci-robot commented 6 years ago

@dhirajh: GitHub didn't allow me to assign the following users: dhirajh.

Note that only kubernetes-incubator members can be assigned.

In response to [this](https://github.com/kubernetes-incubator/external-storage/issues/462#issuecomment-345085181):

> /assign

Instructions for interacting with me using PR comments are available [here](https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
dhirajh commented 6 years ago

/assign

msau42 commented 6 years ago

/close