att-comdev / openstack-helm

PROJECT HAS MOVED TO OPENSTACK
https://github.com/openstack/openstack-helm

ceph volume creation failed with "rbd: create volume failed, err: exit status 22" #303

Closed: chinasubbareddym closed this 7 years ago

chinasubbareddym commented 7 years ago

Is this a bug report or feature request? (choose one): bug

Kubernetes Version (output of kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.5", GitCommit:"894ff23729bbc0055907dd3a496afb725396adda", GitTreeState:"clean", BuildDate:"2017-03-22T00:17:51Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Helm Client and Tiller Versions (output of helm version):

Client: &version.Version{SemVer:"v2.2.3", GitCommit:"1402a4d6ec9fb349e17b912e32fe259ca21181e3", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.2.3", GitCommit:"1402a4d6ec9fb349e17b912e32fe259ca21181e3", GitTreeState:"clean"}

Development or Deployment Environment?: Deployment

Release Tag or Master:

Expected Behavior: the Ceph-backed volume should be provisioned successfully.

What Actually Happened:

As part of the Ceph verification steps, I am trying to create a volume with /openstack-helm/tests/pvc-test.yaml in the ceph namespace, but the volume fails to provision with the error below:

Failed to provision volume with StorageClass "general": rbd: create volume failed, err: exit status 22

If I exec into one of the Ceph pods and create a volume directly, it works:

rbd create --size 1 test

Ceph status is healthy as well.
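
For reference, the failing claim looks roughly like this (a sketch of what tests/pvc-test.yaml requests, not its verbatim contents; the "general" class name comes from the error above, and on a 1.5-era cluster the storage class is requested via the beta annotation):

cat <<'EOF' | kubectl create -n ceph -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test
  annotations:
    # 1.5-era clusters select the class via this beta annotation
    volume.beta.kubernetes.io/storage-class: general
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF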

How to Reproduce the Issue (as minimally as possible):

After installing the Ceph Helm chart, try to create a test volume, e.g. with the commands below.
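
A sketch of the steps (the release name, chart path, and PVC name are assumptions):

helm install --name ceph --namespace ceph ./ceph
kubectl create -n ceph -f tests/pvc-test.yaml
kubectl describe pvc pvc-test -n ceph   # the provisioning error shows up under Events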

Any Additional Comments:

v1k0d3n commented 7 years ago

@chinasubbareddym we might need to troubleshoot some of this with you. I took Ceph from master and was able to bring it up. Can you find us on either IRC (#openstack-helm) or Kubernetes Slack (channel #openstack-helm)? We have plenty of people willing to help you through this.

Ananth-vr commented 7 years ago

@chinasubbareddym make sure the kube-controller-manager pod is pointing at kube-dns:

kubectl exec -n kube-system kube-controller-manager-k8controller -ti -- /bin/bash

Then, inside the container (replace <dns-ip> with your kube-dns service IP; note that resolv.conf requires the nameserver keyword):

echo "nameserver <dns-ip>" > /etc/resolv.conf
echo "search svc.cluster.local" >> /etc/resolv.conf
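
To verify the fix, resolving the Ceph monitor service from inside the controller manager should now succeed (a sketch; the ceph-mon service name and namespace are assumptions based on the chart defaults, and nslookup must exist in the image):

kubectl exec -n kube-system kube-controller-manager-k8controller -ti -- nslookup ceph-mon.ceph.svc.cluster.local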

chinasubbareddym commented 7 years ago

@krrypto thanks, it worked after correcting the DNS IP. This was probably broken because we built the controller-manager pod from the att repos to get the rbd package.

v1k0d3n commented 7 years ago

@chinasubbareddym are you referring to the att-comdev-provided kube-controller-manager on quay?

The controller manager + Ceph requires DNS to be configured correctly; kube-dns needs to be resolvable at both the pod level and the host level. We tried to capture this in our installation guide for bare metal.
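
A quick host-level check is to point the node's resolv.conf at the kube-dns service IP and resolve a cluster name (a sketch; 10.96.0.10 is a common default kube-dns ClusterIP and an assumption for this cluster):

# run on the kubernetes host itself
echo "nameserver 10.96.0.10" >> /etc/resolv.conf   # kube-dns ClusterIP (assumed)
echo "search svc.cluster.local" >> /etc/resolv.conf
nslookup kubernetes.default.svc.cluster.local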