Hi Ganesh
This is strange. I've just tested this again by blowing away my minikube environment and re-following the project steps and it all deploys and runs fine. When I look at the PVs and PVCs, I see:
```bash
$ kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                                STORAGECLASS   REASON    AGE
pv/pvc-1d7c166d-e01f-11e7-8659-080027d4a6e5   1Gi        RWO            Delete           Bound     default/mongodb-persistent-storage-claim-mongod-1   standard                 20m
pv/pvc-2082ec70-e01f-11e7-8659-080027d4a6e5   1Gi        RWO            Delete           Bound     default/mongodb-persistent-storage-claim-mongod-2   standard                 20m
pv/pvc-fb96d32c-e01e-11e7-8659-080027d4a6e5   1Gi        RWO            Delete           Bound     default/mongodb-persistent-storage-claim-mongod-0   standard                 21m

NAME                                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/mongodb-persistent-storage-claim-mongod-0   Bound     pvc-fb96d32c-e01e-11e7-8659-080027d4a6e5   1Gi        RWO            standard       21m
pvc/mongodb-persistent-storage-claim-mongod-1   Bound     pvc-1d7c166d-e01f-11e7-8659-080027d4a6e5   1Gi        RWO            standard       20m
pvc/mongodb-persistent-storage-claim-mongod-2   Bound     pvc-2082ec70-e01f-11e7-8659-080027d4a6e5   1Gi        RWO            standard       20m
```
Here's info about my environment, for you to compare, if it helps:
```bash
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-07-26T00:12:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ minikube version
minikube version: v0.21.0
```

(SSH'd into Minikube):

```bash
$ cat /etc/os-release
NAME=Buildroot
VERSION=2017.02
ID=buildroot
VERSION_ID=2017.02
PRETTY_NAME="Buildroot 2017.02"
```
What do you get for these?
Note: due to a recent k8s 1.8 bug (https://github.com/kubernetes/kubernetes/issues/53309) I had to add "--validate=false" to the generate script's kubectl apply line. I've checked that into the GitHub project now.
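For anyone applying the workaround by hand, the changed line in generate.sh would look roughly like this (the resource filename is the one this demo uses; your copy may differ slightly):

```bash
# Workaround for https://github.com/kubernetes/kubernetes/issues/53309 on k8s 1.8:
# skip client-side schema validation when applying the resource definitions
kubectl apply -f ../resources/mongodb-service.yaml --validate=false
```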
Paul
Hi Paul
My kubernetes cluster is not a minikube one; I created it using kubeadm. Will that make any difference to creating the PV and PVC?
I guess the provisioner may differ for a CentOS bare-metal install of kubernetes.
I will try to look into this. As I remember, for the sidecar demo the PVC and PV also didn't get created, and I needed to comment out a few things there too.
It would be helpful if you could give some guidance for a kubernetes cluster installed on CentOS with kubeadm.
Thanks and Regards Ganesh
Hi Paul
I have the same problem as kumarganesh2814. I created my kubernetes cluster with ansible on ubuntu-server.
```bash
# kubectl get pv,pvc
NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS   REASON    AGE
pv/pvc-6948c479-e16f-11e7-ad69-08002741a89a   2Gi        RWO            Delete           Bound     default/html-web-0   nfs-storage              8h
pv/pvc-6ff04fd2-e178-11e7-ad69-08002741a89a   2Gi        RWO            Delete           Bound     default/html-web-1   nfs-storage              7h

NAME                                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/html-web-0                                  Bound     pvc-6948c479-e16f-11e7-ad69-08002741a89a   2Gi        RWO            nfs-storage    8h
pvc/html-web-1                                  Bound     pvc-6ff04fd2-e178-11e7-ad69-08002741a89a   2Gi        RWO            nfs-storage    7h
pvc/mongodb-persistent-storage-claim-mongod-0   Pending                                                                        standard       22m
```
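A quick way to see why a claim is stuck in Pending is to describe it and list the available storage classes; the events usually point straight at the missing provisioner (a diagnostic sketch, not captured output from my cluster):

```bash
# The Events section explains why the claim cannot bind
$ kubectl describe pvc mongodb-persistent-storage-claim-mongod-0

# Check whether a "standard" StorageClass exists at all
$ kubectl get storageclass
```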
BTW, I've tried the following scripts and they work perfectly: "Deploying a MongoDB Replica Set to the Google Kubernetes Engine (GKE)" and "Deploying a MongoDB Sharded Cluster to the Google Kubernetes Engine (GKE)".
Thanks and Regards Owen
I resolved this problem, and now the mongodb cluster is running well:
```bash
$ kubectl get pod | grep mongod
mongod-0   1/1       Running   0          1h
mongod-1   1/1       Running   0          1h
mongod-2   1/1       Running   0          1h
```
by doing the following (for Ubuntu 16.04):

Install nfs-server on the kubernetes master:

```bash
$ apt-get -y install rpcbind
$ apt-get -y install nfs-kernel-server
```

Install nfs-client on all kubernetes nodes (e.g., 192.168.99.123, 192.168.99.124):

```bash
$ apt-get -y install nfs-common
```

Set up the nfs-server on the kubernetes master:

```bash
# Create a shared folder
$ mkdir /opt/nfsdata

# It's OK if this file doesn't exist yet
$ vi /etc/exports

# Add a line. Replace 192.168.99.0/24 with the subnet of your kubernetes master
/opt/nfsdata 192.168.99.0/24(rw,sync,no_root_squash)

$ systemctl enable rpcbind.service
$ systemctl enable nfs-server.service
$ systemctl start rpcbind.service
$ systemctl start nfs-server.service
```
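At this point it's worth verifying that the export is actually visible before wiring it into kubernetes (a quick sanity check; 192.168.99.100 is the example master IP used below):

```bash
# Re-read /etc/exports and confirm the share is published (run on the master)
$ exportfs -ra
$ showmount -e 192.168.99.100

# Optionally test-mount the share from a worker node
$ mount -t nfs 192.168.99.100:/opt/nfsdata /mnt && umount /mnt
```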
create rbac.yaml
```bash
$ cat > rbac.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
```
create nfs-deployment.yaml
```bash
$ cat > nfs-deployment.yaml <<EOF
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: jicki/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.99.100
            - name: NFS_PATH
              value: /opt/nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.99.100
            path: /opt/nfsdata
EOF
```
```bash
$ kubectl apply -f rbac.yaml
$ kubectl apply -f nfs-deployment.yaml
```
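Before creating the StorageClass, it's worth confirming the provisioner pod actually came up (a quick sanity check; the pod name suffix will differ on your cluster):

```bash
# The provisioner must be Running before it can serve PVCs
$ kubectl get pods -l app=nfs-client-provisioner
```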
create nfs-storageclass.yaml
```bash
$ cat > nfs-storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs # must match the PROVISIONER_NAME env of the nfs-client-provisioner deployment
EOF
```
create nfs storageclass

```bash
$ kubectl apply -f nfs-storageclass.yaml
```

check it

```bash
$ kubectl get storageclass
NAME          PROVISIONER
nfs-storage   fuseim.pri/ifs
```
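A variation worth noting (not what I did below): if you instead delete the explicit storage-class annotation from the demo's claim template, you can mark nfs-storage as the cluster default so unannotated claims land on it (on older clusters the annotation is storageclass.beta.kubernetes.io/is-default-class):

```bash
# Make nfs-storage the default StorageClass so claims that do not
# name a class are provisioned by it
$ kubectl patch storageclass nfs-storage \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```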
1. modify mongodb-service.yaml to use the new storage class (see the sketch below for where the annotation sits):

```bash
$ cd /path/to/minikube-mongodb-demo
$ cd resources
$ vi mongodb-service.yaml
```

change the storage-class annotation to:

```yaml
volume.beta.kubernetes.io/storage-class: "nfs-storage"
```
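For context, this annotation sits in the StatefulSet's volumeClaimTemplates; the surrounding block should look roughly like this (field values assumed from the PVC names and 1Gi capacity shown earlier, not copied from the file):

```yaml
volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      annotations:
        # point the claims at the NFS-backed class instead of minikube's "standard"
        volume.beta.kubernetes.io/storage-class: "nfs-storage"
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```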
2. re-create:

```bash
$ cd /path/to/minikube-mongodb-demo
$ cd scripts
$ ./teardown.sh
$ ./generate.sh
```
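After regeneration, the claims should bind against nfs-storage within a minute or so (expected state, not captured output):

```bash
# All three claims should show Bound once the provisioner has created the PVs
$ kubectl get pvc
$ kubectl get pod | grep mongod
```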
@hiowenluke Cool solution. I haven't tried it yet, but I will try this and share the results.
Thanks for sharing the solution.
Best Regards Ganesh Kumar
Closing, as this issue is not related to minikube and hence not related to this project specifically (although the responses contain good general information for k8s & mongodb, which can still be searched for and viewed once this issue is marked as closed).
Hi,
I am trying this demo for a Prod POC, and after running "generate.sh" I see the pod in a Pending state. When I do a kubectl describe on the pod, I see an issue like the one below.
I had a similar issue while deploying the sidecar mongo demo (sorry for giving a reference to another project). There I created the PV and PVC manually and then created the cluster, which worked OK but not up to expectation.
So do I need to do the same here?
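(For reference, "creating the PV manually" looks roughly like the hostPath sketch below; the name, path, and size are illustrative, not taken from either demo:)

```yaml
# A hand-made PV that a pending 1Gi RWO claim could bind to
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-mongod-pv-0        # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard      # must match the class the claim asks for
  hostPath:
    path: /mnt/data/mongod-0      # illustrative host directory
```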
Is this provisioner included by default with kubernetes, or do we need to get it from somewhere? The error is: "ProvisioningFailed storageclass.storage.k8s.io "standard" not found"
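For reference, the storage classes a cluster provides can be listed with:

```bash
# minikube ships a "standard" class by default; a fresh kubeadm
# cluster typically has none, so claims for "standard" stay Pending
$ kubectl get storageclass
```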
It's a 3-Master, 3-Minion HA cluster.
Please advise.
Best Regards Ganesh