
Error in pending PVCs "storageclass.storage.k8s.io "default" not found" - NO PV created #42


Maya-kassis commented 2 years ago

Hello,

I am trying to deploy the cloud variant of mora on a bare-metal k8s cluster:

helm install vp-cloud -f variants/values.cloud.yaml --generate-name --disable-openapi-validation
Error: INSTALLATION FAILED: persistentvolumeclaims already exists

The problem is with the PV and PVCs. I followed the solution here, but it didn't help: https://github.com/mora-resource-allocation-edge-cloud/mora/issues/41#issuecomment-1050217819

I see that no PV is created, but the PVCs are created and stuck in Pending; their description says storageclass.storage.k8s.io "default" not found.
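
For reference, a quick way to verify which storage class (if any) is marked as default in the cluster (it carries a (default) marker next to its name):

kubectl get storageclass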

I deployed the file-system StorageClass manually; below is its YAML:

apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"example-nfs"},"parameters":{"path":"/share","readOnly":"false","server":"nfs-server.example.com"},"provisioner":"example.com/external-nfs"}
      storageclass.kubernetes.io/is-default-class: "true"
    creationTimestamp: "2022-03-11T10:28:06Z"
    managedFields:
    - apiVersion: storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
            f:storageclass.kubernetes.io/is-default-class: {}
        f:parameters:
          .: {}
          f:path: {}
          f:readOnly: {}
          f:server: {}
        f:provisioner: {}
        f:reclaimPolicy: {}
        f:volumeBindingMode: {}
      manager: kubectl
      operation: Update
      time: "2022-03-11T13:07:46Z"
    name: example-nfs
    resourceVersion: "18801069"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/example-nfs
    uid: d95e0584-79f9-40d5-a324-88050a72f8ee
  parameters:
    path: /share
    readOnly: "false"
    server: nfs-server.example.com
  provisioner: example.com/external-nfs
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
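
For readability, the same StorageClass with the server-generated fields (managedFields, creationTimestamp, resourceVersion, selfLink, uid, and the last-applied-configuration annotation) stripped away reduces to:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/external-nfs
parameters:
  path: /share
  readOnly: "false"
  server: nfs-server.example.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
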
aleskandro commented 2 years ago

Hello @Maya-kassis, since the PVCs request the "default" storage class while your provisioner's storage class is example-nfs, the PVCs stay in Pending, waiting for a PV provisioned by a storage class that is not available.

Could you try overriding the default storageClassName value with:

storageClassName: example-nfs

This has to be done in your custom values.yaml file.
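
Assuming the chart reads the top-level storageClassName value (it appears as such in the values.yaml pasted below; this is my reading, not confirmed in the thread), the same override could presumably also be passed on the command line:

helm install vp-cloud -f variants/values.cloud.yaml \
  --set storageClassName=example-nfs \
  --generate-name --disable-openapi-validation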

Maya-kassis commented 2 years ago

I altered the values.yaml and the same error occurred:

Error: INSTALLATION FAILED: persistentvolumeclaims "videoserver-videofiles" already exists

NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-data             Pending                                      example-nfs    15s
videoserver-logs         Pending                                      example-nfs    15s
videoserver-video        Pending                                      example-nfs    15s
videoserver-videofiles   Pending                                      example-nfs    15s

aidyf2n@aidyf2n2:~/mora/deployment$ kubectl get pv
No resources found in default namespace.

Here is the values.yaml:


# resources limits and requests for the deployments
resources:
  vms:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 300m
      memory: 512Mi
  vps:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 300m
      memory: 512Mi
  mongodb:
    limits:
      cpu: 1200m
      memory: 512Mi
    requests:
      cpu: 1000m
      memory: 512Mi
  apigateway:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 300m
      memory: 512Mi
  loadbalancer:
    limits:
      cpu: 900m
      memory: 2Gi
    requests:
      cpu: 800m
      memory: 1Gi
  kafka:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 300m
      memory: 512Mi
  zookeeper:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 300m
      memory: 512Mi
routes:
  # This is the domain that exposes the Cloud variant VMS service
  # If you're deploying a non-cloud variant (variantType > -1), it is also the domain to which a client is redirected
  # if the Edge deployment cannot serve a user's request
  cloudURL: 'cloud-vms-1.master.particles.dieei.unict.it'
  # This is the domain on which to expose the main route used by the clients
  # (actually this would be achieved by a location-transparency DNS configuration)
  edgeURL: 'edge-vp-1.master.particles.dieei.unict.it'
  # NOT RECOMMENDED: when a DNS server is not available to resolve the CloudURL,
  #  one can choose to provide the IP at which the CloudURL should resolve.
  # This is provided to the micro-services that need to communicate with the cloud service
  # when a variant other than the cloud one is deployed.
  # In order to enable it, you also have to set noDNSServerForCloud to false
  CloudIP: ''

# Set this to false if you want to use CloudIP above
# TODO remove me in favor of len(cloudIP) != 0
noDNSServerForCloud: false

# Note: the following two values cannot both be set to true
# If you are deploying on OpenShift, set isOpenShift to true and isMinikube to false
# If you are deploying on a generic k8s cluster, set both to false (dynamic provisioning should be used; have a
#  look at the storageClassName to use)
# If you are deploying on Minikube, set isOpenshift to true and isMinikube to true
# TODO validation

# If you are deploying on OpenShift, keep this true (Note, it's a string)
isOpenShift: false
# If you are deploying on Minikube, set this true
isMinikube: false

# storageClassName for unict okd deployment: glusterfs-storage
# Storage Class Name to use for the persistent volume claims
#storageClassName: default
storageClassName: example-nfs
# Settings for MongoDB
mongodb:
  replicas: 1 # Not used. Keep replicas at 1
  username: root
  password: toor
  authenticationDatabase: admin
  databaseName: video-server
  serviceName: mongodb
  # The following values are used if variantType is different than -1 (edge variants)
  videoCollectionSize: 10000
  videoCollectionMaxDocs: 10

vms:
  replicas: 1
  # Let the micro-services know whether they are executing the Edge variant (false) or not (TODO: remove in favor
  # of variantType == -1)
  isCloud: "true"
  # (enabled if isCloud === 'false')
  # Set the limits of the capped collection (i.e., the maximum number of videos stored at
  # the Edge, leveraging an LRU cache retention policy)
  maxVideo: 10
  variantType: "-1" # TODO use an integer
  needKafkaBeans: "true" # Set it "false" (string) if VariantType == 0
  # TODO make needKafkaBeans boolean

# VariantType:

# -1: Cloud Variant
# 0: Cache Variant
# 1: Offline-encoding variant
# 2: Online-encoding variant

vps:
  replicas: 1

lb:
  replicas: 1
  # Sets the maximum number of concurrent users an Edge Deployment can serve
  maxConcurrentUsers: 100

apigateway:
  replicas: 1

zookeeper:
  replicas: 1

kafka:
  replicas: 1

# The scheme at which the system will have to reply
#  (http:// or https:// if you configure SSL, currently not supported)
scheme: "http://"

services:
  apiGateway:
    name: api-gateway
    port: 8081

# Images url for the containers
images:
  edgeLb: docker.io/aleskandro/video-server:edge-lb
  cloudGateway: docker.io/aleskandro/video-server:cloud-gateway
  cloudVms: docker.io/aleskandro/video-server:cloud-vms3
  cloudVps: docker.io/aleskandro/video-server:cloud-vps
  mongoDb: docker.io/bitnami/mongodb:4.4
  kafka: wurstmeister/kafka:2.11-2.0.0
  zookeeper: library/zookeeper:3.4.13
  # This image is used by default for mongoDb if isOpenShift is set to true
  openShiftMongoDb: docker-registry.default.svc:5000/openshift/mongodb

aleskandro commented 2 years ago

Can you try to remove the PVCs manually and re-run helm?
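
For example, something like this (a sketch; PVC names taken from the kubectl get pvc output above, assuming the default namespace shown there):

kubectl delete pvc mongodb-data videoserver-logs videoserver-video videoserver-videofiles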

Maya-kassis commented 2 years ago

I tried what you recommended; the PVCs are still in Pending state:

aidyf2n@aidyf2n2:~/mora/deployment$ helm install vp-cloud -f variants/values.cloud.yaml --generate-name --disable-openapi-validation
Error: INSTALLATION FAILED: persistentvolumeclaims "videoserver-videofiles" already exists

aidyf2n@aidyf2n2:~/mora/deployment$ kubectl get pv
No resources found in default namespace.

aidyf2n@aidyf2n2:~/mora/deployment$ kubectl get pvc
NAME                     STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-data             Pending   foo-pv    0                         example-nfs    15s
videoserver-logs         Pending   foo-pv3   0                         example-nfs    15s
videoserver-video        Pending   foo-pv2   0                         example-nfs    15s
videoserver-videofiles   Pending   foo-pv4   0                         example-nfs    15s

aidyf2n@aidyf2n2:~/mora/deployment$ kubectl describe pvc mongodb-data
Name:          mongodb-data
Namespace:     default
StorageClass:  example-nfs
Status:        Pending
Volume:        foo-pv
Labels:        app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: vp-cloud-1647858074
               meta.helm.sh/release-namespace: default
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      0
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:        <none>

Would you please provide the YAML file of the StorageClass you created?
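
One further check that may be relevant here, though the thread does not confirm it: example.com/external-nfs is the placeholder provisioner name from the Kubernetes documentation, and a PV is only provisioned dynamically if an external provisioner registered under that exact name is actually running in the cluster. A rough way to look for one:

kubectl get pods --all-namespaces | grep -i nfs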