
[BUG] disk.csi.akshci.com is creating a new .vhdx disk, pv & pvc for every cluster redeployment. #195

Closed gittihub123 closed 2 years ago

gittihub123 commented 2 years ago

Describe the bug
I'm using the disk.csi.akshci.com plugin and want to use local storage for our cluster. Every time I redeploy my AKS cluster, I get a new .vhdx disk, PV, and PVC. I want to be able to take down my cluster, redeploy it, and keep the same data on my pods.

To Reproduce
Steps to reproduce the behavior:

Create a new AksHciStorageContainer:

```powershell
New-AksHciStorageContainer -Name "master" -Path "C:\ClusterStorage\xxxxx\xxxxxx\data\master"
```

Create a new StorageClass (I also create one each for my master and cold storage):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-es-hot
provisioner: disk.csi.akshci.com
parameters:
  blocksize: "33554432"
  container: hot
  dynamic: "true"
  group: clustergroup
  hostname: ca-xxxxx
  logicalsectorsize: "4096"
  physicalsectorsize: "4096"
  port: "55000"
  fsType: ext4
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```
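Worth noting: with `reclaimPolicy: Retain`, deleting a PVC (for example, when the cluster is torn down) leaves the PV in a `Released` state, and a freshly provisioned PVC will not bind to it automatically — the CSI driver provisions a new .vhdx instead. A generic Kubernetes way to re-attach a retained volume is to pin the new PVC to the old PV via `volumeName` (a sketch; the PV name here is hypothetical, and the retained PV's `spec.claimRef` must be cleared before it can be re-bound):

```yaml
# Hypothetical PVC that binds to an existing, retained PV instead of
# triggering dynamic provisioning. "pvc-old-volume" stands in for the
# name of the retained PV (see `kubectl get pv`).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-reuse
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: sc-es-hot
  volumeName: pvc-old-volume   # pins this claim to the retained PV
  resources:
    requests:
      storage: 40Gi
```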

Create my deployment (part of it):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: esdev01t
spec:
  version: 8.3.0
  nodeSets:
  - name: master
    count: 3
    config:
      node.roles: ["master"]
    podTemplate:
      metadata:
        labels:
          node: master
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 4Gi
              cpu: 0.5
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi
        storageClassName: sc-es-master
  - name: hot
    count: 1
    config:
      node.roles: ["data_warm"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 4Gi
              cpu: 0.5
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi
        storageClassName: sc-es-hot
  - name: cold
    count: 1
    config:
      node.roles: ["data_cold"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 4Gi
              cpu: 0.5
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 40Gi
        storageClassName: sc-es-cold
  http:
    service:
      spec:
        type: LoadBalancer
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: esdev01t
spec:
  version: 8.3.0
  count: 1
  elasticsearchRef:
    name: esdev01t
    namespace: default
  http:
    service:
      spec:
        type: LoadBalancer
```

Screenshots: (images attached to the original issue, not reproduced here)

Expected behavior
I want my pods to reuse the existing PVs after a redeployment.

Environment (please complete the following information):

Questions:

  1. Is this the correct way to deploy my cluster with local storage?
  2. If not, how should I implement my cluster with local storage on this setup?

Thank you.

gittihub123 commented 2 years ago

Resolved the issue using this guide: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html. The problem was related to Elasticsearch (ECK), not AKS Arc, and is now resolved!
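For readers hitting the same issue: the linked ECK documentation describes `spec.volumeClaimDeletePolicy`, which controls whether the operator deletes the data PVCs when the Elasticsearch resource itself is removed. A minimal sketch, assuming an ECK operator version recent enough to support this field:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: esdev01t
spec:
  version: 8.3.0
  # Keep PVCs when the Elasticsearch resource is deleted; only remove
  # them when a nodeSet is scaled down.
  volumeClaimDeletePolicy: DeleteOnScaledownOnly
```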