rancher-sandbox / rancher-desktop

Container Management and Kubernetes on the Desktop
https://rancherdesktop.io
Apache License 2.0

volumeMode: Filesystem didn't show me an error in terminal #1728

Open hermesalvesbr opened 2 years ago

hermesalvesbr commented 2 years ago

Actual Behavior

---
apiVersion: v1
kind: Service
metadata:
  name: directus-service
  namespace: rancherstudy
  labels:
    app: directus-service
spec:
  type: NodePort
  ports:
    - port: 8055
  selector:
    app: directus-rancherstudy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: directus-rancherstudy
  namespace: rancherstudy
  labels:
    app: directus-rancherstudy
spec:
  replicas: 1
  # strategy:
    # type: Recreate
  selector:
    matchLabels:
      app: directus-rancherstudy
  template:
    metadata:
      labels:
        app: directus-rancherstudy
    spec:
      containers:
      - image: directus/directus
        name: directus
        env:
        - name: DB_PASSWORD
          value: testhere
        ports:
        - containerPort: 8055
          name: directus
        volumeMounts:
        - name: directus-vol
          mountPath: /directus
      volumes:
        - name: directus-vol
          persistentVolumeClaim:
            claimName: directus-pvc

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: directus-pvc
spec:
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: directus-scaling
  namespace: rancherstudy
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: directus-rancherstudy
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 95

Steps to Reproduce

I'm studying volumes on my Linux machine running Manjaro. I tried to persist a volume; the terminal doesn't show any error, but it doesn't work.

Can I use volumeMode: Filesystem with Rancher Desktop? Is it a bug or am I using it incorrectly?
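For anyone reproducing this, the claim status and any provisioning events can be inspected with the commands below (names taken from the manifest above; since the PersistentVolumeClaim has no namespace set, kubectl apply creates it in the current default namespace):

 kubectl describe pvc directus-pvc
 kubectl get events --sort-by=.metadata.creationTimestamp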

Result

 kubectl apply -f directus-deployment.yml
service/directus-service created
deployment.apps/directus-rancherstudy created
persistentvolumeclaim/directus-pvc created
horizontalpodautoscaler.autoscaling/directus-scaling created

Expected Behavior

 kubectl get pvc --sort-by=.spec.capacity.storage
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
directus-pvc   Pending                                      local-path     59s
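The STORAGECLASS column shows local-path; as far as I understand, the local-path StorageClass that k3s (and therefore Rancher Desktop) ships uses volumeBindingMode: WaitForFirstConsumer, so the claim would stay Pending until a pod that mounts it is scheduled. The binding mode can be checked with:

 kubectl get storageclass local-path -o jsonpath='{.volumeBindingMode}'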

Additional Information

No response

Rancher Desktop Version

1.0.1

Rancher Desktop K8s Version

1.22.6

Which container runtime are you using?

containerd (nerdctl)

What operating system are you using?

Other (specify below)

Operating System / Build Version

Linux myagon 5.15.25-1-MANJARO #1 SMP PREEMPT Wed Feb 23 14:44:03 UTC 2022 x86_64 GNU/Linux

What CPU architecture are you using?

x64

Linux only: what package format did you use to install Rancher Desktop?

No response

Windows User Only

No response


(Edited by @mook-as to fix formatting.)

ericpromislow commented 2 years ago

See https://rancher-users.slack.com/archives/C0200L1N1MM/p1646671873180999 for discussion. Since the RD Slack discussion thread won't be retained indefinitely, here's the discussion:

Joël:

Does someone know if it's possible to change "volumeBindingMode: WaitForFirstConsumer" to "volumeBindingMode: Immediate" for the local-path storage class? By the way: if I re-create the "local-path" storage class manually using "Immediate", the local-path provisioner complains: type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "local-path": configuration error, no node was specified. The reason I need this is that another process creates the PVC in advance using ArgoCD, and ArgoCD cannot finish the rollout while the PVC stays in status "Pending".

Jan:

Please raise an issue against rancher/local-path-provisioner ("Dynamically provisioning persistent local storage with Kubernetes").

This is mostly an upstream issue (https://github.com/rancher/local-path-provisioner). As said in the Slack thread, please create an issue there and reference it here.
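For context, the change Joël describes amounts to recreating the local-path StorageClass with a different binding mode, roughly like the sketch below (illustrative only, field values are assumptions; as he notes, the provisioner then fails with "no node was specified", presumably because it relies on WaitForFirstConsumer to learn which node to provision the hostPath volume on):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: Immediate   # the k3s default is WaitForFirstConsumer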