johnnypea opened this issue 2 years ago
@johnnypea thanks for creating this issue! Yes, it seems like this is not part of the latest spec in devspace; we'll fix that for the next release!
@FabianKramm thanks!
Please, could you give me a working example of how to use this to share a volume between pods? It is not very clear from the docs. Should "shared" be set for every pod which uses that shared volume?
@johnnypea As far as I see it in the chart's templates, the `shared: true` option creates a PersistentVolumeClaim separately, with the name `php-socket` as in your example. That means multiple deployments can mount this PVC. If you have `shared: false`, instead of creating a PVC directly, the chart will create a PVC template as part of the StatefulSet, which makes it very hard to mount it from 2 different deployments. Details in this commit: https://github.com/loft-sh/component-chart/commit/d8c5e1a422c35bde54e1ba09bf2852115b88ffc4#

EDIT: Yes, `shared: true` should be set for every deployment.
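
For illustration, a minimal component values sketch of what that means (names and sizes here are hypothetical, not from the chart itself):

```yaml
# Hypothetical minimal example: with `shared: true`, the component chart
# renders a standalone PersistentVolumeClaim named "php-socket" that other
# deployments can also mount. With `shared: false` (the default), the claim
# is rendered as a volumeClaimTemplate tied to this StatefulSet alone.
containers:
  - image: php
    volumeMounts:
      - containerPath: /var/run/php
        volume:
          name: php-socket
          shared: true
volumes:
  - name: php-socket
    size: 1Gi
```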
@LukasGentele @FabianKramm I finally got to test it on 5.18.3, but still no luck getting it to work...
```
00:05:32 [fatal] error deploying: error deploying pwa: Unable to deploy helm chart: error during 'helm upgrade pwa --namespace dev-intraspace --values /var/folders/mc/09gm4cz13bxgpj5nb8nkg_380000gn/T/348670964 --install component-chart --repo https://charts.devspace.sh --repository-config='' --version 0.8.4 --kube-context minikube': Release "pwa" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PersistentVolumeClaim "pwa-dist" in namespace "dev-intraspace" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "pwa": current value is "app"
=> exit status 1
```
```yaml
deployments:
  - name: app
    # This deployment uses `helm` but you can also define `kubectl` deployments or kustomizations
    helm:
      # We are deploying the so-called Component Chart: https://devspace.sh/component-chart/docs
      componentChart: true
      # Under `values` we can define the values for this Helm chart used during `helm install/upgrade`
      # You may also use `valuesFiles` to load values from files, e.g. valuesFiles: ["values.yaml"]
      values:
        containers:
          - name: caddy-container
            image: caddy
            volumeMounts:
              - containerPath: /srv/pwa/dist
                volume:
                  name: pwa-dist
                  readOnly: true
                  shared: true
        volumes:
          - name: pwa-dist
            size: 5Gi
  - name: pwa
    helm:
      componentChart: true
      values:
        containers:
          - image: pwa-prod
            volumeMounts:
              - containerPath: /usr/src/pwa/dist
                volume:
                  name: pwa-dist
                  shared: true
        volumes:
          - name: pwa-dist
            size: 5Gi
```
When I removed the volume definition I just got:
```
00:10:10 [fatal] error deploying: error deploying pwa: Unable to deploy helm chart: error during 'helm upgrade pwa --namespace dev-intraspace --values /var/folders/mc/09gm4cz13bxgpj5nb8nkg_380000gn/T/2846785660 --install component-chart --repo https://charts.devspace.sh --repository-config='' --version 0.8.4 --kube-context minikube': Release "pwa" does not exist. Installing it now.
Error: Deployment.apps "pwa" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "pwa-dist"
=> exit status 1
```
@johnnypea when you get the "resource already exists" error, can you do a `devspace purge` and then `devspace dev` again?
@FabianKramm I did `devspace purge --all` and `kubectl delete persistentvolumeclaims --all`, but still the same result. I tried deploying in a new namespace, but it didn't help.
Here is the pod yaml, in case it is any help. I can't see a "shared" definition there, is that correct?
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/chart: component-chart-0.8.4
  creationTimestamp: "2022-02-03T09:42:52Z"
  generateName: app-
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: devspace-app
    controller-revision-hash: app-596fd7dd97
    statefulset.kubernetes.io/pod-name: app-0
  name: app-0
  namespace: dev-intraspace
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: StatefulSet
      name: app
      uid: 0c90d19c-776b-47ff-971e-776a7ff55ccb
  resourceVersion: "220929"
  uid: a7037f7e-2fa4-41c0-ab90-5378420efdd0
spec:
  containers:
    - env:
        - name: DATABASE_URL
          value: postgresql://symfony:ChangeMe@database:5432/app?serverVersion=13
        - name: MERCURE_JWT_SECRET
          value: '!ChangeMe!'
        - name: MERCURE_PUBLIC_URL
          value: https://localhost/.well-known/mercure
        - name: MERCURE_URL
          value: http://caddy/.well-known/mercure
      image: php:aAflOfS
      imagePullPolicy: IfNotPresent
      name: php-container
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /var/run/php
          name: php-socket
        - mountPath: /usr/local/etc/php/conf.d/symfony.ini
          name: phpdevini-config
          readOnly: true
          subPath: symfony.ini
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-8vrbr
          readOnly: true
    - env:
        - name: MERCURE_PUBLISHER_JWT_KEY
          value: '!ChangeMe!'
        - name: MERCURE_SUBSCRIBER_JWT_KEY
          value: '!ChangeMe!'
        - name: SERVER_NAME
          value: localhost:8080, localhost:4443, caddy:80
      image: caddy:HdVoAwM
      imagePullPolicy: IfNotPresent
      name: caddy-container
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /var/run/php
          name: php-socket
          readOnly: true
        - mountPath: /srv/pwa/dist
          name: pwa-dist
          readOnly: true
        - mountPath: /data
          name: caddy-data
        - mountPath: /config
          name: caddy-config
        - mountPath: /etc/caddy/Caddyfile
          name: caddyfile-config
          readOnly: true
          subPath: Caddyfile
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-8vrbr
          readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: app-0
  imagePullSecrets:
    - name: devspace-auth-docker
  nodeName: minikube
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: app-headless
  terminationGracePeriodSeconds: 5
  tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
  volumes:
    - name: php-socket
      persistentVolumeClaim:
        claimName: php-socket-app-0
    - name: caddy-data
      persistentVolumeClaim:
        claimName: caddy-data-app-0
    - name: caddy-config
      persistentVolumeClaim:
        claimName: caddy-config-app-0
    - configMap:
        defaultMode: 420
        name: caddyfile-config
      name: caddyfile-config
    - configMap:
        defaultMode: 420
        name: phpdevini-config
      name: phpdevini-config
    - name: pwa-dist
      persistentVolumeClaim:
        claimName: pwa-dist
    - name: kube-api-access-8vrbr
      projected:
        defaultMode: 420
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
                - key: ca.crt
                  path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
                - fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
                  path: namespace
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-02-03T09:42:53Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2022-02-03T09:42:55Z"
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-02-03T09:42:55Z"
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2022-02-03T09:42:53Z"
      status: "True"
      type: PodScheduled
  containerStatuses:
    - containerID: docker://db6c1c3976ac622e083a2707f4f04b332eb1280eeb30f271704ab77d8231bde5
      image: caddy:EOwtPGO
      imageID: docker://sha256:42f108cc22d108667641908aad865e3050f23815e69c662340f44739752df607
      lastState: {}
      name: caddy-container
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-02-03T09:42:55Z"
    - containerID: docker://42de142e91e5445511d1ef1b45cbe8dbb88eea2c913207483f1699bae5a2b69d
      image: php:aAflOfS
      imageID: docker://sha256:e9838b262e9c90f3662a1688f4ea7dd35a2bd5b76a82462d402a61a60e129ab8
      lastState: {}
      name: php-container
      ready: true
      restartCount: 0
      started: true
      state:
        running:
          startedAt: "2022-02-03T09:42:55Z"
  hostIP: 192.168.49.2
  phase: Running
  podIP: 172.17.0.3
  podIPs:
    - ip: 172.17.0.3
  qosClass: BestEffort
  startTime: "2022-02-03T09:42:53Z"
```
@johnnypea thanks for the info, can you share your devspace.yaml that fails during `devspace dev`? Do you have multiple deployments that somehow reference the same PVC?
@LukasGentele Yes, I do. I sent it in my previous comment. Do you need any more details?
@johnnypea thanks for the info, I guess that's currently a limitation of the component chart, as it tries to create 2 PVCs here. You are probably better off creating your own custom helm chart that does this the way you want, with a single PVC and multiple deployments.
@FabianKramm would it be possible to use it like this?
```yaml
deployments:
  - name: pwa-dist-pvc
    kubectl:
      manifests:
        - pwa/deploy/pwa-dist-pvc.yaml
  - name: pwa
    helm:
      componentChart: true
      values:
        containers:
          - image: pwa-prod
            volumeMounts:
              - containerPath: /usr/src/pwa/dist
                volume:
                  name: pwa-dist
```
`pwa/deploy/pwa-dist-pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pwa-dist
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Getting this error:
```
Error: Deployment.apps "pwa" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "pwa-dist"
```
@johnnypea no, the problem is that the PVC already exists, and you need to separate the deployments so that one deployment doesn't create the persistent volume claim:
https://devspace.sh/component-chart/docs/configuration/containers#volumeshared
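
For anyone hitting the same wall: a plain-Kubernetes sketch of the pattern suggested above, with the PVC created exactly once and mounted by a separate Deployment. The claim name and mount path come from this thread; the Deployment spec itself is illustrative, not a confirmed fix:

```yaml
# PVC created once (e.g. via a `kubectl` deployment in devspace.yaml).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pwa-dist
spec:
  accessModes:
    - ReadWriteMany  # needed if the mounting pods can land on different nodes;
                     # ReadWriteOnce only works when they share a node
  resources:
    requests:
      storage: 5Gi
---
# One of the workloads mounting the existing claim; other Deployments can
# reference the same claimName without creating it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pwa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pwa
  template:
    metadata:
      labels:
        app: pwa
    spec:
      containers:
        - name: pwa
          image: pwa-prod
          volumeMounts:
            - mountPath: /usr/src/pwa/dist
              name: pwa-dist
      volumes:
        - name: pwa-dist
          persistentVolumeClaim:
            claimName: pwa-dist  # references the PVC, does not own it
```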