Closed: YYyp99 closed this issue 8 months ago.

When I deploy in this way, `dify-api`, `dify-worker`, `postgresql-primary`, `postgresql-read`, `redis-master`, and `redis-replicas` are all in the Pending state. The error message displays: `0/3 nodes are available: 3 node(s) didn't find available persistent volumes to bind. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.` All PVCs are also in the Pending state.
Please provide the reason your pod got stuck in the Pending state (`kubectl describe po`) and the status of your PV/PVC (`kubectl describe pv`, `kubectl describe pvc`). It's likely that your node doesn't meet the requirements to create the persistent volume as your `values.yaml` defines.
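For reference, a minimal diagnostic sequence along those lines might look like this (the namespace, pod, claim, and volume names are placeholders, not values from this issue):

```bash
# List pods and find the one stuck in Pending
kubectl get pods -n <your-namespace>

# Inspect the scheduling events at the bottom of the output
kubectl describe po <pending-pod-name> -n <your-namespace>

# Check whether the PVCs are Bound or Pending
kubectl get pvc -n <your-namespace>
kubectl describe pvc <pvc-name> -n <your-namespace>

# PersistentVolumes are cluster-scoped
kubectl get pv
kubectl describe pv <pv-name>
```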
I created a new PV, but it seems like it's not being utilized.
There's no need to create `pv` or `pvc` manually, as they are all handled by this helm chart by design. What is the status of the `PersistentVolumes` created by this helm chart, per `kubectl describe pv -n <your-namespace>`? What are the versions of the cluster and `helm`?
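A quick way to gather that information (a sketch; the commands are standard but the output to share depends on your setup):

```bash
# Status of every PersistentVolume and which claim, if any, bound it
kubectl get pv
kubectl describe pv

# Cluster and client versions
kubectl version
helm version
```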
This is a PV I created myself; the Helm chart did not create a PV.

The version of the cluster:

The version of helm:
@YYyp99 So you've manually created a `PersistentVolume` named `dify-postgresql-primary` in namespace `default`, which is claimed by `redis-data-my-release-redis-master-0`. This suggests that:

- The dynamic provisioning feature works fine, which rules out the possible causes `bitnami` described here. (Components `redis` and `postgresql` are defined as dependencies of bitnami helm charts.)
- bitnami/redis failed to create a `PersistentVolume` as expected with your setup, and its `PVC` claims your manually created `PV` that was meant for `postgresql`, with storage class `NFS`, which is not ideal for `redis`. You may need to specify the storage class explicitly to prevent such cases (e.g. `.Values.redis.master.persistence.storageClass`, `.Values.redis.replica.persistence.storageClass`, `api.persistence.persistentVolumeClaim.storageClass`); see the sketch after this reply.

We may need a copy of your `values.yaml` (without actual credential data) and your startup command to identify the cause of two issues:

- no `PersistentVolume` was created by this chart or its `bitnami` dependency
- a `PVC` claims an unexpected `PV` (wrong `PV` instance + wrong storage class)

PS: If you want to create a `PV` manually for `redis` or `postgresql`, please look into `existingClaim` in `.Values.yaml`.
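For illustration, a hedged `values.yaml` sketch pinning the storage classes named above. The `nfs-client` class name is a placeholder for whatever StorageClass actually exists in your cluster, and the `existingClaim` value is hypothetical:

```yaml
redis:
  master:
    persistence:
      storageClass: "nfs-client"   # placeholder StorageClass name
  replica:
    persistence:
      storageClass: "nfs-client"

api:
  persistence:
    persistentVolumeClaim:
      storageClass: "nfs-client"

# Alternatively, if you do want to manage the PV/PVC yourself, point the
# chart at a pre-created claim instead (claim name is hypothetical):
postgresql:
  primary:
    persistence:
      existingClaim: "my-postgresql-pvc"
```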
I successfully created PVs and PVCs on my own, and all pods are in the Running state. However, when I try to use the web service and execute `kubectl port-forward service/my-release-dify-web 8888:3000`, I cannot access it in the browser. When I use curl on the server, I encounter an error.
@YYyp99 AFAIK there's no such `service` forwarding mechanism; one could only forward connections to a specific `pod`. The last few lines from `stdout` upon `helm install` should demonstrate how one would achieve it.

We still need a copy of `values.yaml` in order to replicate this issue, or we shall conclude that it's not a bug but a user-end configuration error.
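As a sketch of what those post-install notes typically suggest (the label selector and pod name here are assumptions about this chart, not verified against it):

```bash
# Find the web pod created by the chart (label value is an assumption)
kubectl get pods -n <your-namespace> -l app.kubernetes.io/component=web

# Forward a local port directly to that pod
kubectl port-forward <web-pod-name> 8888:3000 -n <your-namespace>

# Then test locally
curl http://127.0.0.1:8888
```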
Thank you very much. The version of dify you provided is 0.4.9. Is there a Helm Chart available for the latest version of dify, version 0.5.7?
What's new in `0.5.7` that could not be set up with this version of the helm charts by changing the image in `.Values.yaml`?
I haven't used it yet. What I mean is: if I want to use version 0.5.7, do I just need to change the version of the image in `values.yaml`, with the rest of the configuration left unchanged?
This chart allows you to substitute images with the config defined in `.Values`, in case you are in an environment without access to Docker Hub and need to pull images from another registry. We haven't tested this chart with 0.5.x yet, but we haven't seen any breaking changes in the official `dify` release notes. You may give it a try in a non-production environment first.
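A hedged sketch of what such an override could look like, assuming the chart exposes per-component `image` blocks; the exact keys and repository names are assumptions to verify against this chart's `values.yaml`:

```yaml
# Assumed layout: one image block per component (verify against the chart)
image:
  api:
    repository: langgenius/dify-api   # assumed repository name
    tag: "0.5.7"
  web:
    repository: langgenius/dify-web   # assumed repository name
    tag: "0.5.7"
```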
@YYyp99 Since no `.Values.yaml` was provided and we could not replicate this issue, we shall conclude that it's a user-end configuration error. Closing this issue.