Lukas8342 opened this issue 11 months ago
Same here on Nomad. The csi-plugin also logs: `panic: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined`
On image chrislusf/seaweedfs-csi-driver:v1.1.8 it works fine :) It looks like this change caused the problem: https://github.com/seaweedfs/seaweedfs-csi-driver/commit/785e69a08ef47eab94742b040870ec0716f20f13
Can confirm that the latest image is broken for me on Nomad, with the error messages referencing Kubernetes, and that using v1.1.8 as @worotyns suggested works.
@duanhongyi please take a look here.
@chrislusf It seems to be incompatible with Nomad. The KUBERNETES_SERVICE_HOST env var does not exist in Nomad.
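For context, the error text in the panic comes from client-go's in-cluster config loader, which requires those env vars. A minimal standalone sketch (not the driver's actual code) of how that check behaves outside Kubernetes:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig reads KUBERNETES_SERVICE_HOST and
	// KUBERNETES_SERVICE_PORT. Outside Kubernetes (e.g. under Nomad)
	// they are unset, so it returns an error instead of a config;
	// a caller that treats that error as fatal panics like above.
	if _, err := rest.InClusterConfig(); err != nil {
		fmt.Println("not in Kubernetes:", err)
		return
	}
	fmt.Println("in-cluster config loaded")
}
```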
Let me take a look in the next few days.
Still broken in the latest version.
I think this commit has completely broken this CSI driver: https://github.com/seaweedfs/seaweedfs-csi-driver/commit/785e69a08ef47eab94742b040870ec0716f20f13#diff-d7f330f6d6efcabc25613925c10237045948e05bc020c7ecf16c3b331e371e62
Send a PR to revert this change?
@chrislusf
I think we can degrade gracefully, i.e. treat Nomad CSI as not supporting limited capacity.
Is this feasible? This modification is the simplest, and I currently do not have a Nomad cluster to experiment with.
The pseudocode is as follows; the key part is the maxVolumeSize variable:
```go
import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GetVolumeCapacity returns the capacity of the PersistentVolume backing
// volumeId. If no in-cluster Kubernetes config is available (e.g. when
// running under Nomad), it falls back to maxVolumeSize.
func GetVolumeCapacity(volumeId string) (int64, error) {
	client, err := NewInCluster()
	if err != nil {
		// Not running inside Kubernetes; assume the maximum capacity.
		return maxVolumeSize, nil
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	volume, err := client.CoreV1().PersistentVolumes().Get(ctx, volumeId, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	capacity, _ := volume.Spec.Capacity.Storage().AsInt64()
	return capacity, nil
}
```
I have looked at Nomad's API and it is not the standard K8s API, so the simplest fix is to skip fetching the capacity of a Nomad PVC and directly return the maximum value.
https://developer.hashicorp.com/nomad/api-docs/volumes
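For illustration only, this is roughly what reading a volume's capacity from Nomad's HTTP API could look like. The `/v1/volume/csi/:volume_id` endpoint comes from the docs above, but the `Capacity` field name, the default address, and the error handling are assumptions I have not verified against a real cluster:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// getNomadVolumeCapacity is a hypothetical sketch of querying Nomad for a
// CSI volume's capacity; endpoint per the linked docs, field name assumed.
func getNomadVolumeCapacity(nomadAddr, volumeID string) (int64, error) {
	resp, err := http.Get(fmt.Sprintf("%s/v1/volume/csi/%s", nomadAddr, volumeID))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var vol struct {
		Capacity int64 // assumed field name in the volume read response
	}
	if err := json.NewDecoder(resp.Body).Decode(&vol); err != nil {
		return 0, err
	}
	return vol.Capacity, nil
}

func main() {
	capacity, err := getNomadVolumeCapacity("http://127.0.0.1:4646", "example-volume-id")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("capacity:", capacity)
}
```

This is just to show that it would require a separate, Nomad-specific client path; returning maxVolumeSize as above avoids that entirely.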
If this is feasible, I will submit a PR tomorrow.
Hello,
I'm encountering an issue and I'm unsure whether it stems from SeaweedFS, the SeaweedFS CSI driver, or HashiCorp Nomad. I'm reaching out here as a starting point, hoping for guidance, as my troubleshooting options are running thin. In my current setup, I have one master, one filer, and one volume server, all running on the same machine with these configurations:
(bash configuration snippet omitted)
When utilizing the CSI with the following Nomad job:
(Nomad job specification in HCL omitted)
It initially appears to work, but upon running jobs with different images, I consistently encounter a "Transport endpoint is not connected" error.
The filer logs display the following when starting a job and mounting it to a volume:
(filer log output omitted)
Nomad volume mounting is done as follows:
(HCL volume mount configuration omitted)
I appreciate any insights or guidance you can provide to help resolve this issue.
Thank you.