seaweedfs / seaweedfs-csi-driver

SeaweedFS CSI Driver https://github.com/seaweedfs/seaweedfs

Cannot install Helm bitnami/mysql to storage class seaweedfs-storage #89

Open andrey-stein opened 2 years ago

andrey-stein commented 2 years ago

Hello!

  1. I have 3 storage classes in a Rancher RKE k8s cluster:

    • local-path
    • longhorn
    • seaweedfs-storage (default)
  2. seaweedfs-storage works and has been tested:
    I've created containers following the examples and attached storage.

  3. The problem: helm install -n tests-hdd mysql bitnami/mysql
    installs well on local-path and longhorn,
    but freezes when installing on seaweedfs-storage

with the container logs saying:

mysql 14:47:36.98
mysql 14:47:36.98 Welcome to the Bitnami mysql container
mysql 14:47:36.99 Subscribe to project updates by watching https://github.com/bitnami/containers
mysql 14:47:36.99 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mysql 14:47:37.00
mysql 14:47:37.01 INFO  ==> ** Starting MySQL setup **
mysql 14:47:37.07 INFO  ==> Validating settings in MYSQL_*/MARIADB_* env vars
mysql 14:47:37.09 INFO  ==> Initializing mysql database
mysql 14:47:37.17 WARN  ==> The mysql configuration file '/opt/bitnami/mysql/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
mysql 14:47:37.21 INFO  ==> Installing database

Please give me some advice:
what would be the right workaround in such a situation?
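
For completeness, the installs I'm comparing boil down to something like the sketch below (global.storageClass is the standard Bitnami chart value; the release names other than "mysql" are just placeholders I made up):

# same chart, pinned explicitly to each StorageClass per test run
# (release names other than "mysql" are placeholders)
helm install -n tests-hdd mysql    bitnami/mysql --set global.storageClass=seaweedfs-storage
helm install -n tests-hdd mysql-lh bitnami/mysql --set global.storageClass=longhorn
helm install -n tests-hdd mysql-lp bitnami/mysql --set global.storageClass=local-path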

chrislusf commented 2 years ago

please try to simplify everything to reproduce the problem. Just use weed mount and try to install mysql there.
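
A rough sketch of that kind of test (the filer address, mount directory, and bucket path below are placeholders, and /bitnami/mysql is only my assumption about where the Bitnami image keeps its data):

# mount a SeaweedFS filer path locally over FUSE
# (replace <filer-host> with your filer address; any empty bucket path works)
weed mount -filer=<filer-host>:8888 -dir=/mnt/seaweedfs -filer.path=/buckets/mysql-test

# run the same Bitnami MySQL image against that mount, outside Kubernetes
docker run --rm -e ALLOW_EMPTY_PASSWORD=yes \
    -v /mnt/seaweedfs:/bitnami/mysql \
    bitnami/mysql:latest

If MySQL also hangs on the plain FUSE mount, the problem is in the mount itself rather than in the CSI driver.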

andrey-stein commented 2 years ago

> please try to simplify everything to reproduce the problem. Just use weed mount and try to install mysql there.

It's not so easy to install MySQL the way Bitnami does; sorry if I'm lacking some knowledge here.


Comparing the mount output for all 3 storage classes,

mount | grep /data

gives

/dev/sdb on /data type ext4 (rw,relatime) for local-path
/dev/longhorn/pvc-3d5613f3-f951-4da7-9395-dc142ad7de2a on /data type ext4 (rw,relatime) for longhorn

and for weed
172.29.0.6:8888:/buckets/pvc-c6a54aee-2dc0-4483-aed1-369f61b9957a on /data type fuse.seaweedfs (rw,relatime,user_id=0,group_id=0,allow_other)

It looks like the mount options for weed are different from local-path and longhorn:
while local-path and longhorn both have (rw,relatime),
weed has (rw,relatime,user_id=0,group_id=0,allow_other).

Is there a way to tell the weed CSI driver which mount options to use?

andrey-stein commented 2 years ago

kubectl edit sc seaweedfs-storage

I thought this would work, but the options are ignored:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: seaweedfs-csi-driver
    meta.helm.sh/release-namespace: kube-system
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2022-09-21T10:11:18Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: seaweedfs-storage
  resourceVersion: "6447995"
  uid: e568ae26-1be0-4817-a5e5-3cec32027387
mountOptions:
- rw,relatime
- user_id=1001,group_id=1001
provisioner: seaweedfs-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate

:construction: This part of the StorageClass is supposed to add the options:

mountOptions:
- rw,relatime
- user_id=1001,group_id=1001

Should the options be supplied in some different way?
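
One thing I still want to rule out: as far as I understand, mountOptions from a StorageClass are only copied onto a PersistentVolume when it is provisioned, so a PVC created before the edit keeps its old options (and whether the seaweedfs CSI driver honors them at all is a separate question). A quick check, using the PV name from the mount output above:

# show the mount options recorded on the already-provisioned PV
kubectl get pv pvc-c6a54aee-2dc0-4483-aed1-369f61b9957a -o jsonpath='{.spec.mountOptions}'

# after editing the StorageClass, recreate the release and its PVC so a fresh
# PV is provisioned (the label selector assumes the standard Bitnami labels)
helm uninstall -n tests-hdd mysql
kubectl delete pvc -n tests-hdd -l app.kubernetes.io/instance=mysql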

andrey-stein commented 2 years ago

Hello,

Looks like the problem is the number of volumes. I was starting the server like this:

/usr/local/bin/weed server -ip=172.29.0.6 -ip.bind=0.0.0.0 -dir=/var/lib/weed \
    -master.peers=172.29.0.6:9333,172.29.0.7:9333,172.29.0.8:9333 \
    -dataCenter=dc1 -rack=rack1 -volume.max=3 -volume.port=8086 -filer=true \
    -metricsPort=8088

I replaced
-volume.max=3
with
-volume.max=3000

and now things work.

# Current command
/usr/local/bin/weed server -ip=172.29.0.6 -ip.bind=0.0.0.0 -dir=/var/lib/weed \
    -master.peers=172.29.0.6:9333,172.29.0.7:9333,172.29.0.8:9333 \
    -dataCenter=dc1 -rack=rack1 -volume.max=3000 -volume.port=8086 -filer=true \
    -metricsPort=8088

I would like to kindly ask how I can estimate the number of files that could be stored with -volume.max=3000. Is it 3000 files?
Would there be a difference between
-volume.max=3000
and the default
-volume.max=8
for a 100GB SSD?

If not, then why was -volume.max=3 not working? I'm probably missing some conceptual detail.
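
In case it helps, my rough math so far, assuming I understand correctly that -volume.max caps the number of volume files (not the number of stored files) and that the master's default -volumeSizeLimitMB is 30000, i.e. about 30 GB per volume:

# rough raw-capacity ceiling per volume server
#   -volume.max=3    ->    3 x 30 GB ~=  90 GB
#   -volume.max=8    ->    8 x 30 GB ~= 240 GB
#   -volume.max=3000 -> 3000 x 30 GB, far beyond a 100 GB SSD,
#                       so free disk space becomes the real limit
#
# my guess for why 3 was not enough: each PVC is mounted as its own bucket
# (see the /buckets/pvc-... path above), and each bucket/collection seems to
# get its own set of volumes, so a small cap is exhausted after a few PVCs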