maloo opened this issue 2 years ago
We tried setting the permissions using an init container, but that just resulted in the following error instead:
2022-01-16 03:06:10.58 Server Registry startup parameters:
-d /var/opt/mssql/data/master.mdf
-l /var/opt/mssql/data/mastlog.ldf
-e /var/opt/mssql/log/errorlog
2022-01-16 03:06:10.58 Server Error: 17113, Severity: 16, State: 1.
2022-01-16 03:06:10.58 Server Error 87(The parameter is incorrect.) occurred while opening file '/var/opt/mssql/data/master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
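(For context: an init container for this purpose usually looks roughly like the sketch below. This is an assumption on my part — the original post does not show the exact one used — and the volume name sqlserver-volume is hypothetical.)

initContainers:
- name: set-mssql-permissions
  image: busybox
  # chown the data directory to the mssql user/group (10001) before the main container starts
  command: ["sh", "-c", "chown -R 10001:10001 /var/opt/mssql"]
  volumeMounts:
  - name: sqlserver-volume
    mountPath: /var/opt/mssql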
I got the same error message using the Docker image on a Synology NAS. Solution: the database folder is located outside, e.g. /volume1/docker/mssql, and is mounted inside the container at /var/opt/mssql. mssql tries to create the necessary folders in this path and fails with this error when the permissions are not correct.
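A hedged sketch of that fix on the NAS (the path is the example from the comment above; 10001 is the UID/GID the mssql user runs as inside the image):

# run on the NAS host before starting the container
sudo chown -R 10001:10001 /volume1/docker/mssql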
@maloo I had this problem with docker-compose, but when I mounted a blank volume, it went through with the mssql user and proper permissions. Are you by any chance mounting an existing volume with different permissions?
I'm mounting a persistent volume claim. Rights are set in the YAML to 1001, the user used by SQL. I solved this by mounting mssql/data instead.
Adding this works for me:
securityContext:
  fsGroup: 10001
this works for me:
chown 10001:10001 <mount folder>
You'd need to downgrade the SQL Server version to 2017 to avoid that error. Use the following command to create a zonal cluster in GKE Standard; replace the environment variables and hard-coded values:
gcloud beta container --project "$PROJECT_ID" clusters create "mssql" --zone "us-central1-f" \
--no-enable-basic-auth --cluster-version "1.23.8-gke.400" --release-channel "rapid" --machine-type "t2d-standard-1" \
--image-type "COS_CONTAINERD" --disk-type "pd-ssd" --disk-size "50" --metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--max-pods-per-node "110" --spot --num-nodes "4" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --enable-private-nodes \
--master-ipv4-cidr "172.16.254.240/28" --enable-master-global-access --enable-ip-alias --network \
"projects/$VPC_PROJECT_ID/global/networks/hil-test" \
--subnetwork "projects/$VPC_PROJECT_ID/regions/us-central1/subnetworks/hil-test" \
--cluster-secondary-range-name "gke-pod-range-0" --services-secondary-range-name "gke-service-range-0" \
--no-enable-intra-node-visibility --default-max-pods-per-node "110" --enable-dataplane-v2 \
--no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
--enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 \
--workload-pool "$PROJECT_ID.svc.id.goog" --enable-shielded-nodes \
--security-group "gke-security-groups@$YOUR_DOMAIN.com" --node-locations "us-central1-f"
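After the cluster is created, fetch credentials so kubectl targets it (a sketch reusing the names from the command above):

gcloud container clusters get-credentials mssql --zone us-central1-f --project "$PROJECT_ID"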
Avoid creating a regional cluster or a GKE Autopilot cluster; otherwise persistent volume claims may fail to bind, because the node may not be in the zones specified by the StorageClass. Apply the following storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: repd-us-central1-f
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: none
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-f
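Save the manifest above to a file and apply it (the filename here is hypothetical):

kubectl apply -f storageclass.yaml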
Follow the instructions to create the SA password; a sketch of creating the secret is shown below.
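The secret name and key below match what the deployment expects; the password value is a placeholder you must choose yourself:

kubectl create secret generic sqlserver-secret \
  --from-literal=SA_PASSWORD='<your-strong-password>'

Then create the deployment, PVC, and service: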
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqlserver-volume-19gi
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 19Gi
  storageClassName: repd-us-central1-f
---
# sqlserver deployment and service: sqlserver-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqlserver-deployment-19gi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ms-sqlserver-19gi
  template:
    metadata:
      labels:
        app: ms-sqlserver-19gi
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: ms-sqlserver-19gi
        image: mcr.microsoft.com/mssql/server:2017-latest
        resources:
          limits:
            cpu: "0.5"
            ephemeral-storage: 2Gi
            memory: 1Gi
          requests:
            cpu: "0.5"
            ephemeral-storage: 2Gi
            memory: 1Gi
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sqlserver-secret
              key: SA_PASSWORD
        volumeMounts:
        - name: sqlserver-volume
          mountPath: /var/opt/mssql
      volumes:
      - name: sqlserver-volume
        persistentVolumeClaim:
          claimName: sqlserver-volume-19gi
---
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-deployment-19gi
spec:
  selector:
    app: ms-sqlserver-19gi
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
  type: LoadBalancer
Finally, observe that the deployment comes up green (healthy).
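For example (a sketch using the names from the manifests above):

# wait for the rollout to finish, then check the pods and the service's external IP
kubectl rollout status deployment/sqlserver-deployment-19gi
kubectl get pods -l app=ms-sqlserver-19gi
kubectl get service sqlserver-deployment-19gi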
When running with podman on RHEL 9 with SELinux enabled, this is what fixed the problem for me:
chcon -t container_file_t /var/opt/mssql
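Alternatively (my assumption, not from the comment above), podman can relabel the volume itself via the :Z mount option:

podman run -e ACCEPT_EULA=Y -e SA_PASSWORD='<your-strong-password>' \
  -p 1433:1433 -v /var/opt/mssql:/var/opt/mssql:Z \
  mcr.microsoft.com/mssql/server:2017-latest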
@maloo Can you please provide an example of how you mount mssql/data with a persistent volume claim? Thanks.
@sanme98 You can use the configuration below:
volumeMounts:
- name: mssqldb
  mountPath: /mssql/data
instead of mountPath: /var/opt/mssql
Well, this avoids the issue indeed... but the mssql data doesn't end up in the local folder, which was the initial goal.
I am also having this issue. I do not get any of my mssql data in the mounted volume as it is all in /var/opt/mssql.
Would anyone have a solution?
This was the solution to my problem published in stackoverflow: https://stackoverflow.com/a/77808783/13176149
In my case the issue was the virtual disk limit; I had to go and increase that in Docker.
For me the situation was resolved only by setting runAsUser: 0 on the StatefulSet.
For a test environment, this works for me!
spec:
  template:
    spec:
      securityContext:
        fsGroup: 10001
        runAsUser: 0
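For what it's worth: fsGroup: 10001 tells the kubelet to make the mounted volume group-owned by GID 10001 (the mssql group in the image), while runAsUser: 0 starts the container as root — which is why this is only advisable for a test environment.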
We are trying to evaluate MSSQL and a few other databases for deployment and development in Kubernetes. To do this we started each DB in a Docker container, and all was fine (after setting local path group permissions to 10001). When we tried the same thing in Docker-Kubernetes we got the following error:
The stateful set looks like this:
The PVC sqlserver-pvc is of storageclass type hostpath, since that is the only one that exists in Docker-Kubernetes. But it does not seem to help to set fsGroup to 10001. We also tried pv.beta.kubernetes.io/gid: "10001". Is there any way to run MS SQL with a data mount in Docker-Kubernetes? Is there any option to have the data drive be initialized with valid permissions? Since we are not able to evaluate MS SQL Server in Docker-Kubernetes, will this work in Azure AKS, and if so, are there any example PV/PVC/STS YAML samples?