HannanSolo closed this issue 6 years ago.
@hakuch ^^
At the time of writing this sample, StatefulSets were in beta.
According to the documentation for StatefulSets, as of release 1.9 they are stable.
If you are running Kubernetes at release 1.9 or greater, I believe you can change apiVersion in scylla-statefulset.yaml to apiVersion: 1.
Does that resolve the error?
@hakuch I am actually trying this on GKE, which has a kube version of 1.8.6. So I changed the apiVersion to v1 just for the fun of it and got:
error: error validating "scylla-statefulset.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"StatefulSet"}; if you choose to ignore these errors, turn validation off with --validate=false
And I'm not sure where to go from there. Also, thank you so much for the help; I'm a big fan of your work, and the keynotes you did helped a ton.
@ThePixelBro22, I'm glad to help and appreciate your kind words.
I gave you incorrect advice: try apiVersion: v1.
If that doesn't work, I'll try to spin up a test cluster to try to figure this out definitively.
@hakuch
I get this error
error: error validating "scylla-statefulset.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"StatefulSet"}; if you choose to ignore these errors, turn validation off with --validate=false
And this is my YAML:
apiVersion: v1
kind: StatefulSet
metadata:
  name: scylla
  labels:
    app: scylla
spec:
  serviceName: scylla
  replicas: 3
  selector:
    matchLabels:
      app: scylla
  template:
    metadata:
      labels:
        app: scylla
    spec:
      containers:
        - name: scylla
          image: scylladb/scylla:2.0.0
          imagePullPolicy: IfNotPresent
          args: ["--seeds", "scylla-0.scylla.default.svc.cluster.local"]
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "PID=$(pidof scylla) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - exec
                - /opt/ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          volumeMounts:
            - name: scylla-data
              mountPath: /var/lib/scylla
            - name: scylla-ready-probe
              mountPath: /opt/ready-probe.sh
              subPath: ready-probe.sh
      volumes:
        - name: scylla-ready-probe
          configMap:
            name: scylla
  volumeClaimTemplates:
    - metadata:
        name: scylla-data
        annotations:
          volume.beta.kubernetes.io/storage-class: scylla-ssd
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
And I'm using kube 1.8.6.
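One way to see which API groups and versions a cluster actually serves, and therefore which apiVersion a StatefulSet manifest can use, is to list them with kubectl:
$ kubectl api-versions | grep apps
On a 1.8 cluster this typically includes apps/v1beta1 and apps/v1beta2; StatefulSet is not part of the bare v1 group, which is what the validation error above is complaining about.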
Based on https://github.com/kubernetes/kubernetes/issues/37393, perhaps you could try apiVersion: apps/v1beta1?
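For illustration, only the manifest's header needs to change for that suggestion; a minimal sketch of the top of scylla-statefulset.yaml would then look like:
apiVersion: apps/v1beta1  # StatefulSet lives in the apps group on 1.8; apps/v1 once the cluster is on 1.9+
kind: StatefulSet
metadata:
  name: scylla
  labels:
    app: scylla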
I'm not familiar with Google Kubernetes Engine. Another possibility is that there is a mismatch between the version of kubectl that you're running locally and the version of Kubernetes that is running on GKE.
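A quick way to check for that kind of skew is to compare the client and server versions that kubectl reports:
$ kubectl version
The output includes a Client Version and a Server Version line; a large difference between the two can produce exactly this sort of validation error.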
I forgot to mention: I set up a Kubernetes cluster on Google Compute Engine using this guide and specifically installed Kubernetes version 1.8.6 through
$ curl -sS https://get.k8s.io > install.sh
$ chmod +x install.sh
$ KUBERNETES_RELEASE="v1.8.6" ./install.sh
I had no issue creating the StatefulSet using the YAML file as it appears in the repository today.
OK, I'll try it out. I'm using Google Kubernetes Engine on Google Cloud Platform; it's just a much simpler way of creating and managing clusters.
It worked, but now I get an error on the StatefulSet in Kubernetes.
PersistentVolumeClaim is not bound: "scylla-data-scylla-1" (repeated 3 times)
EDIT: Seems that it works after waiting a bit :)
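That delay is expected: each replica's entry under volumeClaimTemplates requests a PersistentVolumeClaim from the scylla-ssd storage class, and dynamically provisioning the backing disks takes a little while. Had a claim stayed Pending, the usual next step would be to inspect it:
$ kubectl get pvc
$ kubectl describe pvc scylla-data-scylla-1
For the scylla-ssd class to exist on GCP at all, a StorageClass along these lines has to be created first (a sketch assuming GCE PD-SSDs; the guide's own manifest may differ):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: scylla-ssd
provisioner: kubernetes.io/gce-pd  # dynamic GCE persistent disk provisioner
parameters:
  type: pd-ssd                     # SSD-backed persistent disks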
Glad to hear it! I think this issue can now be closed.
Argh, so close and yet so far. I'm getting these errors on each pod.
ERROR 2018-01-26 16:59:36,654 [shard 0] init - Bad configuration: invalid value in 'seeds': 'scylla-0.scylla.default.svc.cluster.local': std::system_error (error C-Ares:4, Not found)
ERROR 2018-01-26 16:59:36,654 [shard 0] seastar - Exiting on unhandled exception: bad_configuration_error (std::exception)
INFO 2018-01-26 16:59:36,654 [shard 0] compaction_manager - Asked to stop
INFO 2018-01-26 16:59:36,654 [shard 0] compaction_manager - Stopped
INFO 2018-01-26 16:59:36,655 [shard 1] compaction_manager - Asked to stop
INFO 2018-01-26 16:59:36,655 [shard 1] compaction_manager - Stopped
2018-01-26 16:59:36,669 INFO exited: scylla (exit status 1; not expected)
2018-01-26 16:59:37,671 INFO gave up: scylla entered FATAL state, too many start retries too quickly
Are you getting this as well? @hakuch
EDIT: So I played around a bit and got it to work just by deleting the seeds arg. Thanks for all the help. Now it's time to expose the service; any tips?
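For context on the seeds error: scylla-0.scylla.default.svc.cluster.local only resolves once a headless Service named scylla exists in the default namespace and the scylla-0 pod is registered under it (normally only after it passes its readiness checks, unless the Service publishes not-ready addresses). A minimal sketch of such a headless Service, assumed here rather than copied from the guide, looks like:
apiVersion: v1
kind: Service
metadata:
  name: scylla
  labels:
    app: scylla
spec:
  clusterIP: None   # headless: gives each pod a stable DNS name such as scylla-0.scylla.default.svc.cluster.local
  selector:
    app: scylla
  ports:
    - port: 9042
      name: cql
As for exposing CQL outside the cluster, a separate Service of type LoadBalancer (or a NodePort) targeting port 9042 is the usual approach on GKE.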
I want to get Scylla on my GCP Kubernetes cluster and am following this guide:
https://github.com/scylladb/scylla-code-samples/tree/master/kubernetes-scylla
but when I run this command
kubectl create -f scylla-statefulset.yaml
I get an error.