Closed: attardi closed this issue 6 years ago.
I fixed it by commenting out the lines starting with description, like:
description: Kubernetes Native Serverless Framework
in:
https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-v1.0.0-alpha.7.yaml
It seems that the format of CRDs has changed for K8s 1.11. We'll need to adapt the CRDs for that.
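In the meantime, a possible workaround (not an official fix) is either to strip the offending description lines before applying the manifest or to disable client-side validation and let the API server handle the unknown field. A rough sketch, assuming the description fields are the only lines starting with "description:" in the file:

# Option 1: drop the description lines on the fly
curl -sL https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-v1.0.0-alpha.7.yaml \
  | sed '/^description:/d' \
  | kubectl create -f - --namespace kubeless

# Option 2: skip client-side validation (the server typically just ignores the unknown field)
kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-v1.0.0-alpha.7.yaml \
  --namespace kubeless --validate=false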
@attardi after removing the description, are you able to deploy a sample function by executing the following?
kubectl apply -f - << EOF
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  labels:
    created-by: kubeless
    function: get-python
  name: get-python
  namespace: default
spec:
  checksum: sha256:d251999dcbfdeccec385606fd0aec385b214cfc74ede8b6c9e47af71728f6e9a
  deps: ""
  function: |
    def foo(event, context):
        return "hello world"
  function-content-type: text
  handler: helloget.foo
  runtime: python2.7
  service:
    ports:
    - name: http-function-port
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      created-by: kubeless
      function: get-python
    type: ClusterIP
  timeout: "180"
EOF
I know you are having trouble using the kubeless binary (#882), which is why I suggest using kubectl directly. After executing the above you should be able to see the pod running in the default namespace.
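For example, something along these lines should show the Function object and its pod (the label selector is an assumption based on the labels in the manifest above):

kubectl get functions -n default
kubectl get pods -n default -l function=get-python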
error: unable to recognize "STDIN": no matches for kind "Function" in version "kubeless.io/v1beta1"
@rmros how did you get that error? Are you able to install the Kubeless manifest? You should be able to check whether the Function CRD is installed with kubectl get crd.
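For reference, on a working installation that command should list the Kubeless CRDs, along the lines of the following (exact names can vary between releases):

kubectl get crd
# expected entries include:
#   cronjobtriggers.kubeless.io
#   functions.kubeless.io
#   httptriggers.kubeless.io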
Hi @andresmgot, I solved that error by removing the description lines inside kubeless-v1.0.0-alpha.7.yaml. Thank you for maintaining Kubeless :) 👍
Hi! @andresmgot, I got a similar error when deploying Kafka and Zookeeper with Kubeless.
kubectl create -f kafka-zookeeper-v1.0.0-alpha.9.yaml
deployment.apps/kafka-trigger-controller created
service/kafka created
statefulset.apps/zoo created
service/zookeeper created
service/zoo created
clusterrole.rbac.authorization.k8s.io/kafka-controller-deployer created
clusterrolebinding.rbac.authorization.k8s.io/kafka-controller-deployer created
error: error validating "kafka-zookeeper-v1.0.0-alpha.9.yaml": error validating data: ValidationError(CustomResourceDefinition): unknown field "description" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinition; if you choose to ignore these errors, turn validation off with --validate=false
I have already created a PV to match the PVC for the Kafka StatefulSet:
cat pv-volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: myclaim
  labels:
    release: "stable"
spec:
  storageClassName: slow
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
[root@localhost ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
myclaim 8Gi RWO Retain Released default/myclaim slow 1h
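One detail that stands out in that output: the PV is Released and still carries the old claimRef to default/myclaim, while the Kafka StatefulSet creates its own PVC (datadir-kafka-0) in the kubeless namespace, and a Released PV will not be re-bound until that claimRef is cleared. A hedged way to inspect it and, if the data is disposable, reset it:

kubectl describe pv myclaim
# if the old claim no longer exists and the data can be discarded,
# clear the stale claimRef so the PV becomes Available again:
kubectl patch pv myclaim -p '{"spec":{"claimRef":null}}'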
What should I do? In the case above I was using v1.0.0-alpha.7; I upgraded Kubeless to the newest version, v1.0.0-alpha.8, and still got this error:
[root@localhost ~]# kubectl create -f kafka-zookeeper-v1.0.0-alpha.9.yaml
deployment.apps/kafka-trigger-controller created
service/kafka created
statefulset.apps/zoo created
service/zookeeper created
service/zoo created
clusterrole.rbac.authorization.k8s.io/kafka-controller-deployer created
clusterrolebinding.rbac.authorization.k8s.io/kafka-controller-deployer created
error: error validating "kafka-zookeeper-v1.0.0-alpha.9.yaml": error validating data: ValidationError(CustomResourceDefinition): unknown field "description" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinition; if you choose to ignore these errors, turn validation off with --validate=false
[root@localhost ~]# kubeless version
Kubeless version: v1.0.0-alpha.8
Finally, I commented out the "description" lines in kafka-zookeeper-v1.0.0-alpha.9.yaml and that solved the problem, but then I ran into another one: when I create a topic, it reports a bad status error.
kubeless topic create test-topic
FATA[0000] websocket.Dial wss://10.90.94.211:6443/api/v1/namespaces/kubeless/pods/kafka-0/exec?command=bash&command=%2Fopt%2Fbitnami%2Fkafka%2Fbin%2Fkafka-topics.sh&command=--zookeeper&command=zookeeper.kubeless%3A2181&command=--replication-factor&command=1&command=--partitions&command=1&command=--create&command=--topic&command=test-topic&container=broker&stderr=true&stdout=true: bad status
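As the URL above shows, kubeless topic create just execs kafka-topics.sh inside the kafka-0 pod, so once the pod is actually running the same command can be issued directly with kubectl to get a clearer error message:

kubectl exec -n kubeless kafka-0 -c broker -- \
  /opt/bitnami/kafka/bin/kafka-topics.sh \
  --zookeeper zookeeper.kubeless:2181 \
  --replication-factor 1 --partitions 1 \
  --create --topic test-topic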
I checked the pod status and found that two pods are always in the Pending state. I also checked the PV, PVC, StatefulSet and Service, and nothing seems wrong according to your Kubeless PubSub doc.
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
myclaim 8Gi RWO Retain Bound default/myclaim slow 10m
[root@localhost ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim Bound myclaim 8Gi RWO slow 9m
[root@localhost ~]# kubectl -n kubeless get statefulset
NAME DESIRED CURRENT AGE
kafka 1 1 17m
zoo 1 1 17m
[root@localhost ~]# kubectl -n kubeless get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
broker ClusterIP None <none> 9092/TCP 17m
kafka ClusterIP 10.105.85.226 <none> 9092/TCP 17m
ui NodePort 10.102.153.189 <none> 3000:31630/TCP 2h
zoo ClusterIP None <none> 9092/TCP,3888/TCP 17m
zookeeper ClusterIP 10.107.7.139 <none> 2181/TCP 17m
Then, what should I do? I finally figured it out:
[root@localhost ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
datadir-kafka-0 8Gi RWO Retain Bound kubeless/zookeeper-zoo-0 slow 11h
myclaim 8Gi RWO Retain Bound kubeless/myclaim slow 11h
zookeeper-zoo-0 8Gi RWO Retain Bound kubeless/datadir-kafka-0 slow 11h
[root@localhost ~]# kubectl get pvc -n kubeless
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-kafka-0 Bound zookeeper-zoo-0 8Gi RWO slow 11h
myclaim Bound myclaim 8Gi RWO slow 11h
zookeeper-zoo-0 Bound datadir-kafka-0 8Gi RWO slow 11h
But my pod is always in "CrashLoopBackOff" status and has restarted more than 190 times in 11 hours:
[root@localhost ~]# kubectl get pods -n kubeless
NAME READY STATUS RESTARTS AGE
kafka-0 0/1 CrashLoopBackOff 193 11h
kafka-trigger-controller-757688d57c-xm72m 1/1 Running 0 11h
kubeless-controller-manager-66868fb689-76d9v 3/3 Running 0 15h
ui-6d868664c5-hmtjb 2/2 Running 0 15h
zoo-0 1/1 Running 0 11h
And here is the describe output for the pod kafka-0:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 30m (x186 over 11h) kubelet, localhost.localdomain Container image "bitnami/kafka:1.1.0-r0" already present on machine
Warning BackOff 5m (x2258 over 11h) kubelet, localhost.localdomain Back-off restarting failed container
Warning Unhealthy 43s (x587 over 11h) kubelet, localhost.localdomain Liveness probe failed: dial tcp 192.168.0.111:9092: connect: connection refused
Hi @laurencechan, regarding the "description" issue, I just released version v1.0.0-beta.0 of the Kafka manifest, which should solve it.
Apart from that, to find out what is happening with Kafka, can you retrieve the logs of the pods to discover the cause of the issue? If it is somehow related to data corruption, you can try to uninstall and reinstall the kafka manifest.
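For a pod stuck in CrashLoopBackOff, the logs of the previous (crashed) container attempt plus the pod events usually reveal the cause, for example:

kubectl logs -n kubeless kafka-0 --previous
kubectl describe pod -n kubeless kafka-0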
Hi~ @andresmgot, thanks very much for your feedback! I have solved the problem. Here is the kafka-0 pod log:
ERROR There was an error in one of the threads during logs loading: kafka.common.KafkaException: Found directory /bitnami/kafka/data/conf, 'conf' is not in the form of topic-partition or topic-partition.uniqueId-delete (if marked for deletion).
Kafka's log directories (and children) should only contain Kafka topic data. (kafka.log.LogManager)
[2018-09-13 11:17:01,300] ERROR [KafkaServer id=1249] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.KafkaException: Found directory /bitnami/kafka/data/conf, 'conf' is not in the form of topic-partition or topic-partition.uniqueId-delete (if marked for deletion).
Kafka's log directories (and children) should only contain Kafka topic data.
It seems I had bound the wrong PV to the PVC, or rather I had used the same host mount path for both. I corrected that and it works now~
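For anyone hitting the same thing, a minimal sketch of what the corrected setup could look like: two separate hostPath PVs with distinct paths on the host (names, sizes and paths below are illustrative; Kubernetes matches each PVC to a PV by capacity, access mode and storage class, not by name):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: kafka-data
spec:
  storageClassName: slow
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/kafka-data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: zookeeper-data
spec:
  storageClassName: slow
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/zookeeper-data"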
Cool, I am glad you solved it!
Hi @andresmgot, I've tried the little demo of the Kafka trigger from your PubSub events doc. It worked well, just as the screenshot above showed. But as you can see, the function is always a message consumer. Is it possible for the function to be a message producer? What I mean is this: I publish a message to a particular topic, that topic triggers a function, and I want that function to publish another message to a (different) topic and trigger another function. Is that possible, and could you show me a little demo or something like that?
That's doable. All the functions have network access to the Kafka brokers, so it's just a matter of establishing the connection and sending messages. Unfortunately we don't have any public function with an example of that. You can also check this issue for more context: https://github.com/kubeless/kubeless/issues/88
If you manage to put together a working example, I would appreciate it if you submitted it to https://github.com/kubeless/functions so others can see it as well :).
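Not an official example, just a rough sketch of what a producer function could look like using the kafka-python package (it would have to be added to the function's deps); the broker address kafka.kubeless:9092 matches the service shown above, while the handler name and target topic are made up:

# hypothetical kubeless handler that republishes the incoming event to a second topic
from kafka import KafkaProducer

# kafka.kubeless:9092 is the in-cluster Kafka service created by the kubeless Kafka manifest
producer = KafkaProducer(bootstrap_servers='kafka.kubeless:9092')

def handler(event, context):
    # event['data'] carries the payload delivered by the Kafka trigger
    payload = str(event['data']).encode('utf-8')
    # publishing here lets a trigger on 'other-topic' fire a second function
    producer.send('other-topic', payload)
    producer.flush()
    return "forwarded"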
Hi~ @andresmgot, I've tried publishing a message while a function was being called, but how would the message be consumed automatically by the functions corresponding to that topic?
Hi, the messages should be received by the kafka-controller and redirected to the function, so you will need to create a KafkaTrigger for the second function as well.
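For reference, a sketch of what that second trigger could look like as a manifest (names are placeholders; double-check the fields against the kafkatriggers.kubeless.io CRD installed in your cluster):

apiVersion: kubeless.io/v1beta1
kind: KafkaTrigger
metadata:
  name: other-topic-trigger
  namespace: default
spec:
  topic: other-topic
  functionSelector:
    matchLabels:
      created-by: kubeless
      function: second-function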
Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT
What happened:
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-v1.0.0-alpha.7.yaml --namespace kubeless
error validating "https://github.com/kubeless/kubeless/releases/download/v1.0.0-alpha.7/kubeless-v1.0.0-alpha.7.yaml": error validating data: ValidationError(CustomResourceDefinition): unknown field "description" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinition; if you choose to ignore these errors, turn validation off with --validate=false
What you expected to happen: No error
How to reproduce it (as minimally and precisely as possible): See above.
Anything else we need to know?:
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
kubeless version:
Kubeless version: v1.0.0-alpha.7