iron-io / functions

IronFunctions - the serverless microservices platform by https://iron.io
Apache License 2.0

Deploying the kubernetes-production stack: function pod can't connect to PostgreSQL, with errors. #667

Open sss0350 opened 6 years ago

sss0350 commented 6 years ago

Hello, I've successfully run the kubernetes-quick stack on OpenShift Origin on CentOS, and I'm now trying to deploy your kubernetes-production stack on OpenShift Origin: https://github.com/iron-io/functions/tree/master/docs/operating/kubernetes/kubernetes-production

Here are the problems I've encountered so far. Since I don't have existing Redis and PostgreSQL instances, I'm following your guide (and setting a few things OpenShift requires). I can successfully run the Redis and PostgreSQL pods, but the function pod doesn't seem to be able to connect to the Postgres DB. Is there any extra initial setup that needs to be done on PostgreSQL or Redis first?

Here are the default configmap settings; do I need to change them?

    MQ_URL: redis://redis-master.default
    DB_URL: postgres://postgres:mysecretpassword@postgresql-master.default/?sslmode=disable
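One way to rule out basic networking or credential problems is to try the same DB_URL from a throwaway psql pod inside the cluster (a minimal sketch; the postgres image tag is an assumption):

    # Run a one-off psql client in the cluster and try the same connection string the
    # function pod uses; a row with "1" back means Postgres is reachable with these credentials.
    kubectl run pg-check --rm -it --restart=Never --image=postgres:9.6 -- \
      psql "postgres://postgres:mysecretpassword@postgresql-master.default/?sslmode=disable" -c 'SELECT 1;'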

Do I need to create the DB schema or anything else myself, or change this DB_URL setting? If so, can you provide more detailed steps, or any suggestions?

Thank you.

c0ze commented 6 years ago

@sss0350 sorry for the late response! Unfortunately, we don't have an exact solution to your problem this time, as OpenShift is something we haven't really worked with. What we can say is that you don't have to create any schema for Postgres. If there is any progress on this issue, we would appreciate it if you could keep us informed!

c0ze commented 6 years ago

just one thing, can you try with

 export IRON_FUNCTION=$(kubectl get -o json svc functions | jq -r '.status.loadBalancer.ingress[0].ip'):8080

instead of

 export IRON_FUNCTION=$(kubectl get -o json svc functions | jq -r '.status.loadBalancer.ingress[0].hostname'):8080

(change hostname to ip)
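If you're not sure which field your load balancer actually populates, you can inspect the status directly (a small sketch, using the same service name as above):

    # Show the raw ingress entry; some platforms set only "ip", others only "hostname".
    kubectl get -o json svc functions | jq '.status.loadBalancer.ingress'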

sss0350 commented 6 years ago

The kubernetes-quick stack on OpenShift Origin can't really persist the Bolt DB data after a pod restart, so I tried mounting app/data on an OS persistent volume (like NAS), which should be accessible across cluster nodes. It works when I start only one iron-function pod, but it fails when I try to create two pods at the same time: one pod fails to start and shows "Error on bolt.Open". I think it might be some kind of lock when another pod tries to access the Bolt DB. Is this a bug? I think iron needs some way to persist data in a cluster environment (with 2 or more running pods, the data needs to be synced).
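For context, the mount described above looks roughly like this (a sketch only; the deployment name, claim name and mount path are assumptions based on the app/data location mentioned above):

    # Attach an existing PVC to the functions deployment and mount it where Bolt keeps its file.
    kubectl patch deployment functions --type=json -p '[
      {"op": "add", "path": "/spec/template/spec/volumes",
       "value": [{"name": "data", "persistentVolumeClaim": {"claimName": "functions-data"}}]},
      {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts",
       "value": [{"name": "data", "mountPath": "/app/data"}]}
    ]'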

c0ze commented 6 years ago

Bolt is a file-storage DB, and AFAIK it does not allow more than one process to access a single file. Bolt is meant for quickly testing / evaluating in a dev environment; it isn't really intended for a production / staging deployment. So we would recommend configuring one of the supported DBs.
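In practice that means either keeping a single functions replica while it is backed by Bolt, or pointing DB_URL at a networked database such as the Postgres URL above (a rough sketch, assuming the deployment is named functions):

    # With Bolt, only one pod can hold the file lock at a time, so keep a single replica:
    kubectl scale deployment functions --replicas=1

    # To run more than one replica, switch DB_URL in the configmap to a networked
    # database (e.g. the Postgres URL discussed above) and restart the pods.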

sss0350 commented 6 years ago

Thanks. Per your suggestion, I tried to deploy the kubernetes-production stack as follows:

1. Change the configmap values to MQ_URL: redis://redis-master and DB_URL: postgres://postgres:mysecretpassword@postgresql-master/?sslmode=disable
2. Add some OpenShift settings: edit the yaml files (add a service account), add the OpenShift privilege settings to the service account, and finally mount persistent volumes so the DB storage survives restarts.
3. Add a route for the functions service, so the function's exposed URL stays the same every time.

I can now successfully run all 3 pods in my project (functions, Redis and PostgreSQL), and deploy functions on it as well. A rough sketch of the OpenShift-specific commands is below.
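For anyone following along, the OpenShift-specific steps look roughly like this (a sketch, not the exact commands used; the service account and resource names are assumptions):

    # Create a service account for the functions pods and grant it the SCC it needs
    # (anyuid is a common requirement on OpenShift); the deployment yaml then
    # references it via serviceAccountName.
    oc create serviceaccount functions
    oc adm policy add-scc-to-user anyuid -z functions

    # Expose the functions service through a route so the URL stays static.
    oc expose service functions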