OkunaOrg / okuna-api

🤖 The Okuna Social Network API
https://okuna.io
MIT License

Deployment infrastructure and benchmarking #659

Open ehsanmqn opened 4 years ago

ehsanmqn commented 4 years ago

Hi, I've just deployed the Okuna server locally on a physical server. I intend to deploy it on the cloud, and two questions popped up in my mind about infrastructure and benchmarking. 1) What is the minimum CPU and RAM needed to run Okuna properly, and how many users would be supported with that configuration? 2) Have you previously performed any benchmarking tests (such as load and stress testing) on the Okuna server? Is there any report on this?

Thank you, Ehsan

lifenautjoe commented 4 years ago

Hi Ehsan, we haven't run any benchmarking tests, but our current setup is 2 load-balanced servers with 4 CPUs and 2 GB of RAM each, plus load-balanced DB instances with 1 GB of RAM and 2 CPUs each. We're sustaining about 6,000 users with this at the moment, and I'm confident we could even downscale resources.
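For illustration, a setup like the one described above (two application servers behind a balancer) could be sketched with an nginx `upstream` block. This is a minimal sketch, not Okuna's actual configuration; the hostnames, ports, and server name are placeholders:

```nginx
# Hypothetical load-balancer config; IPs, ports, and domain are placeholders.
upstream okuna_api {
    # Two application servers; nginx round-robins between them by default.
    server 10.0.0.11:8000;
    server 10.0.0.12:8000;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://okuna_api;
        # Preserve the original host and client IP for the Django app.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Other balancers (HAProxy, a cloud provider's managed load balancer) would serve the same role with equivalent settings.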

How are you deploying the API? With the okuna-cli? If so, keep in mind that it uses the Django test server, which is slower and possibly not as secure as running it behind a uWSGI server.
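For reference, a minimal uWSGI configuration for serving a Django project might look like the following. This is a sketch under assumptions: the module path and the worker counts are placeholders, not values taken from the Okuna repo, and should be tuned to your project and hardware:

```ini
; Hypothetical uwsgi.ini for a Django app; module name and worker
; counts are placeholders.
[uwsgi]
module = yourproject.wsgi:application   ; assumed WSGI module path
http = :8000
master = true
processes = 4        ; roughly one per CPU core
threads = 2
vacuum = true        ; clean up sockets/pidfiles on exit
die-on-term = true   ; respond to SIGTERM by shutting down cleanly
```

In production, uWSGI is typically run behind nginx via a unix socket (`socket = /tmp/app.sock`) rather than serving HTTP directly.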

ehsanmqn commented 4 years ago

Dear @lifenautjoe, thank you for your response; it is useful for my work. For what it's worth, I deployed Okuna with uWSGI. Could you explain the load-balancing architecture you use for this purpose, i.e. what approach you took to load balancing? If I want to deploy with an architecture like yours, what should I do and what should I use?

Apologies for all the questions.