maelstrom-cms / odin

An open-source domain monitoring tool built using Maelstrom: Uptime Robot + Oh Dear + SSL Labs + Cronitor + DNS Spy
Mozilla Public License 2.0
460 stars 92 forks

Question: Is there a guide on how big the memory/processor/AWS instance should be? #43

Closed warrenca closed 4 years ago

warrenca commented 4 years ago

Hi,

Is there any available information about the memory/processor specs, or even the AWS instance type, needed to run monitoring for hundreds of endpoints?

I want to try this with at least 100 website endpoints to check, and I am wondering what resources this application needs for efficient monitoring.

For anyone who has tried this: what are your server specs, and how many endpoints are you monitoring?

Thanks in advance.

OwenMelbz commented 4 years ago

Hi,

It depends which monitors you want to run.

We use DigitalOcean, so I can only provide suggestions based on that.

If you're just using uptime checks, then you can probably run it on a $5 droplet, which has 1 GB RAM and 1 CPU.

If you're going to be using the visual diffs, this spins up headless Chrome which means you'll need a little more juice.

If you're going to be using the Crawler then this will need even more to speed through things.

Whether you're crawling just 1 website or 100 websites will again make a difference.

There's no single clear answer, as it comes down to your requirements.
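To make the trade-offs above concrete, here is a back-of-envelope sizing sketch. Every number in it is an assumption for illustration (base footprint, per-Chrome-instance memory, headroom factor), not a measured figure from this project:

```python
def estimate_memory_mb(base_mb=512, chrome_instances=0,
                       chrome_mb_each=400, headroom=0.25):
    """Rough RAM estimate in MB for a monitoring server.

    Assumed figures (not measured):
      - base app + queue workers: ~512 MB
      - each concurrent headless Chrome instance: ~400 MB
      - 25% safety headroom on top
    """
    raw = base_mb + chrome_instances * chrome_mb_each
    return int(raw * (1 + headroom))

# Uptime checks only: fits comfortably in a 1 GB droplet.
print(estimate_memory_mb())                    # 640
# Visual diffs with 2 concurrent Chrome instances: nearer 2 GB.
print(estimate_memory_mb(chrome_instances=2))  # 1640
```

Plugging in your own measured per-check numbers once the app is running will give you a far better estimate than any defaults.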

You could also look into elastic servers, which automatically scale for you when they reach their resource limits.

I recommend simply starting with a low-spec server, then increasing resources as and when you need them by monitoring the speed of job processing and resource usage on your server.
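One crude way to watch resource usage, as suggested above, is a small stdlib-only script run from cron on the worker host. This is a sketch, not part of the project; it assumes a Unix system for `os.getloadavg()` and Linux for `/proc/meminfo`:

```python
import os

def load_average():
    """Return the 1-, 5-, and 15-minute load averages (Unix only)."""
    return os.getloadavg()

def available_memory_mb():
    """Available RAM in MB, read from /proc/meminfo (Linux only)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                # Value is reported in kB; convert to MB.
                return int(line.split()[1]) // 1024
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

if __name__ == "__main__":
    one, five, fifteen = load_average()
    print(f"load averages: {one:.2f} {five:.2f} {fifteen:.2f}")
    print(f"available memory: {available_memory_mb()} MB")
```

If the 15-minute load average stays above your CPU count, or available memory keeps shrinking while the job queue backs up, that is your signal to resize the server.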