mozilla / participation-metrics-org

Participation metrics planning repository

Q4 2016 Computing Resource Increase #13

Closed: hmitsch closed this issue 7 years ago

hmitsch commented 8 years ago

@sanacl will provide some load statistics, following that @hmitsch will discuss with Yousef

canasdiaz commented 8 years ago

Hi guys, we need more CPUs in the VM we use to run Kibana and the GrimoireLab Python tools. Right now we have only two CPUs and we are not able to run all the parallel jobs we'd like to. Besides that, the platform is expected to see more use during the next months, so being able to run more parallel jobs for Kibana and our collection+enrichment process is a must. Regarding the connection between the UI and the database (ElasticSearch), I find it a bit slow; the average response time of ElasticSearch is higher than what we need to offer a good user experience.
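For reference, a minimal sketch of how that average response time could be measured before and after the resource increase. The endpoint, index name, and query are placeholders (not confirmed anywhere in this thread), so treat this as an illustration rather than the actual setup:

```python
# Hypothetical sketch: time a simple ElasticSearch query repeatedly and
# report the mean wall-clock latency. ES_URL and the index name "git"
# are assumptions, not values taken from this deployment.
import time
import requests

ES_URL = "http://localhost:9200/git/_search"  # placeholder endpoint
QUERY = {"query": {"match_all": {}}, "size": 0}

def average_latency(runs=20):
    """Run the same query `runs` times and return the mean response time in seconds."""
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        resp = requests.get(ES_URL, json=QUERY, timeout=30)
        resp.raise_for_status()
        timings.append(time.monotonic() - start)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    print(f"average response time: {average_latency():.3f}s")
```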

To sum up, what we request is:

Some data:

hmitsch commented 8 years ago

@flamingspaz I am adding this to our backlog, high enough to (hopefully) make it into the next sprint: https://tree.taiga.io/project/pierros-mozilla-particiaption-systems/us/41?kanban-status=904519

canasdiaz commented 8 years ago

Hi @flamingspaz and @hmitsch, I'm reviewing the VM in order to identify why it is going down. I found OOM errors like this one:

Oct 12 08:40:22 ip-10-0-27-67 docker[336]: time="2016-10-12T08:40:22Z" level=error msg="containerd: notify OOM events" error="open memory.oom_control: no such file or directory"

It makes sense to me, as we are handling more data than we were the week before. So it is important for us to have more resources in the VM.
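To confirm the memory pressure, something like the sketch below could be used to count OOM-related lines in the system log. The log path and the message patterns are assumptions (the host may use journald instead of a syslog file), so this is only an illustration of the check, not the exact procedure used here:

```python
# Hypothetical sketch: count log lines that mention OOM events to verify
# that the VM is running out of memory. LOG_PATH and the patterns are
# assumptions about the host's logging setup.
import re

LOG_PATH = "/var/log/syslog"  # placeholder; adjust for the actual VM
OOM_PATTERN = re.compile(r"(out of memory|oom-killer|oom_control)", re.IGNORECASE)

def count_oom_events(path=LOG_PATH):
    """Return the number of log lines matching an OOM-related pattern."""
    hits = 0
    with open(path, errors="replace") as log:
        for line in log:
            if OOM_PATTERN.search(line):
                hits += 1
    return hits

if __name__ == "__main__":
    print(f"OOM-related log lines: {count_oom_events()}")
```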

The ES latency is a secondary priority right now.

hmitsch commented 8 years ago

Blocked by lack of ParSys time to work on the issue. Not scoped for Sprint 8.

hmitsch commented 7 years ago

Abandoned and/or done.