microscaling / microscaling

Microscaling Engine
http://app.microscaling.com

Can we auto-scale our RESTful Spring Boot Docker containers with this? #29

Open alwaysastudent opened 8 years ago

alwaysastudent commented 8 years ago

Hi, by auto-scaling I mean scaling the containers alone, not provisioning the host VM infrastructure. Say we run several different Spring Boot microservices in Docker containers. Can this tool auto-scale and schedule our Docker containers based on CPU usage or API latency?

rossf7 commented 8 years ago

Hi, thanks for getting in touch! Yes, we're working on container scaling, which we call microscaling. This is to differentiate it from auto-scaling, which usually means scaling VMs.

We support scaling Docker containers using the Docker API and Mesos Marathon. Using Spring Boot in containers will work well and our solution is designed for microservices architectures.

At the moment we support scaling message queues. Support for scaling APIs is coming soon. In the meantime, here is an example we developed using NGINX and Consul:

https://github.com/force12io/force12-lb-example

For the API scaling what would you like to use as a load balancer? NGINX, HAProxy or something else?

xiaods commented 8 years ago

@ross-makisoft could you please explain how to use HAProxy as the load balancer with microscaling?

rossf7 commented 8 years ago

Hi @xiaods, support for load balancers is coming soon. It's good to know that you're interested in using HAProxy. Please let us know if you have any more questions about microscaling.

xiaods commented 8 years ago

In general terms, what condition does microscaling use to trigger scaling? I haven't found any docs that explain it.

rossf7 commented 8 years ago

Microscaling is based on a metric and a target. For our message queue integration we monitor the queue length and scale the number of containers to keep it at the target. Any spare capacity on the cluster can be used to run a background task.

For the load balancer scaling we'll also use a metric and a target. The exact metric is yet to be decided.
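
As a rough sketch of the idea only (this is illustrative Go, not the engine's actual code; the metric source, target and limits here are made up):

```go
package main

import (
	"fmt"
	"time"
)

const (
	targetQueueLength = 50 // desired queue length to hold the queue at (illustrative)
	minContainers     = 1
	maxContainers     = 10
)

// getQueueLength is a stand-in for reading the real metric, e.g. an NSQ queue depth.
func getQueueLength() int {
	return 75 // pretend the queue currently holds 75 messages
}

// scaleStep nudges the container count towards whatever keeps the queue at the target.
func scaleStep(current, queueLength int) int {
	switch {
	case queueLength > targetQueueLength && current < maxContainers:
		return current + 1 // falling behind, add a container
	case queueLength < targetQueueLength && current > minContainers:
		return current - 1 // ahead of target, free capacity for background tasks
	default:
		return current
	}
}

func main() {
	containers := minContainers
	for i := 0; i < 3; i++ {
		containers = scaleStep(containers, getQueueLength())
		fmt.Printf("running %d containers\n", containers)
		time.Sleep(100 * time.Millisecond)
	}
}
```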

This blog post has more detail on the message queue integration. Hopefully it will be helpful.

http://blog.microscaling.com/2016/04/microscaling-with-nsq-queue.html

lizrice commented 8 years ago

Hi @xiaods, there's some more background on the load balancer demo we've done using Nginx in this blog post.

In this demo we simply randomized the number of containers behind the load balancer, but a real-world implementation would work to maintain a target. That target could be, for example, the number of requests queued at the LB, or the average web response time.
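
Purely as an illustration of what maintaining such a target might look like (the metric, thresholds and numbers here are invented, not what we'll ship):

```go
package main

import "fmt"

// desiredContainers is an invented example of a target-based scaling rule
// using average response time as the metric; the real metric is still undecided.
func desiredContainers(current, max int, avgLatencyMs, targetLatencyMs float64) int {
	if avgLatencyMs > targetLatencyMs && current < max {
		return current + 1 // responses too slow, add a container behind the LB
	}
	if avgLatencyMs < targetLatencyMs/2 && current > 1 {
		return current - 1 // plenty of headroom, hand capacity back
	}
	return current
}

func main() {
	fmt.Println(desiredContainers(3, 10, 250, 100)) // 4: latency above target
	fmt.Println(desiredContainers(3, 10, 30, 100))  // 2: well under target
}
```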

xiaods commented 8 years ago

@lizrice if I understand correctly, you want to auto-scale the containers based on a target?

ludovicc commented 8 years ago

Hi. Well, it's very likely that microscaling won't be able to support every message queue technology out there. I would like to use it with Akka, for example. As I can access the queue length metric myself, it would be great if there were a simple HTTP API in microscaling that allows me to post any change of queue length. Do you plan to add such an API? It would fit my use case well, as I have only very few messages but they take a long time to process.
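
For illustration, something like the following is what I have in mind; the endpoint, port and payload are made up, since no such API exists yet:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// reportQueueLength posts a queue length reading to a hypothetical microscaling
// endpoint. The URL and JSON shape are illustrative only; no such API exists yet.
func reportQueueLength(queue string, length int) error {
	body := []byte(fmt.Sprintf(`{"queue": %q, "length": %d}`, queue, length))
	resp, err := http.Post("http://localhost:8000/metrics/queue-length",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

func main() {
	// Report the depth of an Akka work queue whenever it changes.
	if err := reportQueueLength("akka-work", 12); err != nil {
		fmt.Println("report failed:", err)
	}
}
```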

lizrice commented 8 years ago

@ludovicc I think that's a great idea. I'm not sure when we could get to it, but you are welcome to make a PR if you like?

entegratellc commented 7 years ago

It would be really great if this tool could monitor RabbitMQ queue depth and scale containers based on it.

joaoleite commented 7 years ago

Any news on using RabbitMQ?