Open · santo74 opened this issue 10 years ago
Hey Santo
This is exactly what nscale is for: servers are also treated as "containers", so you essentially describe your deployment as boxes within boxes. Currently nscale supports Amazon machine instances, VMs (for local dev), and Docker containers. If you're looking at setting it up with bare-metal servers, or some VPS other than AMIs, then I believe that's where nscale is going (or may already be).
@pelger, @mcollina or @rjrodger can shed more light on this than I can.
@pelger is putting up a guide for AWS, and at https://github.com/nearform/nscale/tree/docs/docs you can find the new docs we are working on.
Thanks for the feedback! I'm glad to hear that nscale is a perfect match for what I described. However, I still can't find the information I was looking for, i.e. how to use nscale in a multi-server environment. Are there any examples of this, or will it be described in the new docs you referred to?
I'm also interested in how Seneca fits into this, e.g. how to use the loadbalance-transport plugin to distribute the microservices over multiple containers, potentially even running on different physical servers (or VMs). In such an environment you don't necessarily know the hostnames/IP addresses beforehand, and you might also want to be able to spin up extra containers/servers on the fly. Or should I ask this on the Seneca project page?
I'm sure Seneca/nscale are made for this, but as I said, I'm currently missing more in-depth information on this topic ;-)
regards,
Santo
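The round-robin dispatch the question above asks about can be illustrated in isolation. This is a minimal, generic sketch of the balancing idea only; it is not the actual seneca-loadbalance-transport API, and the endpoint addresses are placeholders.

```javascript
// Generic round-robin dispatcher: cycles through a fixed list of
// service endpoints. A load-balancing transport would pick the next
// endpoint like this before sending each message.
function roundRobin(endpoints) {
  var i = 0;
  return function next() {
    var endpoint = endpoints[i];
    i = (i + 1) % endpoints.length;
    return endpoint;
  };
}

// Placeholder endpoints; in practice these would come from
// configuration or a discovery mechanism.
var next = roundRobin([
  { host: '10.0.0.1', port: 10101 },
  { host: '10.0.0.2', port: 10101 }
]);

console.log(next().host); // 10.0.0.1
console.log(next().host); // 10.0.0.2
console.log(next().host); // 10.0.0.1 (wraps around)
```

The hard part, as the question notes, is that the endpoint list is static here; filling it dynamically is exactly the discovery problem discussed below in the thread.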
> I'm glad to hear that nscale is a perfect match for what I described. However, I still can't find the information I was looking for, i.e. how to use nscale in a multi-server environment. Are there any examples on this or will this be described in the new docs you referred to?
We are currently debugging some issues on AWS support, and the docs will be released this week.
> Also I'm interested in more information related to how seneca fits into this, e.g. how to use the loadbalance-transport plugin to distribute the microservices over multiple containers - potentially even running on different physical servers (or VM's) - because in such an environment you don't necessarily know the hostnames/ip addresses beforehand and you might also want to be able to spin up extra containers/servers on the fly. Or should I ask this on the Seneca project page?
This is much more complex: you would want some auto-discovery software between Seneca services. This is something we are interested in providing, but have not written yet. We are assembling a working solution using Consul for one of our customers, but it is not idiomatic Node and Seneca.
Anyway, let's get in touch; maybe we can help.
Ok, that was a misconception on my part then; I thought there was some form of auto-discovery as well. Consul might indeed be a solution, but as you say, it's not Node/Seneca-specific.
Today I accidentally stumbled upon Amino, which seems like a good fit as well, but unfortunately there hasn't been any activity during the last year. Seaport also seems interesting, although a bit more limited in comparison to Amino (decentralization, failover, etc.).
BTW, I'm new to Node.js so I'm still trying to find my way through it.
Santo
I'm curious about running multiple redundant instances of services, and balancing them across different nodes in a cluster. So far it seems like it's either one instance per container, or having to duplicate everything yourself.
I'm looking at this from the perspective of longshoreman.io, which has that as one of the base features.
Any plans for this?
Hey Adrian, yes, this is supported and has been for some time; however, we were a little slow to document it, I'm afraid. Take a look at https://github.com/nearform/nscale-workshop/blob/master/ex8.md, which explains running nscale within AWS. Hope that helps, and please ping me back with any comments. Thanks!
hey @pelger
That's not quite what I mean. My question is basically about redundant deploys of applications, in a way that can easily scale out on demand.
With longshoreman we had it set up so that we had two or more lsm hosts running, and a deploy would actually end up running 2+ instances of the service, spread across the hosts. The number of instances per app could be altered at runtime, either increased or decreased.
You would also be able to add and remove servers in the cluster, so that new instances are spread across the added capacity.
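The spread-placement behaviour described above can be sketched as a pure function: given a desired instance count and the current hosts, assign instances round-robin so the load is as even as possible. Host names are placeholders; this illustrates the idea, not longshoreman's or nscale's actual scheduler.

```javascript
// Compute how many instances of a service each host should run,
// distributing them round-robin across the available hosts.
function spread(instanceCount, hosts) {
  var placement = {};
  hosts.forEach(function (h) { placement[h] = 0; });
  for (var i = 0; i < instanceCount; i++) {
    placement[hosts[i % hosts.length]] += 1;
  }
  return placement;
}

console.log(spread(5, ['host-a', 'host-b']));
// { 'host-a': 3, 'host-b': 2 }

// Scaling out: adding a host and recomputing gives the new target
// layout; a deployer would then reconcile running containers to it.
console.log(spread(5, ['host-a', 'host-b', 'host-c']));
// { 'host-a': 2, 'host-b': 2, 'host-c': 1 }
```

Changing the instance count at runtime, as described for longshoreman, amounts to recomputing this target layout and starting or stopping containers until reality matches it.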
hey @AdrianRossouw, I understand your question: let me try to explain.
Here are the underlying assumptions of nscale:
Hi,
I was looking for a microservices tool for Node.js and found Seneca, which seems like a really nice fit for this. However, I want to run those microservices in multiple Docker containers spread over multiple servers. Initially I was thinking about CoreOS with etcd, but that would still need a considerable amount of manual configuration, and therefore I started looking for a container management / node discovery tool which can automate most of this work for me. This led me to nscale, and while it seems like a wonderful tool, I can't find any information on how to use it for a multi-server setup, despite the nice wiki and tutorials.
On the Seneca level, the seneca-loadbalance-transport plugin might be a nice solution for point-to-point communication in a round-robin fashion between services on multiple servers, but I have no clue how it could be used in combination with nscale to distribute the services in Docker containers over multiple servers.
Is it possible to give me some more information on how that would be done, or isn't nscale ready (yet) for such environments?
Thanks,
Santo