pierreozoux closed this issue 10 years ago.
Then how would the MySQL data come back into the user's data volume? I think for now the simplest setup is to have the WordPress database inside the user's data volume, and its files (whether an SQLite file or a MySQL data folder) inside the folder named 'wordpress'. That way the user can take it anywhere.
We would use http://docs.docker.com/userguide/dockervolumes/, and so the data would be under /data/john/mysql
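Roughly like this, as a sketch (the image name and password are just placeholders, not a decision):

```
# john's MySQL data lives on the host under /data/john/mysql
# and is mounted into the container as a volume
docker run -d --name john-mysql \
  -v /data/john/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql
```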
So would all MySQL servers mount this same folder, or just the one that is currently in use? How does the folder get synchronized to the failover server? Wouldn't it be a lot easier to have the MySQL server inside the WordPress container that is running john's website, instead of having it on another server?
Also, you will still need DNS failover in case the webserver goes down. Separating the database server from the webserver just introduces more possible points of failure.
I think having multiple database servers only makes sense if you also have multiple webservers, with a load balancer in front. And then I would use a standard setup with 1 master and n slaves, where each slave is ready to be promoted to master whenever the master goes down.
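For completeness, a sketch of what a manual promotion looks like (plain MySQL replication, nothing specific to our setup):

```
# on the slave chosen for promotion:
mysql -e "STOP SLAVE; RESET SLAVE ALL;"   # stop replicating from the dead master
mysql -e "SET GLOBAL read_only = OFF;"    # allow writes on the promoted node
# then repoint the webservers (or the load balancer) at this host
```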
Most people who use these orchestration tools have (at least) one website that is too big for a single server; that is why they use multiple servers for subtasks. We, on the other hand, have a lot of websites, each of which is too small to fill up a whole server. We use containers to let people run whatever software they want inside a small slice of one server. Moving a part of such a container to a separate (MySQL) backend server doesn't make sense to me.
We should orchestrate which user is on which server. When a server is overloaded we can move a few hundred users to a different one. We can also have a hot failover using either DNS or a load balancer. But I think the backend of each container image should stay inside that container image? At least that's how I have been building the first images so far.
Of course there would be various webservers: at least 2 that consume MySQL services (among other services), with a load balancer in front (namely our backend).
I agree that it is more complicated, but as we will deal with thousands of services, this should be an SOA. In this sense, we are exactly like Airbnb, facing the same problems.
My question is simple then: how do you do automatic failover with a WordPress Docker image? You have n slaves, fine, but how do the slaves get synchronized? How do you promote a slave automatically?
The way I'm doing it now is that whenever I want to do maintenance, I manually switch DNS between my Luxembourg server and my Switzerland server, with a 5-minute TTL. I don't use MySQL replication yet (just rsync).
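So the current "replication" is just a sketch like this (the hostname is made up):

```
# sync user data to the standby server, then flip DNS by hand (5 min TTL)
rsync -az --delete /data/ standby.example.org:/data/
```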
I can see there may be a point in the future where rsync becomes too unreliable and DNS switching is not fast enough, but it's not the biggest problem I see at this point. I think we can worry about this next year? For now I'm more worried about getting more services added. Let's discuss this topic tomorrow, though!
In fact, I'm using container linking now only at the 'bouncer' level, to link the SNI offloader (the container that listens on port 443, where the HTTPS request comes in) to the various user containers based on hostname.
For now, I use only one server per IP address, and one running container per user per application. So I don't have this problem yet. I think the logical next step is to leave containers linked but stopped (to save RAM), and then start an already linked container on demand.
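Concretely, the linking looks something like this (image names are placeholders, not our real ones):

```
# one container per user per application
docker run -d --name john-wordpress indiehosters/wordpress
# the SNI offloader listens on 443 and is linked to each user container
docker run -d -p 443:443 \
  --link john-wordpress:john-wordpress \
  indiehosters/sni-offloader
```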
CoreOS makes the assumption that you want to run services on dynamically assigned hosts, but this is not the case for us, I think. The ambassador pattern starts to be necessary when one IndieHoster has so many services that they don't fit on, say, 12 bare-metal servers anymore.
In fact this discussion is too broad; I'll close it.
But after some reflection, I agree on some points. We should have server-wide services, like databases (MySQL...), HAProxy, email, and Jabber.
After reading all of:

- http://www.slideshare.net/bobtfish/docker-confjune2014
- http://nerds.airbnb.com/smartstack-service-discovery-cloud/
- http://clockworkcubed.com/2014/05/consul-and-synapse-service-discovery-and-elastic-load-balancing/
- http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/
- http://jasonwilder.com/blog/2014/07/15/docker-service-discovery/
- http://www.consul.io/intro/vs/smartstack.html
- http://igor.moomers.org/smartstack-vs-consul/

I feel this is the path: https://coreos.com/blog/docker-dynamic-ambassador-powered-by-etcd/
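If I understand that last post correctly, the static version of the pattern is something like this sketch (using the ambassador image from the Docker docs; the IP is made up, and the etcd-powered version rewrites that target dynamically):

```
# a local "mysql" ambassador that forwards to wherever MySQL really runs
docker run -d --name mysql-ambassador --expose 3306 \
  -e MYSQL_PORT_3306_TCP=tcp://10.0.0.5:3306 \
  svendowideit/ambassador
# wordpress links to the ambassador as if it were the database itself
docker run -d --link mysql-ambassador:mysql indiehosters/wordpress
```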
Ooh, I'm getting excited :)
So the idea would be to have a manifest file for each app we support.
I will write a BDD scenario:

Given a user (john) wants to access his WordPress for the first time
And the user already has an account with IndieHosters
When he goes to his app store page
And he clicks on WordPress
Then he is redirected to john.indiegue.st/wordpress
And our user sees a waiting page
Then our backend catches this HTTP request
And our backend understands that there is no WordPress for this user
And our backend reads the manifest file for WordPress
And our backend satisfies the MySQL dependency (Given a user (john) wants to access his MySQL for the first time...)
And our backend satisfies all dependencies
And our backend sends the HTTP request to the service ambassador
And the service ambassador responds
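In Docker terms, "our backend satisfies the MySQL dependency" could be as simple as this sketch (image names and the password are placeholders):

```
# start the dependency first, then link the app to it
docker run -d --name john-mysql -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name john-wordpress \
  --link john-mysql:mysql \
  indiehosters/wordpress
```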
The idea is that I don't want poor failover made by hand. The technology is mature for kick-ass failover, and I want to have a rocking service. When one of the VMs is down, I don't want the service down for the user :) So yes, one MySQL per user, but a replicated master-master one! And every service consuming MySQL should be able to keep doing so, even if one MySQL instance is down :)
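A minimal sketch of what master-master means in practice (hostnames and credentials are placeholders; each server also needs a unique server-id, log-bin, and staggered auto-increment offsets in my.cnf):

```
# on server A: replicate from B
mysql -e "CHANGE MASTER TO MASTER_HOST='server-b', MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"
# on server B: replicate from A
mysql -e "CHANGE MASTER TO MASTER_HOST='server-a', MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"
```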
I'm still hoping that we don't have to write this manifest file, and could handle it at the Fleet or Docker level.
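For example, at the fleet level the WordPress-on-MySQL dependency could be expressed with ordinary unit dependencies instead of a manifest (unit and image names are made up):

```
cat > john-wordpress.service <<'EOF'
[Unit]
Description=WordPress for john
Requires=john-mysql.service
After=john-mysql.service

[Service]
ExecStart=/usr/bin/docker run --rm --name john-wordpress --link john-mysql:mysql indiehosters/wordpress
ExecStop=/usr/bin/docker stop john-wordpress
EOF
fleetctl start john-wordpress.service
```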
And about the services that are shared among users (mail, Jabber...), I strongly believe we should use the same schema as for users. We should dogfood it ;) It's not a special case; it's just that the user is Michiel instead of John ;)
And I don't think we will run backups of each other's services (cross-hoster). I will personally have 3 VMs, and they'll back each other up. It's either that, or we share a common cluster (also 3 VMs, but we can grow it to more).