joshuacox opened this issue 9 years ago
That particular config value was for when we were using octohost/tentacles - but that's been abandoned. It's not used anymore.
I've been toying with some clustering as well and although I've gotten it to work a few times, it's very fragile.
I want to take a look at using Docker Swarm and Docker Compose to help with some of what you're describing - making multiple machines work together better - but I just haven't had time.
I'd really like it if it used some of the standard tools and plugins, so that it was more compatible than it is currently.
I'm going to be building an OpenStack setup as a home lab experiment. I'm definitely going to experiment with a couple of these clustering projects: http://deis.io/ https://flynn.io/ http://controlcenter.io/
But I'm in total agreement about clustering setups being fragile. It takes a ton of work to get most things into a state where you can cluster them, and your work is not over once you get there. Sure, once all apps are built to http://12factor.net/ specifications, maybe this will be easier. But I digress.
So one method would be this config-file approach: given the config file, octo can branch off and do things other than the normal docker build and run. The config could specify multiple other octohosts that the repo would be shipped off to, and built and run in multiple datacenters - or perhaps one of the aforementioned projects as another clustering provider. At first this would be best suited for app replication; SQL and data storage would be left to other methods. However, I can envision automated ways of accomplishing that as well.
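To make the idea concrete, here's a minimal sketch of what the "ship to multiple octohosts" branch could do - every hostname, variable, and the image tag below are invented for illustration, and the loop only assembles the plan rather than actually pushing anything:

```shell
# Hypothetical hook fragment: target hosts and the image tag are made up.
# It only builds the list of commands it *would* run; nothing is pushed.
TARGETS="octo-nyc.example.com octo-sfo.example.com"
IMAGE="swarmtest:latest"
PLAN=""
for host in $TARGETS; do
    # Tag the built image for the remote octohost's registry, then push it there.
    PLAN="$PLAN docker tag $IMAGE $host:5000/$IMAGE && docker push $host:5000/$IMAGE;"
done
echo "$PLAN"
```

The real hook would execute each step instead of printing it, but the shape is the same: one tag-and-push per configured target.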
Another, kind of silly thought: how about we go about this inversely? Given that you can run 'privileged' docker containers, has anyone run octohost inside a docker container yet? And if so, has anyone put octohost inside deis/flynn/etc.?
I've been working on this in my spare time over the last few days and it's working pretty well:
Each node runs a registry container backed by S3 (substitute your own bucket and keys):

```
docker run -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=octoregistry -e AWS_KEY=KEY_GOES_HERE -e AWS_SECRET=SECRET_KEY_GOES_HERE -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 -d registry
```

Then in /etc/default/octohost:

```
echo 'DOCKER_HOST="tcp://0.0.0.0:2375"' >> /etc/default/octohost
echo 'PRIVATE_REGISTRY="localhost:5000"' >> /etc/default/octohost
```

And in the shell:

```
export DOCKER_HOST="tcp://0.0.0.0:2375"
```

Upload your public key so you can push:

```
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/id_rsa root@ip.address.here "sudo gitreceive upload-key your-name"
```
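With a key uploaded, deploying is the normal octohost flow - add the first node as a git remote and push. A quick sketch; the address and repo name are just the placeholders from the example log, not a live host:

```shell
# Throwaway repo purely to show the remote wiring; the address is a
# placeholder taken from the example output, not a real endpoint.
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
git remote add octo git@104.236.126.138:swarmtest.git
git remote -v    # lists the octo remote for fetch and push
```

From there, `git push octo master` kicks off the build on the node.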
Now you have 3 octohost nodes and they're connected via Consul and Docker Swarm:
```
root@octo1:~# consul members
Node   Address             Status  Type    Build  Protocol
octo1  10.132.37.251:8301  alive   server  0.5.0  2
octo2  10.132.38.99:8301   alive   server  0.5.0  2
octo3  10.132.38.96:8301   alive   server  0.5.0  2
```
```
root@octo1:~# docker ps
CONTAINER ID   IMAGE             COMMAND             CREATED              STATUS              PORTS                          NAMES
0b8ea17176e2   registry:latest   "docker-registry"   42 seconds ago       Up 15 seconds       10.132.38.99:5000->5000/tcp    octo2/cranky_sammet
0203f3706756   registry:latest   "docker-registry"   42 seconds ago       Up 20 seconds       10.132.38.96:5000->5000/tcp    octo3/grave_lalande
c08051c5dabd   registry:latest   "docker-registry"   About a minute ago   Up About a minute   10.132.37.251:5000->5000/tcp   octo1/silly_mayer
root@octo1:~# docker pull octohost/nginx:1.7
octo1: Pulling octohost/nginx:1.7... : downloaded
octo2: Pulling octohost/nginx:1.7... : downloaded
octo3: Pulling octohost/nginx:1.7... : downloaded
```
When you git push to the first node, here's what happens:
```
DarronFroeseDD@: git push octo master
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 372 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Base: swarmtest
remote: Put repo in src format somewhere.
remote: Building Docker image.
remote: Sending build context to Docker daemon 145.4 kB
remote: Sending build context to Docker daemon
remote: Step 0 : FROM octohost/nginx:1.7
remote: ---> ae035f72e906
remote: Step 1 : ADD . /srv/www/
remote: ---> 41b774f889f9
remote: Removing intermediate container 677fdc84432e
remote: Step 2 : EXPOSE 80
remote: ---> Running in 7a0950c71543
remote: ---> 983284caa54e
remote: Removing intermediate container 7a0950c71543
remote: Step 3 : CMD nginx
remote: ---> Running in 78e6fb411cc6
remote: ---> e02fccf446d7
remote: Removing intermediate container 78e6fb411cc6
remote: Successfully built e02fccf446d7
remote: Adding http://swarmtest.104.236.126.138.xip.io
remote: Adding http://swarmtest.swarm.octohost.io
remote: Adding http://swarm.octohost.io
remote: Pushing swarmtest to a private registry.
remote: octo1: Pulling localhost:5000/swarmtest:latest...
remote: octo2: Pulling localhost:5000/swarmtest:latest...
remote: octo3: Pulling localhost:5000/swarmtest:latest...
remote: octo2: Pulling localhost:5000/swarmtest:latest... : downloaded
remote: octo3: Pulling localhost:5000/swarmtest:latest... : downloaded
remote: octo1: Pulling localhost:5000/swarmtest:latest... : downloaded
remote: Your site is available at: http://swarmtest.104.236.126.138.xip.io
remote: Your site is available at: http://swarmtest.swarm.octohost.io
To git@104.236.126.138:swarmtest.git
   a0b7909..9496c89  master -> master
```
Small video of it in action: https://asciinema.org/a/19043
This is really good stuff! There needs to be a hollowed-out volcano somewhere where we can show this technology off to people who can appreciate what awesome things are coming together here. Not saying that you HAVE to be a supervillain to need and/or appreciate what's going on here. I'm going to play around with this some more. Cheers!
Ha yeah - it's still too complicated to pull off easily. I'll see if there's a better way to deploy a cluster than a bunch of shell scripts. Maybe some Terraform.
I'm thinking Terraform would be the logical next step; seeing as we've already got Consul in the mix, the other HashiCorp products are an obvious fit. I don't like that Atlas is not open source, though - there needs to be an alternative to it.
Yes - as soon as I am back (next week) - I'm going to start on it with Terraform.
This week I'm traveling and working on a presentation.
Congrats on the presentation! Where is it, and do you have it on GitHub somewhere? I'd love to see it.
I'm presenting at Devopsdays NYC on Friday - it's titled "Scuba and Ops":
http://www.devopsdays.org/events/2015-newyork/proposals/ScubaAndOps/
We'll see how it goes and where it ends up posted - it's a short one, only 5 minutes, and it's a bit of a different type of talk than I've done in the past.
Ok, after looking at the /etc/default/octohost file, I'm looking at this block:

Can anyone elaborate on their clustering setup?
I'm still toying around with things like deis and flynn, and it would be really cool to have the ability for certain images to be HA-clustered to kingdom come. However, the main reason I'm still using octo is that a lot of services don't cluster so easily. So I've been thinking about how to handle fig - and now docker-compose - inside of octo.

I think the answer to all of these questions might be some sort of octo-config file that could be included in a repo. This could provide a very robust alternative to the 'magic comments' hack, and give us a way of specifying multiple Dockerfiles in a single git push, or a fig.yml / docker-compose.yml, or even, say, having certain images pushed elsewhere to other octohosts in parallel, or even to other providers (e.g. deis, flynn, etc.). And if the file is not present, operate as normal, keeping backward compatibility.
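As a strawman, the file could look something like this - the `.octo.yml` name and every key in it are invented here; none of this exists in octohost today:

```shell
# Write a strawman config ('.octo.yml' is an invented name, written to /tmp
# here just for illustration). The receive hook would branch when it finds
# one, and fall through to the current behaviour when it doesn't.
cat > /tmp/example-octo.yml <<'EOF'
compose: docker-compose.yml        # build via compose, not a single Dockerfile
push_to:                           # also ship the repo elsewhere, in parallel
  - octohost: octo2.example.com
  - provider: flynn
EOF

# Backward compatibility: no file means the normal path, untouched.
if [ -f /tmp/example-octo.yml ]; then
    echo "octo config found: branching"
else
    echo "no octo config: normal docker build and run"
fi
```

The presence check is the whole compatibility story: repos without the file behave exactly as they do now.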