Capgemini / Apollo

:rocket: An open-source platform for cloud native applications based on Apache Mesos and Docker.
http://capgemini.github.io/devops/apollo/
MIT License
723 stars 105 forks

Rackspace #656

Closed: bkarypid closed this 8 years ago

bkarypid commented 8 years ago

This is an OpenStack implementation for Apollo, geared towards and tested on the Rackspace public cloud.

wallies commented 8 years ago

Have you looked at the other openstack branch we had started? Also, can we rename this to rackspace instead of openstack, as there are some small differences between Rackspace and OpenStack?

bkarypid commented 8 years ago

I did have a look at the other branch, but it was quite outdated and I thought it would be easier to just start from scratch. Plus, as you said, there are some differences in the Rackspace implementation of OpenStack, especially in networking, which Terraform can't handle for Rackspace at the moment; details on these limitations are in the getting started doc. I'll change the directories/names from openstack to rackspace as well and push back (I had them like this initially).

bkarypid commented 8 years ago

Done and retested in Rackspace. I can give you a demo tomorrow before you consider merging, if you'd like.

wallies commented 8 years ago

Yep, a demo would be good. Two questions: for the default user, why are we using root? Also, can you look at how we mounted storage for /var/lib/docker on each slave in AWS?

bkarypid commented 8 years ago

See my comment above about the root user for Rackspace. I already had a look at the additional cloud-init config for the mesos slaves in AWS, but I'm not sure whether the same thing can be done for Rackspace, at least the EBS formatting bit, so the mounting process might need to be a bit different. We can discuss this more tomorrow.

wallies commented 8 years ago

So it looks like the Terraform OpenStack provider has support for volumes on instances, like this: https://github.com/CiscoCloud/mantl/blob/master/terraform/openstack/instance/main.tf. You can then reference the volume device in the cloud config.
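
For reference, a minimal sketch of that approach with the Terraform OpenStack provider of that era; the names, size, and /dev/xvdb device path below are illustrative assumptions, not the exact code from this PR:

```hcl
# Create a block storage volume and attach it to the instance; the device
# path can then be referenced from the instance's cloud-config to format
# and mount it (e.g. for /var/lib/docker).
resource "openstack_blockstorage_volume_v1" "docker" {
  name = "apollo-docker-volume" # illustrative name
  size = 80                     # size in GB, illustrative
}

resource "openstack_compute_instance_v2" "mesos_slave" {
  name      = "apollo-mesos-slave-0"
  image_id  = "${var.coreos_image_id}" # assumed variables
  flavor_id = "${var.flavor_id}"
  key_pair  = "${var.key_pair}"

  # The provider attached volumes with an inline 'volume' block at the time;
  # newer provider versions use a separate volume attach resource instead.
  volume {
    volume_id = "${openstack_blockstorage_volume_v1.docker.id}"
    device    = "/dev/xvdb" # assumed device path
  }
}
```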

sheerun commented 8 years ago

Shouldn't it be named "openstack" instead of "rackspace"? Isn't the identity URL the only variable in the Terraform script?

wallies commented 8 years ago

It could be, but Rackspace has some small differences: you can't create networks, it uses a different identity URL, and it handles authentication differently to other OpenStack providers.
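
To make that concrete, a hedged sketch of the provider configuration; the Rackspace identity endpoint is the public one, while the variable names are assumptions:

```hcl
# Rackspace exposes Keystone at its own identity endpoint; a generic
# OpenStack cloud would point auth_url at its own Keystone instead,
# and would not be limited to pre-existing networks.
provider "openstack" {
  auth_url    = "https://identity.api.rackspacecloud.com/v2.0/"
  tenant_name = "${var.tenant_name}" # assumed variable names
  user_name   = "${var.user_name}"
  password    = "${var.password}"
  region      = "${var.region}"      # e.g. "IAD" or "LON" on Rackspace
}
```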

bkarypid commented 8 years ago

Sorted out the ssh user issue via the CoreOS cloud config for both mesos masters and slaves: the only user now allowed to ssh into the CoreOS instances in Rackspace is core, so that's what Ansible uses as well (and the path to python/pypy is the same as in the other providers). Also, thankfully, the Terraform OpenStack provider's openstack_blockstorage_volume_v1 resource works fine with Rackspace, so storage for /var/lib/docker is now also mounted on the mesos slave instances.
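
A minimal sketch of what that might look like, expanding the slave instance from the earlier sketch; the device path, unit names, and public key variable are assumptions rather than the actual cloud-config in this PR:

```hcl
resource "openstack_compute_instance_v2" "mesos_slave" {
  name      = "apollo-mesos-slave-0"
  image_id  = "${var.coreos_image_id}"
  flavor_id = "${var.flavor_id}"
  key_pair  = "${var.key_pair}"

  volume {
    volume_id = "${openstack_blockstorage_volume_v1.docker.id}"
    device    = "/dev/xvdb"
  }

  # CoreOS cloud-config: the ssh key goes to the 'core' user (the user Ansible
  # connects as), and the attached volume is formatted only if empty, then
  # mounted at /var/lib/docker before docker starts.
  user_data = <<CLOUD_CONFIG
#cloud-config
ssh_authorized_keys:
  - ${var.public_key}
coreos:
  units:
    - name: format-docker-volume.service
      command: start
      content: |
        [Unit]
        Description=Format the Docker volume if it has no filesystem yet
        Before=var-lib-docker.mount
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/bash -c 'blkid /dev/xvdb || mkfs.ext4 -F /dev/xvdb'
    - name: var-lib-docker.mount
      command: start
      content: |
        [Unit]
        Requires=format-docker-volume.service
        After=format-docker-volume.service
        Before=docker.service
        [Mount]
        What=/dev/xvdb
        Where=/var/lib/docker
        Type=ext4
CLOUD_CONFIG
}
```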

enxebre commented 8 years ago

Just dropped a few really minor comments; looks cool to me. Could we just clean up and squash the git history a little bit?

tayzlor commented 8 years ago

Dropped a few minor comments; the code looks mostly good. Can you fix those and squash?

One last question (or a few) from me: what's the difference here between 'rackspace' and a 'pure openstack' approach? How easy would it be to create a pure openstack approach from this? Could it be made modular? Can we address that in a follow-up issue?

bkarypid commented 8 years ago

Closing this to open another one against master. I believe the modules could be reused for a pure OpenStack implementation, in a similar fashion to the AWS public/private setup. The main difference is that an OpenStack implementation can use networking resources, which isn't possible with Rackspace.

The modules should be reusable by both providers, which would have different main.tf files: for example, the Rackspace provider would pass the (already existing) network IDs to the compute resource modules from variables, whereas the OpenStack provider would pass them by reading the outputs of, e.g., a networking module.

A slight difference (according to the docs) is that when specifying a network for a compute instance resource, either the uuid (network ID) or the network name can be passed (one would suffice), but Rackspace needs both. Both of these attributes are exported by the networking/subnet resources though, so we could still use the same setup for both, at least in theory. It might be a bit redundant, but it would allow for reusability.
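
As a rough sketch of that layout (module paths, variable names, and output names here are hypothetical), the two providers could share the same compute module and differ only in where the network attributes come from:

```hcl
# rackspace/main.tf: the networks already exist, so both attributes
# come straight from variables.
module "mesos_slaves" {
  source       = "../modules/mesos-slave"      # hypothetical module path
  network_id   = "${var.private_network_id}"
  network_name = "${var.private_network_name}"
}

# openstack/main.tf: a networking module creates the network and
# exports the same two attributes as outputs.
module "network" {
  source = "../modules/network"                # hypothetical module path
}

module "mesos_slaves" {
  source       = "../modules/mesos-slave"
  network_id   = "${module.network.network_id}"
  network_name = "${module.network.network_name}"
}

# Inside the shared mesos-slave module, the instance consumes both
# attributes, which satisfies Rackspace and is merely redundant on a
# pure OpenStack cloud:
#
#   network {
#     uuid = "${var.network_id}"
#     name = "${var.network_name}"
#   }
```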