cityindex-attic / logsearch

[unmaintained] A development environment for ELK
Apache License 2.0

Add job to trigger day/night scaling #334

Closed - sopel closed this issue 10 years ago

sopel commented 10 years ago

This should implement a build task for the functionality successfully tested in #327; the task will in turn be run via a Jenkins job.
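
For context, a minimal sketch of what such a build task might run, assuming the scaling from #327 adjusts an Auto Scaling group's desired capacity - the group name and capacities here are hypothetical:

```bash
#!/bin/bash -e
# Hypothetical sketch: adjust the desired capacity of the (assumed)
# logsearch Auto Scaling group for day/night operation via the aws-cli.
ASG_NAME="logsearch-cluster"   # hypothetical group name

case "$1" in
  day)   aws autoscaling set-desired-capacity \
           --auto-scaling-group-name "$ASG_NAME" --desired-capacity 4 ;;
  night) aws autoscaling set-desired-capacity \
           --auto-scaling-group-name "$ASG_NAME" --desired-capacity 1 ;;
  *)     echo "usage: $0 day|night" >&2; exit 1 ;;
esac
```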

mrdavidlaing commented 10 years ago

I'll figure out how to support the aws-cli

@sopel - can you create a set of AWS credentials? Should that be a "global" set for all jobs on the Jenkins build server, or a set specifically for this job? What is the best policy?

sopel commented 10 years ago

I've commented in Flowdock regarding the aws-cli. Concerning the AWS credentials:

Given this Jenkins server is logsearch specific, I'd rather not use dedicated credentials at all, but instead use a dedicated IAM role for the EC2 instances (that's also how the aws-cli works for the Labs AWS Backups in the generic Jenkins cluster). I think that's much more convenient, besides being more secure in the first place.

The Jenkins slaves are already using a dedicated jenkins-slave role (currently shared between all clusters; we likely want to change that down the road). The master, however, hasn't been started with an IAM role yet - can you change this via BOSH? If not, that would be another reason to constrain all but meta builds to the slaves, which would also considerably ease maintaining the build environments.
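
(For illustration, not from the thread: with an instance role in place, the aws-cli needs no stored credentials at all - it fetches temporary ones from the EC2 instance metadata service, which a job can inspect directly:)

```bash
# Sketch: on a slave launched with the jenkins-slave IAM role, list the
# role(s) available via the EC2 instance metadata service; appending the
# role name to the URL returns temporary credentials as JSON, which the
# aws-cli picks up automatically.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
```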

mrdavidlaing commented 10 years ago

@sopel - My cursory reading of the BOSH AWS instance creation code suggests that it can't currently assign IAM roles.

I've asked the bosh-users mailing list for confirmation.

For now, let's assume that our Jenkins Master instance won't have an IAM role.

I propose that we start with dedicated AWS credentials per job (possibly shared between jobs), stored using the "Inject passwords to the build as environment variables" job config feature, since that (a) mirrors the way we use git credentials and (b) gives the greatest job permission granularity. [screenshot of the Jenkins job configuration]
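
As a rough sketch of how a job would consume such injected credentials - assuming the standard variable names the aws-cli reads from its environment:

```bash
# Sketch: the "inject passwords" feature exposes the credentials to the
# build as environment variables; the aws-cli honours AWS_ACCESS_KEY_ID
# and AWS_SECRET_ACCESS_KEY without any further configuration.
export AWS_DEFAULT_REGION="eu-west-1"     # hypothetical region
aws ec2 describe-instances > /dev/null \
  && echo "injected credentials work"     # simple smoke test
```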

Eventually we want all our builds to execute in an LXC container (e.g. Docker). Let's simulate that environment for now using a separate build slave, and build out a provision_logsearch_sysadmin.sh script that provisions a base Ubuntu 12.04 VM/container with the required dependencies.

The first thing the job will do is run:

sudo provision_logsearch_sysadmin.sh

The provisioning script should install / configure (if missing):

  1. git
  2. Ruby
  3. Python
  4. awscli
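
A minimal sketch of what provision_logsearch_sysadmin.sh might look like on a base Ubuntu 12.04 image - package names are assumptions, and awscli is installed via pip on the assumption that no suitable apt package exists on 12.04:

```bash
#!/bin/bash -e
# Sketch of provision_logsearch_sysadmin.sh for a base Ubuntu 12.04 VM.
# Installs the four dependencies listed above only if they are missing.

sudo apt-get update

install_if_missing() {
  command -v "$1" > /dev/null || sudo apt-get install -y "$2"
}

install_if_missing git    git
install_if_missing ruby   ruby1.9.1     # assumed package name
install_if_missing python python
install_if_missing pip    python-pip

# awscli via pip, assuming no apt package is available on 12.04
command -v aws > /dev/null || sudo pip install awscli
```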

@sopel - please can you create a new build slave, and make sure that the jenkins user can run sudo without a password?

@dpb587 - please can you start writing said provision_logsearch_sysadmin.sh script.

sopel commented 10 years ago

Sounds good as a general direction. I do not think we need a new build slave for this, insofar as the current one already offers 1-4 and also uses the IAM role jenkins-slave, which can receive additional permissions as we see fit (thereby obsoleting the need for dedicated AWS credentials right now; they can of course be added regardless if so desired).

This doesn't solve the one issue that triggered this topic in the first place, though: the slave is on a dynamically assigned IP address and as such unable to reach through the firewall for the deployment job. Or are you proposing to launch a slave on a dedicated on-demand EC2 instance (in contrast to the current auto-scaled ones) and assign it an Elastic IP? Either way, there are a few options to tackle this.

Let's discuss this during today's hangout.

sopel commented 10 years ago

Decision number 1: AWS credentials will be provisioned per job/context within Jenkins with the minimum possible IAM requirements.
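
For illustration, a minimal-permission setup for the scaling job might look as follows - user and policy names are hypothetical, and the policy grants only the single Auto Scaling action the job needs:

```bash
# Sketch: dedicated IAM user for the day/night scaling job, restricted to
# setting the desired capacity of Auto Scaling groups (names hypothetical).
aws iam create-user --user-name jenkins-job-day-night-scaling

aws iam put-user-policy \
  --user-name jenkins-job-day-night-scaling \
  --policy-name allow-set-desired-capacity \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "autoscaling:SetDesiredCapacity",
      "Resource": "*"
    }]
  }'

# The resulting access key is what gets injected into the Jenkins job:
aws iam create-access-key --user-name jenkins-job-day-night-scaling
```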

sopel commented 10 years ago

Decision number 2: every job will ensure and recreate the required build environment via a dedicated build task/script; ideally this will be embedded in a container later on (e.g. a Docker image), which will allow us to (optionally) optimize build times by pre-committing the required environment as we go.

sopel commented 10 years ago

Decision number 3: the firewall issue will be worked around by using the existing SSH tunnel option.
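
A rough sketch of such a tunnel, assuming a host with a fixed, firewall-whitelisted IP - hostnames and ports here are hypothetical:

```bash
# Sketch: forward a local port through a whitelisted host with a fixed IP
# so the deployment job on a dynamically addressed slave can reach the
# target behind the firewall.
ssh -f -N -L 4222:deploy-target.internal:4222 jenkins@bastion.example.com

# The job then talks to localhost:4222 instead of the firewalled target.
```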

sopel commented 10 years ago

Decision number 4: we'll reprovision the existing slave (which has partially pre-configured build environments for Node.js/Python/Ruby etc.) from a pristine Ubuntu 12.04 LTS image to ensure every job can prepare its required environment itself.

sopel commented 10 years ago

@dpb587, @mrdavidlaing - in hindsight the latter is a bit questionable, insofar as it won't help as long as we don't use something like Docker right away to isolate the jobs; we'd be back at the usual language/platform runtime plus virtual environment configuration.

mrdavidlaing commented 10 years ago

When the provisioning scripts start stepping on each other's toes, we'll isolate jobs on different slaves to simulate Docker containers.

When we implement Docker containers, we'll save some money.

sopel commented 10 years ago

Seems to be a misunderstanding - I'm simply saying that it is 100% guaranteed that several jobs will require e.g. the Python and Ruby runtimes from the get-go, and there is a well-established, simple-to-use pattern to deal with this that doesn't require containerization right away: install only the respective language runtime, its package manager and its virtual environment manager, and use a virtual environment per job, which ensures the isolation.

Accordingly, I propose to relax the 'pristine' Ubuntu instance requirement in this regard and consider each job to be pristine at the point where it has created its virtual environment. I agree that each job should still validate and install the language runtime requirements as a matter of principle, but it shouldn't care if they are installed already (rather, be happy that this comparatively long installation is already done ;)
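
As a sketch (not from the thread), a job's build script could isolate its Python and Ruby dependencies like this:

```bash
#!/bin/bash -e
# Sketch: per-job isolation via language-level virtual environments
# instead of containers.

# Python: one virtualenv per job workspace
virtualenv .venv
. .venv/bin/activate
pip install awscli              # job-local, leaves the system Python alone

# Ruby: bundler with a job-local install path
bundle install --path vendor/bundle
bundle exec rake deploy         # hypothetical task name
```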

I'll proceed with this assumption if nobody objects to using virtual environments within the build tasks/scripts.

dpb587 commented 10 years ago

Closing this in favor of the proposed implementation in #341.