philcali / sbt-aws-plugin

A simple AWS EC2 IO inside of an sbt console
MIT License

AWS EC2 SBT Plugin

This plugin allows maintaining EC2 environments from inside the sbt console. It is particularly useful for deploying groups of dependent cloud servers for tasks like a Selenium grid, Akka clusters, Mongo config servers, etc.

You can specify monitoring for a group, and triggers for when its instances are "hot". You can optionally specify an ssh client to run commands on instances.

Essentially, it's possible to build and deploy to target EC2 environments via the sbt build scripts we know and love.

Installation

project/plugins.sbt

addSbtPlugin("com.github.philcali" % "sbt-aws-plugin" % "0.1.0")

or via git uri:

// in project/plugins.sbt
lazy val root = project.in(file(".")).dependsOn(awsPlugin)
lazy val awsPlugin = uri("git://github.com/philcali/sbt-aws-plugin")

Additional plugin information (like global plugins, etc) can be found at the sbt plugin documentation.

Requirements

Plugin Structure

At this point in the plugin's life, the main points are:

Logical groups are named describeImageRequests, or NamedAwsRequests. These are added via the awsEc2.requests key.

The ability to create environments from these logical groups is crucial, and that's where awsEc2.actions comes into play. This is a collection of NamedAwsActions.

Built-in actions include create, alert, status, and terminate, all of which appear in the example below.

Once an instance is created or started, the awsEc2.started callback is invoked. This callback is particularly useful for mapping elastic IP addresses, monitors, and status checks to newly created instances in the logical group.

Once instances are denoted as being hot, the awsEc2.running callback is invoked. This callback is often even more useful, as it can coordinate the coupling of two or more logical groups.

Finally, there's the NamedSshScript. This is optional, but it enhances the plugin's automation capabilities with the ability to execute remote commands and transfer files over ssh, from Scala code.
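
Putting those pieces together, a minimal sketch of registering a logical group and a started callback might look like this. Only the awsEc2.requests, awsEc2.started, and NamedAwsRequest names come from the plugin; the NamedAwsRequest constructor shape shown here is an assumption:

import com.amazonaws.services.ec2.model.DescribeImagesRequest

// Register a logical group named "hub", backed by a describe-images
// request (NamedAwsRequest's exact signature is assumed, not confirmed):
awsEc2.requests += NamedAwsRequest(
  "hub",
  new DescribeImagesRequest().withOwners("ownerId")
)

// React to freshly created or started instances, e.g. to attach
// elastic IPs or monitors:
awsEc2.started := { instance =>
  instance.get("group") foreach { group =>
    streams.value.log.info(s"Instance started in group ${group}")
  }
}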

Ideal Setup

Keep your AWS credentials in an aws.sbt and your ssh client info in an ssh.sbt (the example below assumes this layout). With that in place, you can create EC2 instances to test deployment, integration, etc., for your apps.

Learn by Example

In this example, assume we're building a Selenium test program that connects to a hub, which drives UI nodes against the application server where your app is deployed.

Our logical groups here are pretty clear: a Selenium hub, the UI nodes, and the application server.

Assuming our aws.sbt contains the credentials and ssh.sbt contains the ssh client info, we need to define the image requests: one each for the hub, the nodes, and the app server.

{
  "owners": ["your ownerID", "someone else's id?"],
  "filters": [
    { "name": "tag:Type", "value": "Selenium Hub" }
  ]
}

{
  "owners": ["ownerId"],
  "filters": [
    { "name": "tag:Type", "value": "Selenium Nodes" }
  ]
}

{
  "owners": ["ownerId"],
  "filters": [
    { "name": "name", "value": "Java7 App Server" }
  ]
}
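
For reference, the first JSON request above corresponds to this stock AWS SDK call (how the plugin parses the JSON is internal to it; this is just the equivalent request):

import com.amazonaws.services.ec2.model.{DescribeImagesRequest, Filter}

// Owners and tag filter mirror the "hub" JSON request above:
val hubImages = new DescribeImagesRequest()
  .withOwners("your ownerID", "someone else's id?")
  .withFilters(new Filter("tag:Type").withValues("Selenium Hub"))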

It is recommended to customize the awsEc2.configuredInstance key to attach instance size, security group, or key pair info for the create action.

awsEc2.configuredInstance := {
  case ("hub", image) =>
    awsEc2.defaultRunRequest(image, "m1.small")
      .withMinCount(1)
      .withMaxCount(1)
      .withSecurityGroups("Selenium Grid Server")
  case ("nodes", image) =>
    awsEc2.defaultRunRequest(image)
      .withMinCount(1)
      .withMaxCount(1)
      .withSecurityGroups("UI Group")
  case ("app", image) =>
    awsEc2.defaultRunRequest(image, "m1.small")
      .withMinCount(1)
      .withMaxCount(1)
      .withSecurityGroups("App Group")
}
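
The withMinCount, withMaxCount, and withSecurityGroups calls above are the AWS SDK's own RunInstancesRequest builders, which suggests awsEc2.defaultRunRequest hands back a stock RunInstancesRequest you can keep chaining on. Here's a sketch of the "hub" case expressed against the raw SDK, with a hypothetical key pair attached:

import com.amazonaws.services.ec2.model.RunInstancesRequest

// Roughly what the "hub" case builds; "grid-keypair" is a hypothetical
// key pair name, not something the plugin defines:
def hubRunRequest(imageId: String): RunInstancesRequest =
  new RunInstancesRequest()
    .withImageId(imageId)
    .withInstanceType("m1.small")
    .withMinCount(1)
    .withMaxCount(1)
    .withSecurityGroups("Selenium Grid Server")
    .withKeyName("grid-keypair")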

At this point, you can run awsEc2Run create nodes in the shell, and it will create all of the UI node instances. Automatically creating the instances doesn't give us much if they can't wire themselves up to one another, though. In the Selenium grid architecture, the hub must be running first, and the nodes connect themselves to it. Let's create a few NamedSshScripts to launch the grid, connect the nodes to it, and deploy the app.

val seleniumJar = "java -jar selenium-server.jar"

awsSsh.scripts += NamedSshScript("grid", execute = {
  _.exec(s"${seleniumJar} -role hub > /dev/null &2>1 &")
})

awsSsh.scripts += NamedSshScript("node", execute = {
  client =>
  val query = MongoDBObject("group" -> "hub")
  awsMongo.collection.value.findOne(query) match {
    case Some(instance) =>
    val hubUrl = s"http://${instance("publicDns")}:4444/grid/register"
    client.exec(s"${seleniumJar} -role node -hub ${hubUrl} > /dev/null &2>1 &")
    case None => Left("Please create the hub >:(")
  }
})

awsSsh.scripts += NamedSshScript("deploy", execute = {
  sshClient =>
  val jar = "~/" + (jarName in assembly).value
  val assemblyJar = (outputPath in assembly).value.getAbsolutePath

  sshClient.upload(assemblyJar, jar).right.map {
    _.exec("java -jar " + jar)
  }
})

Now that the scripts are in place, we can execute them when the instances are hot by tying them into awsEc2.running.

awsEc2.running := { instance =>
  val execute = (script: NamedSshScript) => {
    awsSsh.retry(delay = awsEc2.pollingInterval.value) {
      awsSsh.connectScript(instance, awsSsh.config.value)(script)
    }
  }
  instance.get("group") foreach {
    case "hub" =>
      awsSsh.scripts.value.find(_.name == "grid") foreach execute
    case "nodes" =>
      awsSsh.scripts.value.find(_.name == "node") foreach execute
    case _ =>
      streams.value.log.info("Instance is running.")
  }
}

The running callbacks fire when a logical group's alert triggers:

> awsEc2Run create *
> awsEc2Run alert hub
> awsEc2Run alert nodes
> awsEc2Run alert app
> assembly
> awsSshRun deploy app

Assuming that test:run launches the Selenium test suite with args for the hub URL and the app URL:

> awsEc2Run status app
> awsEc2Run status hub

That'll give you the public DNS of the app and hub, respectively.

> test:run http://hubPublicDns:4444/wd/hub http://appPublicDns
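
A minimal sketch of what that runner's entry point could look like; the Main object, its argument handling, and the page-title check are illustrative assumptions, not part of the plugin:

import java.net.URL
import org.openqa.selenium.remote.{DesiredCapabilities, RemoteWebDriver}

object Main {
  def main(args: Array[String]): Unit = {
    val Array(hubUrl, appUrl) = args
    // Drive a browser on one of the grid nodes, pointed at the deployed app
    val driver = new RemoteWebDriver(new URL(hubUrl), DesiredCapabilities.firefox())
    try {
      driver.get(appUrl)
      assert(driver.getTitle.nonEmpty, "App did not render a page title")
    } finally driver.quit()
  }
}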

Run it as much as you like until you are ready to destroy the groups.

> awsEc2Run terminate *

Obviously, this process could be improved a bit if the runner could access the mongo EC2 instance collection.
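
A sketch of that improvement, reusing the awsMongo collection the same way the "node" script above does (the runGrid task key and the URL handoff are hypothetical):

val runGrid = taskKey[Unit]("Runs the Selenium suite against the live groups")

runGrid := {
  // Look up a group's public DNS from the plugin's instance collection
  def dnsOf(group: String): String =
    awsMongo.collection.value
      .findOne(MongoDBObject("group" -> group))
      .map(_("publicDns").toString)
      .getOrElse(sys.error(s"No running instance for group ${group}"))

  val hubUrl = s"http://${dnsOf("hub")}:4444/wd/hub"
  val appUrl = s"http://${dnsOf("app")}"
  // Hand the URLs to the test suite here, e.g. via (runner in Test)
  streams.value.log.info(s"Testing ${appUrl} through ${hubUrl}")
}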