Closed: ErisDS closed this issue 11 years ago
I have an EC2 instance, for one.
So the issue at hand is not really where - we will have our own servers - but rather what things we need to do around this.
I was thinking of taking a look at this next via a vagrant ec2 plugin. We can have it spin up an ec2 instance, install node, nginx, clone the repo at a specific branch/commit and have some basic auth with just user/password on the whole site.
This sounds fantastic. How would we set up basic auth? Can it be an option as part of the plugin or would it need to be a separate piece of config?
I've been looking at the ngx_http_auth_basic_module for nginx and I think it will do what we want. I'm going to run some tests on the vagrant box and follow up.
I've confirmed that the auth_basic config lines work on our vagrant machine and nginx. We get one prompt for username and pass and then the site loads like normal without any other prompts.
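For reference, a provisioning-script fragment that writes an nginx site config with the auth_basic lines we tested might look like this. This is a sketch only: the realm text, the output path (which would really be something like /etc/nginx/sites-available/ghost), and the proxy port (Ghost's default, 2368) are assumptions, not the exact config from our box.

```shell
#!/bin/sh
# Sketch: emit an nginx site config with the auth_basic setup that worked on
# the vagrant machine. SITE_CONF defaults to the current directory so the
# script can be dry-run without root; the real target path is an assumption.
SITE_CONF=${SITE_CONF:-ghost-site.conf}   # e.g. /etc/nginx/sites-available/ghost

cat > "$SITE_CONF" <<'EOF'
server {
    listen 80;

    location / {
        # One prompt for user/pass, then the site loads as normal
        auth_basic           "Ghost test site";
        auth_basic_user_file /home/vagrant/.htpasswd;
        proxy_pass           http://127.0.0.1:2368;
    }
}
EOF

echo "wrote $SITE_CONF"
```

After writing the file you would symlink it into sites-enabled and reload nginx; that part is omitted here.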
I'm trying to expand it to work with the vagrant ec2 plugin and get it set up so we don't check in our AWS keys or the auth user/pass. The basic setup will have to be that the person doing the deploying sets some special Puppet facts in the Vagrantfile, like:
puppet.facter = {
  # Secure the test site
  "aws_secret"   => "blah",
  "aws_key"      => "blah",
  "secureSite"   => true,
  "userFilePath" => "/home/vagrant/.htpasswd",
  "userName"     => "ghost",
  "userPass"     => "ghost"
}
Eventually we could load these values from a deployment_settings.rb or something and just add that file to the .gitignore.
A conservative estimated timeline is by Friday evening, but I'm gonna push for tomorrow night.
Thinking further about this method of deployment now that I understand better how it works, I realise that whilst it's great for each of the contributors doing personal deployments from their vagrant setups, it may not be suitable for what I had in mind in terms of test servers. I also realise that what I had in mind wasn't clear.
I imagined that the Ghost AWS account would have a set of instances which were always the same. One would be where we run blog.tryghost.org, would only be updated by me/John but would be accessible to the public. The others could all be updated by any core contributor, and would be private/password protected. There would be one which always ran the latest stable version (master) and might have a domain name assigned to it such as ghosttestingmaster.tryghost.org, one which ran the next/up-coming stable version (0.2.0 for us now) and a suite (say 3-5) of test servers for contributors to deploy their own feature branches (ghosttesting[1..5].tryghost.org).
The purpose is to have a set of known locations for testing, so that we could get an army of QA engineers keeping an eye on our work, as well as regularly test each other's work. Large features/changes could be rolled out when a PR is sent, so that the new code could easily be tested by more people, as well as the code being read. The testing servers might also be useful for collaborating on large new features.
I'm really not sure what the best way to do this is, but I think we need to separate out provisioning and deployment. Just have a couple of shell scripts: one to provision the servers correctly with our dependencies so it's a reliable process, and one to actually do a deploy, which takes the server & git details (repo, branch) and puts the right stuff in the right place?
I've been thinking about this quite a lot. There are two main issues here which are vastly complicating deployments: 1) Ghost needs a build step (compiling the Sass & templates) which drags in its own dependencies, and 2) we need servers provisioned to our own set of requirements.
For 1) we need to be able to clone Ghost, build the Sass & templates - effectively do a build - and then ship the build to the deployment server, rather than trying to get a deployment server with Ruby etc. all set up. We need a build server for nightlies, so perhaps (long term) we can use that server to trigger deployments too. For now, we can use our local machines as build servers - we all have the dependencies. We just need a script which we can point at a GitHub repo & refspec (branch/tag/commit) and at a server, and we should be golden.
For 2) we have to roll our own VPS on EC2 with our own set of requirements. I think it should be reasonably easy to set this up and scriptify it so that it is repeatable.
Therefore, I think the path of least resistance is to end up with two basic scripts which do the work for us. I'm working on this now, but if anyone else has the time and is willing, I'd love for someone to take over :)
I've never done anything like this before, but here's my attempt at a shell script for configuring an EC2 instance: https://gist.github.com/ErisDS/b75be8bfe12c337a17bb
If I put the example helloworld node.js app in /var/www/app.js, it all works.
It probably needs some more configuration, but I think it's kinda the right idea? The main thing missing is the option to make the server password protected.
Now we need a script to deploy to a server configured this way.
Instructions
Looks good. I just followed on my own ec2 instance and got the same results.
I thought about this a lot last night too and I'd like to build a command line tool, sort of like express. We could call it ghost-cli and have it be a separate project that gets npm-installed globally (like express or grunt-cli).
I think there are some really cool things we could do like
I could elaborate more on this as individual stories if we got another repo going.
@jgable Could you possibly give me a hand by getting that config script to optionally add basic auth? So if you run the script with --password MySecurePhrase, it sets it up with the username ghost and MySecurePhrase as the password.
With regards to ghost-cli, that's totally in the plan. However, the way you've described it you're kinda describing two different things: the first is CLI Ghost management, which we will definitely have in the Ghost project as we'll need it to do proper installs from npm; the second is provisioning, which sounds a lot like the hosted platform we're building :smile:
I took a shot at adding a simple if statement but didn't have time to test it out:
https://gist.github.com/jgable/a6bec2da4d9fde4debfd
Instead of passing --password MySecurePhrase, the password is just the first parameter: ./ghost-ec2-config.sh MySecurePhrase
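The first-parameter branch described here might look roughly like this; the gist is the authoritative version. In this sketch, the htpasswd path, the fixed username "ghost", and the openssl-based hashing (so apache2-utils isn't needed) are assumptions.

```shell
#!/bin/sh
# Sketch: optional basic-auth step for the EC2 config script.
# Password is the first positional parameter; no argument means a public site.
setup_auth() {
    password=$1
    htpasswd=${HTPASSWD:-/home/ubuntu/.htpasswd}   # path is an assumption

    if [ -n "$password" ]; then
        # Generate an htpasswd entry via openssl, avoiding an apache2-utils dependency
        printf 'ghost:%s\n' "$(openssl passwd -apr1 "$password")" > "$htpasswd"
        echo "basic auth enabled for user 'ghost'"
    else
        echo "no password supplied; site will be public"
    fi
}

setup_auth "$1"   # e.g. ./ghost-ec2-config.sh MySecurePhrase
```

The generated file is what nginx's auth_basic_user_file directive would then point at.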
That's fab, I can figure it out from there :)
Also, quick Q: any particular reason for using grunt-zip instead of grunt-contrib-compress?
No reason, just found that one first, I'm fine with switching.
Cool - reason being that grunt-zip foobars Windows directory separators, whereas grunt-contrib-compress seems to get it right. We could remove the dependency entirely and use a shell command to tar the files, but I'm not sure how reliable that would be.
Does Windows have a built-in tar command?
No, but msysgit does :+1:
I've updated the instructions to include how to deploy using the deploy script. You can find my new Ghost blog here: http://ghostblog.hannahwolfe.co.uk/
I've tested redeploys, db is unaffected & everything seems to work ok. I'm counting this as done, and created a new task to finalise things: #178
We need to be able to run test deployments of Ghost
It's also worth considering that we want to switch over blog.tryghost.org to be a public instance of Ghost asap (preferably at 0.1.1, though perhaps at 0.2) and that we want to get all contributors to run their own blog (although whether that's public or private is obviously up to the contributor).