Aside from occasional bug fixes or security patches, Genesis is no longer actively supported.
Its successor, Evolution, has a similar feature set, as well as a built-in importer for existing WordPress projects.
This project was recently moved across organizations on GitHub. To continue using it, you'll have to update the generator:
npm update generator-genesis-wordpress -g
Rapidly create, develop, & deploy WordPress across multiple environments.
Ensure you have the latest versions of NodeJS v0.10, Vagrant v1.6.*, & VirtualBox v4.2.*.
Install Yeoman v1, Bower v1.3.3+, Genesis WordPress Generator, & Vagrant Host Manager:
$ npm install -g yo bower generator-genesis-wordpress
$ vagrant plugin install vagrant-hostmanager
If you get EMFILE issues, try running: $ ulimit -n 4096
(You can check your versions by running node --version, npm --version, etc.)
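Since mismatched tool versions are a common source of trouble, a loop like the following (a sketch; adjust the tool list to taste) prints what's installed in one go:

```shell
# Print the version of each required tool, or note that it's missing.
# "command -v" guards against tools that aren't installed yet.
for tool in node npm vagrant VBoxManage; do
  command -v "$tool" >/dev/null 2>&1 && "$tool" --version || echo "$tool: not installed"
done
```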
Install Capistrano v2.15.* via Bundler & Ansible:
$ sudo bundle install
$ sudo easy_install pip
$ sudo pip install ansible
Use the Genesis WordPress Generator for scaffolding.
First, ensure you're using the latest version of Genesis WordPress with Bower:
$ bower update
Next, use Vagrant to create & provision your local environment:
$ vagrant up
Now open http://local.mysite.com (or whatever your site's domain name is)!
If the site doesn't load for you, you may have to manually provision your local machine:
$ vagrant provision
Or, update your local /etc/hosts with Vagrant Host Manager:
$ vagrant hostmanager
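For reference, a hostmanager-managed /etc/hosts entry looks roughly like this (the IP and hostname below are illustrative; yours come from the project's Vagrantfile):

```
## vagrant-hostmanager-start
192.168.33.10  local.mysite.com
## vagrant-hostmanager-end
```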
Finally, if things worked while you were at the office but broke when you got home, you probably just need to get Vagrant a new IP address:
$ vagrant reload
When you're done working on your site, suspend the VM to save on CPU & memory:
$ vagrant suspend
You can destroy the VM entirely (while keeping your local files) to save on disk space:
$ vagrant destroy
First, ensure your project on GitHub can be accessed by remote servers. To do this, open the project's Settings -> Deploy Keys on GitHub and add provisioning/files/ssh/id_rsa.pub.
Next, assuming the server has been provisioned, deploy your code from GitHub:
$ bundle exec cap production deploy
The latest code is now live:
> http://production.mysite.com/
If you deploy to staging, the name of the current branch (e.g. my-feature) is deployed:
> http://my-feature.staging.mysite.com/
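A typical feature-branch flow might look like this (the branch name is illustrative; remember that deploys pull from GitHub, not your working copy):

```shell
git checkout -b my-feature        # create the feature branch
git push origin my-feature        # deploys use GitHub as the source, so push first
bundle exec cap staging deploy    # serves at http://my-feature.staging.mysite.com/
```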
In the rare event the changes weren't supposed to go live, you can rollback to the previous release:
$ bundle exec cap production deploy:rollback
Note that deployments use the project's Github repository as the source, not your local machine!
Suppose you have just provisioned & deployed to a new server, but the site obviously won't work without a database or uploaded images.
You can overwrite the remote database with your local VM's:
$ bundle exec cap production genesis:up:db
You can sync your local files to the remote filesystem:
$ bundle exec cap production genesis:up:files
Or, you can perform both actions together:
$ bundle exec cap production genesis:up
Once a site is live, you rarely need to sync anything up to the remote server. If anything, you usually sync changes down.
In the rare case you need to destructively sync files to the remote server (meaning, delete files on the remote that don't exist locally), there is a special-purpose command:
$ bundle exec cap production genesis:up:mirror
Suppose you have a live site that you need to work on locally. As in the previous section, you can sync down the database, the files (e.g. uploaded images), or both:
$ bundle exec cap production genesis:down:db
$ bundle exec cap production genesis:down:files
$ bundle exec cap production genesis:down
The following environments are expected to exist and resolve via DNS to simplify deployment & provisioning:
- local (e.g. http://local.mysite.com/)
- staging (e.g. http://staging.mysite.com/, http://my-feature.staging.mysite.com/)
- production (e.g. http://production.mysite.com/, http://www.mysite.com/, http://mysite.com/)
If you're deploying to a new machine (e.g. production.mysite.com), you first need to provision it:
$ bundle exec cap production genesis:provision
If there is an error, you may be prompted to re-run the command with an explicit username/password:
$ bundle exec cap production genesis:provision -S user=myuser -S password=mypassword
From that point on, tasks will use a private key (provisioning/files/ssh/id_rsa).
In the event you already have a live site, you can modify the settings in deployment/stages/old.rb to migrate the old server to a new server:
# Start the local VM
$ vagrant up
# Provision the new server
$ bundle exec cap production genesis:provision
$ bundle exec cap production deploy
# Download the old site to local
$ bundle exec cap old genesis:down
# Upload the old site to production
$ bundle exec cap production genesis:up
Now you can switch DNS for http://www.mysite.com/ to point to http://production.mysite.com/'s IP!
Most of the functionality regarding remote servers is handled by custom Capistrano tasks, which you can see by running:
$ bundle exec cap -T genesis
cap genesis:down # Downloads remote database & syncs remote files into Vagrant
cap genesis:down:db # Downloads remote database into Vagrant
cap genesis:down:files # Downloads remote files to Vagrant
cap genesis:logs # Tail Apache error logs
cap genesis:permissions # Fix permissions
cap genesis:provision # Runs project provisioning script on server
cap genesis:restart # Restart Apache + Varnish
cap genesis:ssh # SSH into machine
cap genesis:start # Start Apache + Varnish
cap genesis:stop # Stop Apache + Varnish
cap genesis:up # Uploads Vagrant database & local files into production
cap genesis:up:db # Uploads Vagrant database into remote
cap genesis:up:files # Uploads local project files to remote
cap genesis:up:mirror # **DESTRUCTIVELY** overrides files on remote
cap genesis:teardown # Remove any existing remote deployment files; counterpart to cap's built-in deploy:setup
Now run any one of those commands against an environment:
$ bundle exec cap local genesis:restart
If you're seeing this:
$ bundle exec cap staging genesis:ssh
deploy@staging.example.com's password:
Then the deploy user's ssh keys on your remote server do not match the keys in your local repository.
You should first ensure that your local repository is up to date, thereby ensuring you are using the latest versioned ssh keys.
$ git checkout master
$ git pull origin master
$ bundle exec cap staging genesis:ssh
If the problem persists, this means that the keys on your remote server are out of date or otherwise incorrect, and you must re-provision by specifying a username and password:
$ bundle exec cap staging genesis:provision -S user=userWithRootOrSudoAccess -S password=usersHopefullyStrongPassword
If you're seeing this:
$ bundle exec cap staging genesis:ssh
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
d3:4d:b4:4f:d3:4d:b4:4f:d3:4d:b4:4f:d3:4d:b4:4f.
Please contact your system administrator.
Add correct host key in ~/.ssh/known_hosts to get rid of this message.
Offending RSA key in ~/.ssh/known_hosts:68
RSA host key for staging.example.com has changed and you have requested strict checking.
Host key verification failed.
Then you have at least one existing entry in your ~/.ssh/known_hosts file (indicated, in the example above, to be on line 68) with a different key than the server is returning.
You can search for all lines matching the server name and/or IP address using grep:
$ cat ~/.ssh/known_hosts | grep -n "staging.example.com"
68:staging.example.com,192.168.1.42 ssh-rsa AAAAB3NzaCd34db33f...
Now, remove those lines from said file, using your text editor of choice.
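Alternatively, ssh-keygen can remove the entry for you. The demo below works on a scratch file so nothing real is touched; point -f at your actual ~/.ssh/known_hosts to do it for real (the hostname and key are illustrative):

```shell
# Build a scratch known_hosts containing one stale entry:
KNOWN=/tmp/known_hosts.demo
printf 'staging.example.com,192.168.1.42 ssh-rsa AAAAB3NzaCd34db33f\n' > "$KNOWN"

# ssh-keygen -R deletes every line for the named host, in place
# (a backup copy is kept at "$KNOWN.old"):
ssh-keygen -R staging.example.com -f "$KNOWN"
```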
If you're seeing this:
servers: ["production.yourwebsite.com"]
[production.yourwebsite.com] executing command
** [production.yourwebsite.com :: out] Permission denied (publickey).
** [production.yourwebsite.com :: out] fatal: The remote end hung up unexpectedly
Then you probably need to add the SSH keys to your GitHub repo. Open provisioning/files/ssh/id_rsa.pub
and copy/paste the entire contents (the ssh-rsa key) to your repo by visiting Settings > Deploy Keys > Add deploy key.
For more help on this, refer to the GitHub Docs.
If you're seeing this:
SSH authentication failed! This is typically caused by the public/private
keypair for the SSH user not being properly set on the guest VM. Please
verify that the guest VM is setup with the proper public key, and that
the private key path for Vagrant is setup properly as well.
Then you're probably missing the Vagrant public key in your authorized_keys. To add it, run:
curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub >> ~/.ssh/authorized_keys
VBoxManage
If you're seeing this:
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Then you'll need to restart VirtualBox with:
sudo /Library/StartupItems/VirtualBox/VirtualBox restart
If you're seeing this:
==> local: Setting hostname...
Vagrant attempted to execute the capability 'change_host_name'
on the detect guest OS 'linux', but the guest doesn't
support that capability. This capability is required for your
configuration of Vagrant. Please either reconfigure Vagrant to
avoid this capability or fix the issue by creating the capability.
You've hit a known issue with vagrant 1.8.5 (the latest version at the time of this writing). You will need to patch vagrant as described here, or wait for the upcoming 1.8.6 release.
If you're seeing this:
** [out :: production.yourwebsite.com] bower ENOTEMPTY ENOTEMPTY, rename '/tmp/deploy/bower/jquery-14131-V6RuH0'
** [out :: production.yourwebsite.com]
** [out :: production.yourwebsite.com] Stack trace:
** [out :: production.yourwebsite.com] Error: ENOTEMPTY, rename '/tmp/deploy/bower/jquery-14131-V6RuH0'
** [out :: production.yourwebsite.com]
You've been bitten by a known concurrency bug, and need to update bower on your remote server to 1.3.3 or higher:
npm install -g bower
apt-get -f install
If you're seeing this:
** [out :: staging.yourwebsite.com] E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
** [out :: staging.yourwebsite.com] ERROR: hostname is not a legal parameter in an Ansible task or handler
You likely have a full /boot partition, which is preventing Ubuntu's package manager from functioning normally. To fix this, you'll need to remove outdated Linux kernels -- this must be done on the remote server.
You may go through the steps manually, or use a python script that must be run with root or sudo permissions.
wget -O /tmp/purge-kernels.py https://raw.githubusercontent.com/EvanK/ubuntu-purge-kernels/master/purge-kernels.py
sudo python /tmp/purge-kernels.py
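If you'd rather go the manual route, first compare which kernels are installed against the one that's running (Ubuntu package naming assumed), then purge the old linux-image packages with apt-get -- never the running one:

```shell
# The kernel currently running -- never remove this one:
uname -r

# All installed kernel image packages (Ubuntu naming assumed):
dpkg -l 'linux-image-[0-9]*' 2>/dev/null | awk '/^ii/{print $2}'
```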
Afterward, you should have apt and dpkg clean up after themselves, update the grub kernel list, and reboot the server:
sudo apt-get -f install
sudo apt-get -f autoremove
sudo update-grub
sudo reboot now
Due to a known issue in MySQL 5.5 and below, the InnoDB storage engine will consume but never release disk space by default:
deploy@production:~$ sudo ls -alh /var/lib/mysql/ib{data1,_logfile0,_logfile1}
-rw-rw---- 1 mysql mysql 1.6G Aug 29 12:50 ibdata1
-rw-rw---- 1 mysql mysql 5.0M Aug 29 12:50 ib_logfile0
-rw-rw---- 1 mysql mysql 5.0M Aug 29 12:44 ib_logfile1
This issue is resolved with a change to my.cnf (in the provisioning provided by genesis v0.3.2), and the following steps:
1. Sync down a copy of your target database (production in this example):
   bundle exec cap production genesis:down:db
2. Reprovision the target stage to update my.cnf (with genesis v0.3.2 or higher):
   bundle exec cap production genesis:provision
3. SSH in (bundle exec cap production genesis:ssh) and remove the InnoDB files:
   sudo service mysql stop
   sudo rm /var/lib/mysql/ib{data1,_logfile0,_logfile1}
   sudo service mysql start
4. Reprovision a second time, to recreate the WordPress database and user:
   bundle exec cap production genesis:provision
5. Sync up your database copy:
   bundle exec cap production genesis:up:db
You may also need to restart services afterward: bundle exec cap production genesis:restart