The folks over at Spatie have open sourced the checklist they run each site/project through before hitting the “Go Live” switch. This kind of stuff should be open sourced more often.
The list is a good starting point for creating your own, but of course your mileage may vary depending on the type of project you are building, the environment you run on, the scale your project runs at, the technology used, etc.
On a recent project I collaborated on, deployment happened via git-ftp.py, a Python script which automatically publishes your git repository to an FTP server.
The script itself works with a git-rev.txt file on the FTP server, which keeps track of the last published commit. When deploying via git-ftp.py, the script only uploads the changes made since that last published commit.
git-ftp.py relies on GitPython, which itself can easily be installed using easy_install if you don’t want to worry about dependencies and the like.
Check your Python version at the Terminal by running python (quit the Python prompt by running exit() or hitting CTRL+D)
Place git-ftp.py in any folder you like (I’ve placed mine in ~/Library)
Deploying with git-ftp.py
Before being able to deploy with git-ftp.py, you’ll have to provide it with some FTP credentials. To do so, create a file named ftpdata inside the (hidden) .git folder of your project (so the file is /path/to/project/.git/ftpdata). Set its contents to something like this:
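A minimal sketch of such an ftpdata file; the username, password, host, and remote path below are placeholders:

```ini
[master]
username=ftpuser
password=s3cret
hostname=ftp.example.com
remotepath=/htdocs
ssl=no
```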
Note: you can add per-branch credentials if you want. Just duplicate the block and change [master] to the name of the branch you’re targeting.
Once configured, you can start deploying using this command:
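The exact invocation depends on where you placed the script; assuming the ~/Library location from above, it boils down to:

```shell
# Run from the root of your git project; git-ftp.py location assumed from above
python ~/Library/git-ftp.py
```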
The script will output a list of all files that were uploaded.
Note: if you run git-ftp.py for the very first time against an FTP server that already contains a published version of the project, you should first place a git-rev.txt file on the server. Set the contents of the file to the SHA1 of the last commit that is already present on the server. Otherwise git-ftp.py will upload the whole repository, which is not necessary.
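Seeding that file can be sketched as follows; this assumes the version on the server matches your local HEAD, and the upload of git-rev.txt to the FTP root is done by hand:

```shell
# Write the SHA1 of the commit that's already live into git-rev.txt
git rev-parse HEAD > git-rev.txt
# (now upload git-rev.txt to the root of the FTP server)
```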
Pro tip #1: set up an alias and save some time
To avoid having to type the entire publishing command every time, set up an alias in .bash_profile. Run these commands at the Terminal:
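For example, assuming the ~/Library location from above and picking deploy as the alias name:

```shell
# Append a "deploy" alias to ~/.bash_profile and reload it
echo "alias deploy='python ~/Library/git-ftp.py'" >> ~/.bash_profile
source ~/.bash_profile
```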
One of the things I like about GitHub is the fact that it sports a gh-pages branch. Anything you push to it is automatically published on your GitHub subdomain http://username.github.com/projectname/.
Inspired by this GitHub publishing flow, I’ve set up a similar method on our web servers at work: a branch which gets published automatically onto our web server whenever we push code to it. This way we can eliminate the manual (and tedious) task of FTP’ing to the server (or opening up a network share) and copying the files over in order to publish.
The web server at work is a Windows 2008 R2 machine running a WAMP stack to serve the (mostly static) sites. Each hosted subsite is configured as a vhost and is stored in its own subfolder on disk. Apache runs as a separate user which has limited access to the filesystem.
Next to the web server we also have a private Git server running Linux to/from which code is pushed/pulled. Repositories are accessible via HTTPS and authenticating to this server is done via an HTTP username & password (not via an SSH key).
Of course your mileage may vary:
you might be running all-linux machines with shell access enabled (in which case you might be better off with Capistrano);
or you might have one single server function as both the web and Git server;
Be sure to verify with your favorite text editor that you’ve got the correct version. If it’s correct you’ve successfully deployed your site onto the web server. Break out the champagne, but don’t open it yet, we’ve got some more work to do.
You might not know this but whenever you clone a Git project onto disk, you’ll end up with a (hidden) .git directory in the root of your project. In that directory, everything about the repository clone is stored: commits, branches, hooks, ignore files, remotes, etc.
Now that you’ve deployed the project onto the web server, that .git directory will also be present in your vhost DocumentRoot, meaning that it — and its files — are now publicly accessible via http://subdomain.ikdoeict.be/.git/
To prevent this, adjust your Apache config so the .git folder can no longer be accessed over HTTP. In the vhost configuration that looks something like this (the DirectoryMatch container around the original Deny directive is reconstructed):

```apacheconf
php_admin_value open_basedir "c:/apache/htdocs/vhosts/student.ikdoeict.be;c:/php/temp"

# Deny HTTP access to the (hidden) .git folder
<DirectoryMatch "\.git">
    Order deny,allow
    Deny from all
</DirectoryMatch>
```
Getting the server to fetch updates
Now, how to get changes onto the web server? Make changes on your local machine as you’d normally do, and push them upstream when committed. Be sure to merge your changes into the gh-pages branch, as that’s the version that’s served on the web server.
On the web server, do a git pull to get the latest version:

```shell
git pull origin gh-pages
```
Prerequisite: “standalone” git pull
In order to automate deployment, git pull should run fine without any user intervention, such as having to enter a password.
As our setup requires HTTP username + password authentication, Git will ask for the password each time we push/pull something. To bypass this, edit .git/config so that the remote URL contains the password. If you’re doing a fresh clone, you can clone the repository as https://user:[email protected]/project.git straight away.
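After the edit, the remote section of .git/config would look something like this (user, password, and host are placeholders):

```ini
[remote "origin"]
	url = https://user:[email protected]/project.git
	fetch = +refs/heads/*:refs/remotes/origin/*
```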
Beware though: the password is stored in plaintext! If other people have access to the machine, this isn’t a good idea! Also be sure to have implemented the security step above.
PHP, do your thing!
Up until now we can deploy changes onto the server, yet it still requires us to log in to the server and manually invoke a git pull. What if we could just open up a webpage in our browser which does the updating for us?
Luckily for us, PHP has a built-in function shell_exec, which executes a command via the shell and returns its output. Given this, it’s fairly easy to knock up a PHP script that executes a git pull:
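A minimal sketch of such an update.php; the branch name is taken from the setup above, but any access restriction or error handling is left out:

```php
<?php
// update.php: pull the latest gh-pages version into the DocumentRoot.
// 2>&1 redirects stderr so error messages show up in the browser as well.
echo '<pre>';
echo shell_exec('git pull origin gh-pages 2>&1');
echo '</pre>';
```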
… and — for the final time — do a manual git pull on the web server:

```shell
git pull origin gh-pages
```
Once the file is on the web server, you can from then on update the version on the server by visiting http://subdomain.ikdoeict.be/update.php. The script will print the output of the git pull when done.
Note: Since Apache is running as a limited user, you must give that user R/W permissions on your DocumentRoot so that it can perform the git pull and do all basic CRUD file manipulations. You can test this by making a local change, pushing it, and visiting update.php: if no error occurs, it’s all fine and dandy!
Putting the magic in automagic
Deploying new versions goes smoothly by now, but it’s not fully automated … yet.
You might not know this but Git supports hooks: scripts stored in .git/hooks/ that are executed when certain actions are performed. For example: after a commit, you could let a hook send out a tweet with your commit message.
One of the hooks that’s interesting for us is the post-receive hook, a server-side hook which is executed after a client has performed a push. We can take this hook, and let it call the update.php script for us. This way, we just have to push, and the web server will be updated automagically.
First, create the hook by renaming the sample provided and making it executable:

```shell
mv post-receive.sample post-receive
chmod +x post-receive
```
Second — and most important — adjust the hook’s contents so that it calls the update.php script:
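A sketch of what that hook could look like; the URL comes from the article, and the use of wget (curl would work just as well) is an assumption:

```shell
#!/bin/sh
# post-receive hook: request update.php so the web server pulls the new version.
# --quiet suppresses wget's progress output; the page's output is discarded.
wget --quiet -O - http://subdomain.ikdoeict.be/update.php > /dev/null
```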
In addition to having moved my mail to Google Apps for your Domain, I decided to take it to the next level about a month ago by using Dropbox to store both personal and professional (viz. 3RDS-related) data in the cloud. This decision made me more mobile than ever, as I can access any file, any time, any place: from my PC, from my Mac, and from any other computer connected to the internet. Over the weekend I raised the bar again and started experimenting with Dropbox, resulting in a method by which one can (ab?)use that very same Dropbox as an automagic web publishing solution (as an alternative to SVN and other technologies).