Lesson learned: running Jenkins on Digital Ocean

When I first decided to join Growella as the Director of Technology, one of the most appealing parts was the utter lack of technical debt; after all, if you haven’t built anything yet, you probably don’t have a whole lot of technical debt to worry about!

I decided that we would set out from the start to do things “right” (with “right” being a relative term, based on my past experiences and personal opinions on software development), and one of the first things I wanted to introduce was a Continuous Integration (CI) and Continuous Delivery (CD) workflow.

What are Continuous Integration and Continuous Delivery?

Continuous Integration is a development practice wherein new features, fixes, and other changes are continually being integrated into the codebase. Every time a new change is pushed to the remote repository, an automated build is executed to ensure that tests are passing, standards are met, and that the branch being worked on isn’t in conflict with the main (usually master) branch.
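
To make that concrete, here's a rough sketch of the kind of conflict check a CI build might run on every push, before it even gets to the test suite (illustrative shell only, not any particular CI tool's configuration):

# Hypothetical pre-test check: make sure the branch still merges cleanly with master.
git fetch origin master
if git merge --no-commit --no-ff origin/master; then
    git merge --abort 2>/dev/null || true    # the trial merge succeeded; undo it and carry on
else
    git merge --abort
    echo "This branch conflicts with master; resolve it before merging." >&2
    exit 1                                   # fail the build
fi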

Continuous Delivery, on the other hand, means that at any time we’re able to safely deploy our code to production. We do this by doing all of our work in feature branches, then merging each discrete feature into staging (to test on our staging server) and into master (once it’s been reviewed and is ready for production); the common name for this particular branching workflow is “Git Flow”. A good continuous delivery setup means your team can deploy dozens, if not hundreds, of times a day.
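
In day-to-day Git terms, that flow looks roughly like this (the feature branch name below is made up for illustration):

# Work happens on a dedicated feature branch, cut from master.
git checkout -b feature/new-widget master

# When the feature is ready for testing, it gets merged into staging...
git checkout staging
git merge --no-ff feature/new-widget
git push origin staging

# ...and once it's been reviewed, the same feature is merged into master for production.
git checkout master
git merge --no-ff feature/new-widget
git push origin master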

Enter Jenkins

In order to facilitate a good continuous integration workflow, it’s important to have a server watching for changes and building the software. This entails pulling in dependencies, running any sort of build process (Grunt, Gulp, etc.), and getting the software ready for deployment. An extremely popular tool for this is Jenkins, an open-source, Java-based automation server.

Using Jenkins, we can watch our code repositories for changes and automatically run any number of tasks. Run tests and check coding standards on new commits? Easy. Ensure the app is built without issues? No problem. Jenkins is extremely powerful and highly configurable.
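
As a loose example, the build step for a JavaScript project might boil down to a handful of shell commands like these (assuming the project defines its own lint and test scripts; your toolchain will differ):

# Hypothetical "Execute shell" build step for a Jenkins job.
set -e              # fail the build as soon as any step fails

npm install         # pull in dependencies
npm run lint        # check coding standards (assumes a "lint" script in package.json)
npm test            # run the test suite
grunt build         # run the build process (Grunt here; Gulp et al. work the same way)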

When I started building the engineering infrastructure for Growella, I began with Jenkins running on a $5/month Digital Ocean droplet. I was new to working with Java applications, but getting Jenkins configured was surprisingly straightforward, thanks to Digital Ocean’s setup guide.

Eventually, I had Jenkins automatically deploying every change pushed to the staging branch of the Growella.com repository to our staging server via SFTP, then sending notifications to Growella’s #engineering Slack channel.
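
Conceptually, that deploy-and-notify step amounted to something like the following; the host, path, and webhook URL here are placeholders rather than our actual configuration:

# 1. Upload the built site to the staging server over SFTP.
sftp deploy@staging.example.com <<EOF
put -R build/* /var/www/staging/
EOF

# 2. Post a notification to Slack via an incoming webhook.
curl -X POST -H 'Content-Type: application/json' \
     --data '{"text": "Staging deploy finished for Growella.com"}' \
     https://hooks.slack.com/services/XXX/YYY/ZZZ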

Atomic deployments

I first learned about the concept of “atomic” deployments years ago, while I was maintaining some Rails applications. Capistrano had just hit the scene, offering easy application deployments with zero downtime and a painless path to revert; since working with Capistrano, I’ve considered it the gold standard for deployments.

If you’re not familiar with how Capistrano works under the hood, it’s brilliant in its simplicity: a fresh copy of your application is checked out into a releases directory, shared resources (configuration files, logs, uploads, etc.) get symlinked into place (plus any other scripting that you specify), then a current symlink (which acts as your application’s document root) is updated to point to the latest release. Capistrano keeps track of the last few releases, so at any point you could run something like cap production deploy to release the latest code, then cap production deploy:rollback to revert to the previous release; all Capistrano has to do is update the target for the current symlink.
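
Boiled down to shell commands, the heart of that approach is only a few lines (a simplified sketch with placeholder paths and repository; Capistrano does considerably more bookkeeping):

# A new, timestamped release directory gets a fresh checkout of the application.
APP=/var/www/app
RELEASE="$APP/releases/$(date +%s)"
git clone --depth 1 git@example.com:app.git "$RELEASE"

# Shared resources live outside of the releases and get symlinked into each one.
ln -sfn "$APP/shared/config.yml" "$RELEASE/config.yml"
ln -sfn "$APP/shared/uploads" "$RELEASE/uploads"

# The document root is a symlink; pointing it at the new release is the deployment,
# and pointing it back at the previous release is the rollback.
ln -sfn "$RELEASE" "$APP/current"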

I wasn’t able to find anything readily available for atomic deployments with Jenkins, so I wrote my own very simple bash script to handle it:

ls -1 | sort -r | tail -n +6 | xargs rm -rf

After a successful deployment, I’d have Jenkins execute that command in my releases directory, which (in order):

  1. Lists each release (one per line), where each directory name corresponds to the Unix timestamp of its deployment
  2. Sorts the list by name in reverse order (newest releases on top)
  3. Trims the list to the 6th entry onwards (effectively letting me keep the current deployment plus the 4 previous deployments)
  4. Pipes the oldest deployment(s) to rm -rf, which removes those old releases.

It wasn’t pretty, but it was effective in rolling off old releases. It didn’t handle rollbacks (I’d have to manually update the current symlink), but it was close enough for our needs.
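
For completeness, a manual rollback was only ever a symlink away; something along these lines (again with placeholder paths) finds the previous release and points current back at it:

# Find the second-newest release and point the document root back at it.
cd /var/www/app/releases
PREVIOUS=$(ls -1 | sort -r | sed -n '2p')
ln -sfn "/var/www/app/releases/$PREVIOUS" /var/www/app/current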

Why we’re not still using Jenkins

Great, so I had Jenkins set up to build and deploy our code in an atomic fashion, automatically rolling off old releases while still giving us a safety net should we ever need to revert. Sounds like that’s one less problem to deal with, right?

Did I mention that Jenkins is written in Java? A language known for being rather resource-intensive? As a result, the $5/month, 512MB Digital Ocean droplet needed to be kicked every few deployments to make sure Jenkins was still running. A continuous integration and delivery server that continuously needs to be restarted is more of a continuous pain in the butt than a reliable asset for an engineering team.

Could I have simply thrown more money at the problem, upgrading to a larger server? Absolutely. Is it something I could have sunk a few days into, learning how to squeeze every little bit of performance out of Jenkins on a low-power server? Probably, but that wouldn’t have been an effective use of my time. If you have tips, I’d love to hear them.

Instead, Growella moved to DeployBot, which offers atomic deployments out of the box, provides a nice interface for configuring everything, and — most importantly — lets me focus on building Growella. It also costs about the same as it would for me to run Jenkins on an adequately performant server, but I get to leverage the knowledge and support of another team.

Jenkins was a fun experiment, and I wouldn’t rule out playing with it again, but a good engineer recognizes when it’s best to keep fighting with a tool and when it’s best to switch to something more reliable.
