Journey to continuous integration


When I first heard the term Continuous Integration (CI), my heart leaped like a gazelle. Poetic yet descriptive, entrancing and seductive. I knew instantly its true meaning and knew my software team had to get on board and do Continuous Integration today. — Nobody Ever.

Let’s be honest, the words “Continuous Integration” are a terrible piece of terminology. I respect Martin Fowler, who popularized the term, so I’m going to give him a pass on this one, but when many people first hear “Continuous Integration” their face shows a tortured combination of utter boredom and a sense of creeping doom, maybe with a dash of “what’s that smell?” If you’re already familiar with the jargon, imagine what it’s like for a beginner. When you say “Continuous Integration”, they hear “endless tax forms” or “infinite bureaucracy”. The tragedy is, of course, that while the words are a scourge, the thing itself is wonderful.

Bad names aside, what is CI all about?

CI is the software equivalent of a factory run on robots where deliveries of neatly wrapped source code are made to the cargo bay, fresh off the developers’ keyboards. Tireless automatons take it from there and well-tested, working software rolls off the production line at the other end. What could be more beautiful?

In software this virtual factory with the robots is called a build. At Atlassian we have build systems, dedicated build engineers, and our product, Bamboo, a Continuous Integration and Deployment tool. CI is all about the builds.

In any non-trivial software project, multiple developers are working at once on different parts of a system. Naturally developers should test their work before they push code to the repository. This testing, however, is not exhaustive and the developer’s laptop may not count as a clean, standardized deployment environment.

Needless to say, there are some problems that only show up once the work of different people is put together on a shared installation and subjected to more exhaustive testing. This testing should be automated as much as possible.

Even before rigorous testing, problems can sometimes be discovered at the integration point. Even a basic sanity check of combining all the components under active development with the specific target deployment platform can reveal a problem. On occasion we’ve discovered that our own software is unable to start up on an integration environment even though it works with no drama on a developer’s laptop.

So even if you do not have automated tests, you can still gain some value from CI. And as you add automated tests, one by one, each will deliver far more value with CI than it would without.

Why is integration continuous?

While I’ve joked about the terminology, it’s useful to understand the basis for it. The meaning of integration is bringing the parts together on one or more deployment platforms, like making an installable app or deploying a cloud service to an environment. Some of the typical environments we use for our projects include development, test, QA, dog food, staging, and production. Not all projects will need all stages in this deployment pipeline. Integration includes all the compiling, packaging, perhaps code generation, pre-processing, macro expansion, static defect detection, minification, obfuscation, automated unit testing, perhaps code signing, etc. The exact list of things that happen in a software build is different on every project.
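
To make that list concrete, here is a minimal sketch, not any real build and certainly not Bamboo’s, of an integration script that chains a few of those stages together and stops at the first failure. The Gradle commands are placeholders for whatever compiler, packager, and test runner your project actually uses:

```python
# integration.py - a toy "integration" script; the commands are placeholders
# for whatever build tool your project actually uses.
import subprocess
import sys

STAGES = [
    ("compile",   ["./gradlew", "compileJava"]),  # hypothetical compile step
    ("unit test", ["./gradlew", "test"]),         # automated unit tests
    ("package",   ["./gradlew", "assemble"]),     # build the deployable artifact
]

def run_stage(name, command):
    print(f"== {name} ==")
    result = subprocess.run(command)
    return result.returncode == 0

if __name__ == "__main__":
    for name, command in STAGES:
        if not run_stage(name, command):
            print(f"Integration failed at stage: {name}")
            sys.exit(1)
    print("Integration succeeded: artifact is ready for the next environment")
```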

While “integration” may not be everybody’s word for building deployable artifacts, everybody uses the word “testing”. Do we all mean the same thing by that word? When we test, we check that things perform the way they should. Nothing earth-shattering about testing, but automated testing, now that’s another matter. It’s testing, but done without any human intervention.
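
As a trivial illustration (the helper function and its expected behaviour are invented for this example), an automated test is simply a check a machine can run and judge entirely on its own:

```python
import unittest

def issue_key(project, number):
    # Hypothetical helper: formats a JIRA-style issue key such as "JRA-42".
    return f"{project.upper()}-{number}"

class IssueKeyTest(unittest.TestCase):
    def test_key_is_uppercased_and_joined(self):
        self.assertEqual(issue_key("jra", 42), "JRA-42")

if __name__ == "__main__":
    unittest.main()  # a build server can run this and read the exit code
```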

So that’s integration and automated testing. But what’s with this continuous stuff?

Typically, continuous integration means that all the build processes required to turn the code in the source repository into a working software product happen as often as possible: ideally on every commit, every time somebody makes a small change to the code.
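
A CI server does the watching for you, but the core loop is simple enough to sketch. This purely illustrative poller assumes it runs inside a Git clone, that the branch of interest is origin/main, and that run_build() stands in for kicking off the kind of integration script sketched earlier:

```python
import subprocess
import time

def current_head():
    # Ask Git which commit the remote branch currently points at.
    subprocess.run(["git", "fetch", "--quiet"], check=True)
    out = subprocess.run(["git", "rev-parse", "origin/main"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_build(commit):
    # Placeholder: in reality this would launch the integration pipeline.
    print(f"Building commit {commit[:8]}...")

if __name__ == "__main__":
    last_built = None
    while True:
        head = current_head()
        if head != last_built:
            run_build(head)   # every new commit triggers a build
            last_built = head
        time.sleep(30)        # real CI servers use push notifications or webhooks
```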

Let me tell you how the JIRA team adopted continuous integration and why.

Back in 2005, the JIRA team did not do continuous integration. We had a monstrous hairball of a nightly build script. It was both a horror and a delight. It did everything. The latest version of the code was checked out from source control, compiled, and packaged into multiple distinct editions with different functionality. Each edition was tested every night while the developers slept like babies. Come morning there was a detailed report of what went wrong. Nothing should have gone wrong, since developers test and scrutinize their work, but to no-one’s surprise, sometimes things went wrong. OK, OK, things went wrong constantly — at least they did once the team grew to a dozen or so. Perhaps by rights the developers should not have been sleeping like babies but sitting bolt upright, unable to sleep for the bug crimes they knew they’d committed that day.

Before going continuous, the JIRA build had accumulated such a broad and deep range of automated tests that running them all, for all three JIRA editions, against every permutation of database, JVM version, application server and distribution variant we supported at the time took more than 24 hours. It could no longer even be called a nightly build. We joked, with hollow dread, about renaming it the weekly build.
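
The arithmetic behind that explosion is easy to reproduce. The counts below are invented for illustration, the real JIRA support matrix was different, but they show how a modest-looking matrix multiplies out past anything a single night can hold:

```python
from itertools import product

# Invented example counts; the real JIRA matrix differed.
editions      = ["Standard", "Professional", "Enterprise"]    # 3
databases     = ["postgres", "mysql", "oracle", "sqlserver"]  # 4
jvm_versions  = ["1.4", "5"]                                  # 2
app_servers   = ["tomcat", "weblogic", "websphere"]           # 3
distributions = ["standalone", "war"]                         # 2

combinations = list(product(editions, databases, jvm_versions,
                            app_servers, distributions))
print(len(combinations))  # 3 * 4 * 2 * 3 * 2 = 144 permutations

# Even at a (generous) 15 minutes of tests per permutation:
print(len(combinations) * 15 / 60, "hours")  # 36.0 hours, well past "nightly"
```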

When the team moved to a continuous integration process, that same build process was triggered on every code check-in by a developer.

At least, that was the theory.

The true ethos of continuous integration is that builds happen in response to new code being checked in. The delay between check-in and the completion of a build should be minimized so that the developer who may have introduced a problem can return to the code and fix it, or roll back their changes, before getting too deep into a new task. It’s all about fast feedback.

If we had opted to go the other way and stride foolishly towards a weekly build, we would have had ever longer delays between creating a bug and discovering or fixing it. We would also have increased the disruption to our teammates and ultimately reduced the feature throughput of the development team.

Building every single commit is often talked about as an ideal, but when you have several factors growing quickly, this ideal becomes exponentially more difficult and expensive.

When the number of developers and the complexity in the software both grow, the time required to compile, build and test will grow to exceed the typical time between commits. This is no big deal on smaller code bases with fewer committers as commits happen to land together only occasionally. But when it’s a problem, there are three possible consequences:

  1. Builds need to be queued up,
  2. Builds must be executed in parallel, or
  3. Multiple commits need to be built together.

In the JIRA case we had some of each. Any of these can be OK, depending on the project and the extent.
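
As a rough sketch of the third consequence, building multiple commits together is essentially a queue that drains everything waiting and builds the newest revision, crediting all the commits that rode along. The class and field names here are invented for illustration:

```python
from collections import deque

class CoalescingBuildQueue:
    """Toy model: if commits arrive faster than builds finish, one build
    covers every commit that queued up in the meantime."""

    def __init__(self):
        self.pending = deque()

    def push(self, commit):
        self.pending.append(commit)

    def next_build(self):
        if not self.pending:
            return None
        batch = list(self.pending)  # everything waiting is built together
        self.pending.clear()
        return {"revision": batch[-1], "includes": batch}

queue = CoalescingBuildQueue()
for commit in ["a1f9", "b2c3", "c4d5"]:  # three commits land while a build runs
    queue.push(commit)

print(queue.next_build())
# {'revision': 'c4d5', 'includes': ['a1f9', 'b2c3', 'c4d5']}
```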

Broken builds

Builds break. Of course they shouldn’t, but nobody on a large team project can run every conceivable test exhaustively on every check-in. If you disagree, consider sustained load tests that need hours of running time to produce a valid result (known as soak time). There will always be some kind of test that runs on your code only after you have pushed your changes to a central repository.

Depending on the details of your branching strategy, responding to a broken build can be an urgent matter. If it can affect other people’s work, you must respond and make sure you don’t block them.

Having a culture of responding to and fixing broken builds promptly, or, as Bamboo Developer Esther Asenjo likes to put it, a Culture of Green, is not just a matter of professional pride. It’s a systematic engineering response that enables and promotes quality at high speed. Check out Esther’s talk on this from AtlasCamp.

Allowing builds to remain red for hours or even days can be corrosive to a team’s work. Red and green, though the pairing can be culturally exclusive and challenging for people with color blindness, at least draw an unambiguous distinction between what is working and what is broken. It’s binary. If you don’t have a binary rule, what is your rule? You must get this straight. Ultimately everyone must know the difference between success and failure at every level of detail. That is what your builds should give you. You shouldn’t get into the game of “shades of gray” between red and green, because it robs your team of a clear definition of success and, with it, any hope of achieving it.

Successfully delivering measurable value

This is the real meaning of Continuous Integration. It’s not the software equivalent of “infinite bureaucracy”. It’s much more like the idea of “ultimate clarity” or “irrefutable success”. Hmmm… maybe I’m on to something. Maybe I should start coining some terms of my own!

Learn more about continuous integration and continuous delivery pipelines.