Build-as-code

I have been using Jenkins for a couple of years now. Starting at the University of Konstanz, it gave me the ability not only to monitor student projects but also to release them, well tested, as part of open-source software like jSCSI, Treetank and Perfidix.

While configuring Jenkins jobs over the GUI works fine for small projects, for large-scale infrastructures with commonly defined workflows it just doesn’t scale. Furthermore, in complex architectures, storing the recipe to build and deploy the software becomes as essential as the software itself.

One should really store the job itself side by side with the source code of the program.

Possibilities to generate builds from code

Jenkins offers, as far as I know, three different ways to generate builds automatically (besides coding a Selenium test case that emulates the clicks on the GUI).

config.xml

The first and straightforward way is to use the internal representation of Jenkins jobs, the config.xml: just create a “valid” config.xml locally (or automatically). Jenkins offers different interfaces to inject these config.xmls: REST, jenkins-cli or even the filesystem of the instance:

# via REST (INSTANCE and JOBNAME are placeholders)
curl -X POST --header "Content-Type:application/xml" -d @config.xml INSTANCE/createItem?name=JOBNAME
# via jenkins-cli, download it first from INSTANCE/cli/
cat config.xml | java -jar jenkins-cli.jar -s INSTANCE create-job JOBNAME
# via the filesystem of the instance
cp config.xml /var/jenkins_home/jobs/JOBNAME/config.xml
# ... and reload the configuration from disk afterwards
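
A convenient way to get a starting point for such a file is to dump the config.xml of an existing job. A minimal sketch for the Jenkins script console, assuming an existing job called JOBNAME:

// Sketch for the Jenkins script console: print the raw config.xml of an
// existing job to use as a template ("JOBNAME" is a placeholder).
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName('JOBNAME')
println job.configFile.asString()

// After copying a new config.xml under $JENKINS_HOME/jobs/,
// this picks it up without a restart:
Jenkins.instance.reload()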

Even though there are different ways to publish config.xmls to the instance, there are several severe problems:

  • There is no XSD behind it, since its structure depends on the plugins installed and used in the jobs. Pushing an invalid config.xml to the instance can lead to weird results like partially generated jobs and views. One example is the generation of views, where the elements in a list must be sorted alphabetically.
  • The lifecycle of the jobs is not covered. Creating a continuous job is as easy as creating a backup copy before experimenting with it, but cleaning up unused jobs is unfortunately not as popular (see the sketch after this list).
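
To illustrate the cleanup problem, here is a hedged sketch for the script console that merely lists jobs whose last build is old; the 90-day threshold is an arbitrary assumption and the deletion itself is left commented out:

// Sketch for the script console: list jobs that have not built in 90 days
// as cleanup candidates. The 90-day threshold is an arbitrary assumption.
import jenkins.model.Jenkins
import hudson.model.Job

def cutoff = System.currentTimeMillis() - 90L * 24 * 60 * 60 * 1000
Jenkins.instance.allItems(Job).each { job ->
    def last = job.lastBuild
    if (last == null || last.timeInMillis < cutoff) {
        println "stale: ${job.fullName}"
        // job.delete()  // uncomment to actually remove the job
    }
}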

DSL-Plugin

Defining jobs as Groovy scripts has been common for a couple of years now thanks to the well-established DSL-Plugin. One writes a script, defines it as a seed job, and Jenkins generates, updates and deletes the related jobs. Combined with source control, builds are stored as code alongside the other sources and become part of the source-code lifecycle, including committing, branching and releasing. Groovy scripts can also be validated syntactically much better than schema-less XML, and the API is well documented.

An example of a standard Maven continuous job, building all branches of a Git repository, is given below:

mavenJob("test.continous") {
    description('Building all branches with mvn test')
    wrappers {
        timestamps()
        colorizeOutput()
    }
    jdk('Oracle JDK 1.8 64-Bit')
    scm {
        git {
            branch('**')
            remote {
                name('origin')
                url(URL)
            }
        }
    }
    triggers {
        scm('H/5 * * * *')
    }
    goals('clean test')
    rootPOM(pom.xml)
}

For nightly and release jobs, adjacent jobs would need to be generated, for example as sketched below.
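
A minimal sketch of such an adjacent nightly job, emitted by the same seed script; URL and the JDK name are the same placeholders and assumptions as in the example above:

// Sketch of an adjacent nightly job; URL is a placeholder as above.
mavenJob("test.nightly") {
    description('Nightly build running the full verification')
    jdk('Oracle JDK 1.8 64-Bit')
    scm {
        git {
            branch('master')
            remote {
                name('origin')
                url(URL)
            }
        }
    }
    triggers {
        cron('H 2 * * *') // once every night
    }
    goals('clean verify')
}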

A great tutorial is available over here.

Pipelines

With the introduction of Jenkins 2.0, Jenkins Pipelines came up. But why another plugin? Isn’t the DSL powerful enough to handle all upcoming tasks of automated job management with ease?

Pipelines do not offer a way to manage jobs; they offer a way to create pipelines. Instead of focusing on the trigger (time, SCM change, manual), pipelines focus on a workflow for handling the source code.

An example of handling jSCSI is given below:

#!groovy

pipeline {
    agent any
    tools {
        maven 'Maven 3.5.0'
        jdk 'jdk8'
    }
    stages {
        stage('Unit Tests') {
            steps {
                sh 'mvn -B test'
                junit '**/target/surefire-reports/junitreports/*.xml'
            }
        }
        stage('When on master, Deploy Snapshot and analyze for sonar') {
            when {
                branch 'master'
            }
            steps {
                sh 'mvn -B -DskipTests=true clean deploy'
                withSonarQubeEnv('codequality.toolsmith.ch') {
                    sh 'mvn -B org.jacoco:jacoco-maven-plugin:prepare-agent test'
                    sh 'mvn -B sonar:sonar'
                }
            }
        }
    }
}

Based on the branch currently built, a Sonar analysis is performed; the TestNG tests are executed in any case.

Two of a kind

Pipelines can be written in two different styles: Scripted Pipelines or Declarative Pipelines. The example above is written in the declarative style: information about trigger and SCM is inherited from Jenkins itself, making the pipeline lean and easily readable. Declarative Pipelines are less directly extensible; missing features should be imported as a Shared Library. Scripted Pipelines are more powerful but also more difficult to read: written in plain Groovy, they directly offer Turing completeness.
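
For comparison, a minimal scripted sketch of the unit-test stage from the example above, assuming the same tool names:

// Scripted-style sketch of the unit-test stage; 'Maven 3.5.0' is the
// tool name assumed in the declarative example above.
node {
    checkout scm
    def mvnHome = tool 'Maven 3.5.0'
    stage('Unit Tests') {
        sh "${mvnHome}/bin/mvn -B test"
        junit '**/target/surefire-reports/junitreports/*.xml'
    }
}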

What to choose

Using pipelines instead of classical jobs defined for fixed purposes requires a major shift in thinking. We always build the code; the way it is built must be defined either per branch or with the help of additional configuration files in the source. Once you adapt to the pipelining concept, you stop trying to generate different pipelines for different purposes and embrace the built-in functionality of pipelines, like the lean build-as-code concept as well as the nice visualization in Blue Ocean.
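
As a hedged example of such an additional configuration file in the source: assuming the Pipeline Utility Steps plugin and a hypothetical build.properties checked in next to the code, a pipeline could branch on it like this:

// Sketch, assuming the Pipeline Utility Steps plugin and a hypothetical
// build.properties file in the repository root with a line "deploy=true".
def props = readProperties file: 'build.properties'
if (props['deploy'] == 'true') {
    sh 'mvn -B -DskipTests=true clean deploy'
}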

Links