Jenkins: Docker Build Pipelines


Curse of the Polyglot #

Most Jenkins environments I’ve seen are very homogeneous - a Java shop, a Ruby shop, etc. My projects are anything but. Looking at my machine I’ve got:

  • Java 6, 7, 8
  • Python 2.7, 3.5
  • Ruby 2.2.3

Keeping track of each of these versions on the Jenkins server would be a pain. It's easier to containerize the build environment.

Installing Docker #

The pipeline itself will build and start our build container, so the only prerequisite is that Docker is available on the Jenkins server. Docker not installed? See Installing Docker.
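
On a Debian-based host, the install can be as short as Docker's convenience script plus group membership for the Jenkins service user (a sketch, assuming the service runs as the jenkins user):

# install Docker via the official convenience script
curl -fsSL https://get.docker.com | sh
# allow the Jenkins user to talk to the Docker daemon
sudo usermod -aG docker jenkins
# restart Jenkins so the group change takes effect
sudo systemctl restart jenkins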

Once Docker is installed on your Jenkins instance, let's configure our project.

Dockerfile #

This Dockerfile builds a Jekyll project:

# Ruby base image matching the project's target version
FROM ruby:2.3

# packages some Jekyll plugins expect (locale data, libmagic)
RUN apt-get update >/dev/null \
    && apt-get install -y locales libmagic-dev >/dev/null

# copy the project in and install its gems
RUN mkdir -p /opt/project
ADD . /opt/project/
WORKDIR /opt/project
RUN bundle install --jobs $(nproc) --path=/cache/bundler

# compile the site once at image build time
RUN bundle exec jekyll build
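
Before wiring this into Jenkins, it's worth checking that the image builds cleanly on its own, using the same tag the pipeline below expects:

# build the image locally with the tag the pipeline uses
docker build -t domain.name:snapshot .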

Every commit we make to the master branch will trigger a pipeline build.

Pipeline #

Pipelines are new to Jenkins 2.x and are based on a Groovy DSL.

With this script we're performing the following steps:

  • Checkout
  • Build
  • Lint
  • Deploy

Jenkins build pipeline: checkout, build, lint, and deploy
node {
    env.AWS_BUCKET = "my-s3-bucket"
    stage 'Checkout'
    git credentialsId: 'jenkins credentials', url: 'git@github.com:repo.git', branch: 'master'
    withCredentials([[$class: 'StringBinding', credentialsId: 'AWS_ACCESS_KEY', variable: 'AWS_ACCESS_KEY'], [$class: 'StringBinding', credentialsId: 'AWS_ACCESS_SECRET', variable: 'AWS_ACCESS_SECRET'], [$class: 'StringBinding', credentialsId: 'AWS_REGION', variable: 'AWS_REGION']]) {
        stage 'Build'
        def snapshot = docker.build 'domain.name:snapshot'
        snapshot.inside('-e "AWS_ACCESS_KEY=$AWS_ACCESS_KEY" -e "AWS_ACCESS_SECRET=$AWS_ACCESS_SECRET" -e "AWS_REGION=$AWS_REGION" -e "AWS_BUCKET=$AWS_BUCKET"') {
            stage 'Lint'
            sh 'cd /opt/project && bundle exec htmlproofer --alt-ignore /amazon-adsystem/ _site'
            stage 'Deploy'
            sh 'cd /opt/project && bundle exec deploy_jekyll_s3 --verbose deploy'
            echo 'deploying blog to S3'
        }
    }
}

Let's break this down:

git: check out the master branch of the repository from GitHub over SSH

withCredentials: part of the Credentials Binding Plugin, this injects secrets and credentials stored on the Jenkins server as environment variables. The Pipeline Syntax wizard helps generate this block.

inside: since Docker creates an isolated container, it won't automatically inherit Jenkins environment variables. This is solved by passing them into the Docker container explicitly: '-e "AWS_ACCESS_KEY=$AWS_ACCESS_KEY" -e "AWS_ACCESS_SECRET=$AWS_ACCESS_SECRET" -e "AWS_REGION=$AWS_REGION" -e "AWS_BUCKET=$AWS_BUCKET"'

This string accepts any other docker run options as well: see the docker run reference.
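
For example (purely illustrative, not part of the original pipeline), resource limits are passed the same way:

// constrain the build container to 1 GB of memory
snapshot.inside('--memory=1g -e "AWS_BUCKET=$AWS_BUCKET"') {
    // build steps run inside the constrained container
}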

Inside the Docker execution block we want to lint and deploy our Jekyll site. Note the cd /opt/project before each step.

The WORKDIR specified in the Dockerfile doesn't carry over to the Pipeline's inside block, and the project and its bundler configuration live in /opt/project, so we need to cd there in each build step.
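
If the repeated cd bothers you, a small helper keeps the steps tidy (a sketch; inProject is a hypothetical name, not a built-in Pipeline step):

// hypothetical helper: run a shell command from the project directory
def inProject(String cmd) {
    sh "cd /opt/project && ${cmd}"
}

// usage inside the container block
inProject('bundle exec htmlproofer --alt-ignore /amazon-adsystem/ _site')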

Running the build shows the steps executing, but the deploy fails with access denied errors from the S3 bucket. This is because our Jenkins IAM user doesn't have the correct permissions.
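
Before touching the policy, it's worth confirming which IAM identity the keys actually belong to (assuming the AWS CLI is available and the keys are exported under its standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY names):

# confirm the IAM identity behind the access keys
aws sts get-caller-identity
# check what the bucket currently allows
aws s3 ls s3://my-s3-bucket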

Configure AWS permissions #

To deploy our Jekyll project to S3 we need to configure an IAM role and allow uploads from our Jenkins server to S3. Here’s the S3 bucket policy:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "AddPerm",
			"Effect": "Allow",
			"Principal": "*",
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::my-s3-bucket/*",
			"Condition": {
				"StringNotLike": {
					"aws:Referer": [
						"https://my-s3-bucket.s3-website-us-east-1.amazonaws.com/*",
						"http://my-s3-bucket.s3-website-us-east-1.amazonaws.com/*"
					]
				}
			}
		},
		{
			"Sid": "AddPerm",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::987737556516:user/jenkins"
			},
			"Action": [
				"s3:PutObject",
				"s3:PutObjectAcl",
				"s3:GetObject",
				"s3:DeleteObject"
			],
			"Resource": "arn:aws:s3:::my-s3-bucket/*"
		},
		{
			"Sid": "AddPerm",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::987737556516:user/jenkins"
			},
			"Action": "s3:ListBucket",
			"Resource": "arn:aws:s3:::my-s3-bucket"
		}
	]
}

This allows the Jenkins IAM user (don't use your root credentials) to upload, sync, and list objects in the S3 bucket. The first statement grants public read access except to requests whose referer is the bucket's own website endpoint, steering readers through CloudFront rather than the raw S3 URL. Note that each statement needs a unique Sid: S3 rejects bucket policies with duplicate statement IDs.
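
To attach the policy from the command line (assuming it's saved as policy.json):

# apply the bucket policy to the S3 bucket
aws s3api put-bucket-policy --bucket my-s3-bucket --policy file://policy.json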

Ship it! #

And there you have it - deploying a Jekyll blog to S3 via GitHub, Jenkins, and Docker.