<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
<title type="text">CI/CD Life</title>
<generator uri="https://github.com/mojombo/jekyll">Jekyll</generator>
<link rel="self" type="application/atom+xml" href="https://cicd.life/feed.xml" />
<link rel="alternate" type="text/html" href="https://cicd.life" />
<updated>2025-03-20T19:00:31-04:00</updated>
<id>https://cicd.life/</id>
<author>
  <name>Matt Bajor</name>
  <uri>https://cicd.life/</uri>
  <email>matt@notevenremotelydorky.com</email>
</author>


<entry>
  <title type="html"><![CDATA[Update from Matt]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/brief-update-from-matt/" />
  <id>https://cicd.life/brief-update-from-matt</id>
  <published>2022-06-28T00:20:00-04:00</published>
  <updated>2022-06-28T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;brief-update-from-matt-bajor--unauthorized&quot;&gt;Brief update from Matt Bajor / Unauthorized&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/moonrock_mountain_gallery.jpg&quot; alt=&quot;Moonrock Mountain&quot; /&gt;
&lt;em&gt;Image of Moonrock Mountain by Matt Bajor&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hi everyone!
As you may have noticed, there has not been much movement here on CICD 4 Life. I’ve been working on various other projects,
including a venture into building and touring large-scale art. While I am still working in DevOps (at Workday), my general
direction is shifting towards art and sculpture.&lt;/p&gt;

&lt;p&gt;However, until we get there I intend to keep updating here and queue up a few new posts on Kubernetes, GKE, and Jenkins.&lt;/p&gt;

&lt;p&gt;Until then please check out my new website Winterworks: &lt;a href=&quot;https://winter.works/&quot;&gt;Fine Art Firepits and Dichroic Sculpture&lt;/a&gt;&lt;/p&gt;


  &lt;p&gt;&lt;a href=&quot;https://cicd.life/brief-update-from-matt/&quot;&gt;Update from Matt&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on June 28, 2022.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Containerizing local commands]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/containerize-local-commands/" />
  <id>https://cicd.life/containerize-local-commands</id>
  <published>2018-08-29T00:20:00-04:00</published>
  <updated>2018-08-29T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;running-yarn-from-a-container&quot;&gt;Running &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yarn&lt;/code&gt; from a container&lt;/h1&gt;
&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/yarn-logo.png&quot; alt=&quot;Yarn&quot; /&gt;
&lt;em&gt;Image from https://yarnpkg.com/en/&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Within a continuous integration / continuous delivery system, one of the hardest
problems to deal with as the system and its teams grow is build-time dependency management.
At first, one version of a few dependencies is totally manageable in traditional ways:
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yum install&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apk add&lt;/code&gt; is a perfectly good solution for a single team or project.&lt;/p&gt;

&lt;p&gt;Over time, as the CI system grows to support more and more teams, this solution
starts to show weaknesses. Anyone who has had to install &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rvm&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;nvm&lt;/code&gt;, or any of the
other environment-based tool version managers within a CI system knows the pain of
getting things working reliably. In addition, distributing the configuration
for these tools can be just as challenging, especially when dealing with multiple
versions of a tool.&lt;/p&gt;

&lt;p&gt;One way to help address these complications (in addition to a few others) is to
contain your tooling in Docker containers. This gives a few advantages over traditional
package managers:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The software is entirely contained within Docker and can be installed or removed
without fuss by any job that needs it&lt;/li&gt;
  &lt;li&gt;Containers can (and should) be built in house so that, at a minimum, they are reproducibly
built, and can be fully audited and tracked within the change management tool if required&lt;/li&gt;
  &lt;li&gt;Multiple versions and configurations can exist side by side without any interference&lt;/li&gt;
  &lt;li&gt;Containers can be shared between teams and shell wrappers distributed via Homebrew&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;how-can-we-run-yarn-in-a-container-for-a-local-project&quot;&gt;How can we run yarn in a container for a local project?&lt;/h2&gt;

&lt;p&gt;TL;DR: Docker volumes&lt;/p&gt;

&lt;p&gt;The way the process works is similar to other Docker workflows:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;We build a Docker image that has whatever tool we need&lt;/li&gt;
  &lt;li&gt;Then any configuration (no secrets!) is layered into the image&lt;/li&gt;
  &lt;li&gt;When we run the container, we mount our current working directory into it&lt;/li&gt;
  &lt;li&gt;The command is executed in the running container, but acting on the local directory
that is volume mounted&lt;/li&gt;
  &lt;li&gt;The command finishes and all output is captured in our local directory&lt;/li&gt;
  &lt;li&gt;The container exits and since we’ve used &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--rm&lt;/code&gt;, is completely gone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives us the effects of the tool without modifying other parts of the filesystem
as one does in a normal software installation. One issue with this process as stated
is that the command to run the container can be a bit unwieldy. In order to simplify
things in this regard we must sacrifice the ‘no changes to the filesystem’ benefit by
shipping a shell wrapper. Nonetheless, shipping a shell script, or allowing others to
create their own, is still a good bit easier than full package / config management.&lt;/p&gt;

&lt;p&gt;Anyways, now that we have an idea of how it will work, let’s take a look at how it does
work.&lt;/p&gt;

&lt;h3 id=&quot;step-1-build-a-docker-image-for-yarn&quot;&gt;Step 1: Build a Docker image for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yarn&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;The first item we need is a Docker image that contains our tooling. In this case
we’re going to install &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yarn&lt;/code&gt;, a common dependency manager for Node, similar to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;npm&lt;/code&gt;.
Since Yarn depends upon Node, we can create a container that has the specific
version of each that is needed by the team using it. In this case we will install
the latest Node and Yarn packages, but pinning them to other versions would be a
fairly simple task.&lt;/p&gt;

&lt;p&gt;Let’s take a look at our Dockerfile here:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# We&apos;re using alpine for simplicity. We could make it smaller&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# by downloading the tarball and adding to scratch with a multi-stage&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Docker build&lt;/span&gt;
FROM alpine

&lt;span class=&quot;c&quot;&gt;# Install yarn which should pull in node as a dependency&lt;/span&gt;
RUN apk add &lt;span class=&quot;nt&quot;&gt;--update&lt;/span&gt; yarn

&lt;span class=&quot;c&quot;&gt;# We will configure the cache to use our volume mounted workspace&lt;/span&gt;
RUN yarn config &lt;span class=&quot;nb&quot;&gt;set &lt;/span&gt;cache-folder /workspace/.yarn-cache

&lt;span class=&quot;c&quot;&gt;# Using an entrypoint allows us to pass in any args we need&lt;/span&gt;
ENTRYPOINT &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;yarn&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This super simple Dockerfile will give us a container that has &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;yarn&lt;/code&gt;
as an entrypoint. We can now build the image using a command like so:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;docker build &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; technolog/run-yarn .&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This will give us the container named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;technolog/run-yarn&lt;/code&gt; that we can test
with a command like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;docker run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-ti&lt;/span&gt; technolog/run-yarn &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# 1.3.2&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Excellent, yarn works! However, in this configuration we have no way
to have it operate on a local &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;package.json&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;node_modules&lt;/code&gt;. Everything
is still in the container. We will fix that by using a Docker volume mount.&lt;/p&gt;

&lt;h3 id=&quot;step-2-volume-mount-the-current-directory-into-the-docker-container&quot;&gt;Step 2: Volume mount the current directory into the Docker container&lt;/h3&gt;
&lt;p&gt;If we were to just run the container as in the example above, nothing
would happen outside of the container. What we need to do is make
the current directory accessible inside the container with a Docker
volume. This is a simple task and looks something like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;docker run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-ti&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;:/workspace&quot;&lt;/span&gt; technolog/run-yarn init
&lt;span class=&quot;c&quot;&gt;# yarn init v1.3.2&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question name: left-left-pad&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question version (1.0.0): 10.0.0&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question description: A left pad for the left pad&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question entry point (index.js):&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question repository url: https://github.com/technolo-g/left-left-pad&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question author: Not me!&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question license (MIT):&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# question private: no&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# success Saved package.json&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Done in 76.15s.&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; ./package.json
&lt;span class=&quot;c&quot;&gt;# {&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;name&quot;: &quot;left-left-pad&quot;,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;version&quot;: &quot;10.0.0&quot;,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;description&quot;: &quot;A left pad for the left pad&quot;,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;main&quot;: &quot;index.js&quot;,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;repository&quot;: &quot;https://github.com/technolo-g/left-left-pad&quot;,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;author&quot;: &quot;Not me!&quot;,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   &quot;license&quot;: &quot;MIT&quot;&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# }&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Here we can see that the file was created locally (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;./&lt;/code&gt;) and contains all of
the info we provided to yarn running in the Docker container. Pretty neat! One
thing you may notice is that the Docker command is growing a bit. This exact
command (or the one you create) doesn’t exactly roll off the fingers, so it can be
hard to get everyone typing the same thing. There are a few solutions to this
minor issue, and one of them is a bash alias like so:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;alias &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;yarny&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;docker run --rm -ti -v &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;\$&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;(pwd):/workspace&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; technolog/run-yarn&quot;&lt;/span&gt;
which yarny
&lt;span class=&quot;c&quot;&gt;# yarny: aliased to docker run --rm -ti -v &quot;$(pwd):/workspace&quot; technolog/run-yarn&lt;/span&gt;
yarny &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# 1.3.2&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;If we are using this command in a lot of places, and especially within the build
system, a slightly more robust wrapper may be required. Dealing with
the shell and its intricacies is sometimes best left to ZSH developers, and a script is
a less ambiguous approach. What I mean by that is a script that encapsulates the
Docker command and is then installed on the user’s or machine’s path. Let’s take
a look at one for yarn:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash -el&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# -e Makes sure we exit on failures&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# -l Gives us a login shell&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Define a function that replaces our command&lt;/span&gt;
yarn&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# docker run&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# Remove the container when it exits&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# Mount ./ into the container at /workspace&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# Allow interactive shell&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# Provision a TTY&lt;/span&gt;
    &lt;span class=&quot;c&quot;&gt;# Specify the container and pass any args&lt;/span&gt;
  docker run &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--volume&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;:/workspace&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--interactive&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--tty&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    technolog/run-yarn &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$@&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Run the function, passing in all args&lt;/span&gt;
yarn &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$@&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now if we make this file executable and run it, we should have a fully working
yarn installation within a container:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x yarn
./yarn &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# 1.3.2&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;step-3-distribute-the-software&quot;&gt;Step 3: Distribute the software&lt;/h3&gt;
&lt;p&gt;The final step is getting these commands to be available. The beauty of this
solution is that there are many ways to distribute this command. The script can
live in a variety of places, depending on your needs:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;In the code repo itself for a smaller team&lt;/li&gt;
  &lt;li&gt;Included and added to the path of the build runner&lt;/li&gt;
  &lt;li&gt;Distributed locally with Homebrew&lt;/li&gt;
  &lt;li&gt;Kept in a separate repo that is added to the path of builds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It depends on your environment, but I prefer to make the scripts available
by keeping them all in one repo, cloning that repo, and adding it to the
path on a build. This allows the scripts to change with the build and
versions to be pinned via git tags if needed. Every team can include
the scripts they need and use the version that works for them if they
have to pin.&lt;/p&gt;
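To make the one-repo approach concrete, here is a rough sketch of putting a tools checkout on the PATH at the start of a build. The repo name, wrapper contents, and directory layout are all hypothetical; in a real build the wrapper would come from a repo cloned at a pinned tag rather than being written inline.

```shell
#!/bin/bash -e
# Hypothetical sketch: put wrapper scripts from a shared tools checkout on the
# PATH for the duration of a build. Normally "build-tools" would be cloned at a
# pinned tag; here the wrapper is created inline so the sketch is self-contained.
tools_dir="${WORKSPACE:-$PWD}/build-tools"
mkdir -p "$tools_dir/bin"

# Stand-in for the yarn wrapper that would live in the tools repo
printf '#!/bin/bash\nexec docker run --rm -i -v "$(pwd):/workspace" technolog/run-yarn "$@"\n' \
  > "$tools_dir/bin/yarn"
chmod +x "$tools_dir/bin/yarn"

# Prepend to PATH so `yarn` now resolves to the containerized wrapper
export PATH="$tools_dir/bin:$PATH"
command -v yarn
```

Pinning then becomes a matter of checking out the tag of the tools repo that a given team has validated.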

&lt;h3 id=&quot;step-4-&quot;&gt;Step 4: ….&lt;/h3&gt;
&lt;p&gt;Run the build with whatever tools are required&lt;/p&gt;

&lt;h3 id=&quot;step-5-cleanup&quot;&gt;Step 5: Cleanup&lt;/h3&gt;
&lt;p&gt;Now that we’re done with the tools, let’s wipe them out completely. We will
do that using Docker’s &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;prune&lt;/code&gt; command:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;docker &lt;span class=&quot;nb&quot;&gt;rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-fv&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;docker ps &lt;span class=&quot;nt&quot;&gt;-qa&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;||&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;INFO: No containers to remove&quot;&lt;/span&gt;
docker system prune &lt;span class=&quot;nt&quot;&gt;--all&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--force&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--volumes&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This will kill any running containers and then prune (delete):&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Any stopped containers&lt;/li&gt;
  &lt;li&gt;Any unused networks&lt;/li&gt;
  &lt;li&gt;Any unused images, not just dangling ones (because of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--all&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;Any unused volumes (because of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--volumes&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;Any build cache&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pretty much anything we would worry about interfering with the next build.
If there are containers (such as the build drone itself) that must be kept alive,
the command needs to exclude them (for example, by filtering on a label), but the
idea is the same.&lt;/p&gt;

&lt;h2 id=&quot;enhancements&quot;&gt;Enhancements&lt;/h2&gt;
&lt;h3 id=&quot;buildsh&quot;&gt;build.sh&lt;/h3&gt;
&lt;p&gt;Building the Docker image repeatably and consistently is key to this whole
approach. Changing how the container works depending on who builds it will
lead to the same pitfalls of bad dependency management: mainly broken builds.&lt;/p&gt;

&lt;p&gt;Here is an example &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build.sh&lt;/code&gt; that I would use for the above container:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash -el&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# If PUSH is set to true, push the image&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;push&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PUSH&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Set our image name here for consistency&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;technolog/run-yarn&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Run the build, adding any passed in params like --no-cache&lt;/span&gt;
docker build &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$@&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$image&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;dirname&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$0&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$push&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;true&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
  &lt;/span&gt;docker push &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$image&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;versioning&quot;&gt;Versioning&lt;/h3&gt;
&lt;p&gt;Once teams begin using this framework, you’ll find each develops a set
of version requirements that may not match the other teams’. When
you find yourself in this scenario, it is time to begin versioning the images
as well. While &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;:latest&lt;/code&gt; should probably always point at the newest version,
it’s also reasonable to create &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;:vX.X&lt;/code&gt; tags as well so teams can pin to
specific versions if desired.&lt;/p&gt;

&lt;p&gt;In order to do this, you can add a Docker build argument or environment
variable to install a specific version of a piece of software and use
that version to tag the image as well. I am going to leave this as an exercise
for the user, but the steps would be:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Read the version in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build.sh&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Pass that version as a &lt;a href=&quot;https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg&quot;&gt;build arg&lt;/a&gt;
to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker build&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;In the Dockerfile, read that ARG and install a specific version of the software&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This becomes a bit more complex when sharing between teams and requiring different
versions of both node and yarn, but it can be managed with a smart versioning scheme.&lt;/p&gt;
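As a sketch of how the build-arg step might look, here is a hypothetical variant of the earlier Dockerfile that accepts a pinned version. The exact pin only works if that package revision is still available in the Alpine repository, so the version shown is illustrative:

```dockerfile
# Illustrative version pin -- the revision must exist in the Alpine package repo.
# Build with: docker build --build-arg YARN_VERSION=1.3.2-r0 -t technolog/run-yarn:v1.3.2 .
FROM alpine

# ARG must be declared before it is first used
ARG YARN_VERSION

# Install the requested version of yarn instead of whatever is latest
RUN apk add --update "yarn=${YARN_VERSION}"

# Configure the cache to use our volume-mounted workspace
RUN yarn config set cache-folder /workspace/.yarn-cache

ENTRYPOINT ["yarn"]
```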

&lt;h2 id=&quot;disclaimer&quot;&gt;Disclaimer!&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;This methodology does not encourage just pulling random images from Docker hub and running
them!&lt;/strong&gt; You must &lt;strong&gt;always&lt;/strong&gt; use your good judgement when deciding what software to run
in your environment. As you see here, we have used the trusted Alpine Docker image and
then installed yarn from trusted Alpine packages ourselves. We did not rely on a random
Docker image found on the hub, install extra software that was not required, or
execute untrusted commands (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl | sudo bash&lt;/code&gt;). This means as long as we trust Alpine,
we should be able to trust this image, within reason. As my Mum would say:
&lt;strong&gt;Downloading unknown or unsigned binaries from the Internet will kill you!&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This is a powerful and flexible technique for managing build time dependencies within
your continuous integration / continuous delivery system. It is a bit overkill if you
have a single dependency and can change it without affecting anything unintended. However,
if you, like me, run many versions of software to support many teams’ builds, I think
you’ll find this to be a pretty simple and potentially elegant solution.&lt;/p&gt;


  &lt;p&gt;&lt;a href=&quot;https://cicd.life/containerize-local-commands/&quot;&gt;Containerizing local commands&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on August 29, 2018.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Making HTTP GET and POST Requests to Docker Hub in a Jenkinsfile]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/making-http-api-calls-to-dockerhub-from-jenkinsfile/" />
  <id>https://cicd.life/making-http-api-calls-to-dockerhub-from-jenkinsfile</id>
  <published>2017-08-13T00:20:00-04:00</published>
  <updated>2017-08-13T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;&lt;strong&gt;NOTE: If you intend on using techniques such as these and allowing such wide
open functionality in Jenkins, I recommend that you run your entire Jenkins
build system without outbound internet access by default. Allow access from the
build system network segment only to approved endpoints while dropping the rest
of the traffic. This will allow you to use potentially dangerous, but extremely
powerful scripts while maintaining a high level of security in the system.&lt;/strong&gt;&lt;/p&gt;

&lt;h1 id=&quot;api-calls&quot;&gt;API Calls&lt;/h1&gt;

&lt;p&gt;&lt;img src=&quot;/images/JenkinsfileDockerHubAPI.png&quot; alt=&quot;Jenkinsfile calling Docker API&quot; class=&quot;center-image&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Making HTTP calls in a Jenkinsfile can prove tricky, as everything has to remain
serializable or else Jenkins cannot keep track of the state when it needs to restart
a pipeline. If you’ve ever seen Jenkins’ &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;java.io.NotSerializableException&lt;/code&gt;
in your console output, you know what I mean. There are not too many clear-cut
examples of remote API calls from a Jenkinsfile so I’ve created an example
Jenkinsfile that will talk to the Docker Hub API using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GET&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;POST&lt;/code&gt; to
perform authenticated API calls.&lt;/p&gt;

&lt;h2 id=&quot;explicit-versioning&quot;&gt;Explicit Versioning&lt;/h2&gt;

&lt;p&gt;The Docker image build process leaves a good amount to be desired when it comes
to versioning. Most workflows depend on the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;:latest&lt;/code&gt; tag which is very
ambiguous and can lead to problems being swallowed within your build system. In
order to maintain a higher level of determinism and auditability, you may
consider creating your own versioning scheme for Docker images.&lt;/p&gt;

&lt;p&gt;For instance, a version of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;2017.2.1929&lt;/code&gt; (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;year&amp;gt;.&amp;lt;week&amp;gt;.&amp;lt;build #&amp;gt;&lt;/code&gt;) can express
much more information than a simple &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;latest&lt;/code&gt;. Having this information available
for audits or tracking down when a failure was introduced can be invaluable, but
there is no built-in way to do Docker versioning in Jenkins. One must rely
on an external system (such as Docker Hub or their internal registry) to keep
track of versions and use this system of record when promoting builds.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;/images/Versioning.png&quot; alt=&quot;Versioning Scheme&quot; class=&quot;center-image&quot; /&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;This versioning scheme we are using is not based on Semver&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, but it does
encode the information we need to keep versions in lockstep, and it
will always increase in value. Even if the build number is reset, the year +
week prefix will keep the versions from ever being lower than they were the day before.
Version your artifacts however works for your release process, but please make sure of
these two things:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The version string never duplicates&lt;/li&gt;
  &lt;li&gt;The version number never decreases&lt;/li&gt;
&lt;/ul&gt;
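As a quick illustration of such a scheme, the version string could be assembled in shell like this. BUILD_NUMBER is assumed to come from the CI system and is defaulted here only so the snippet runs on its own:

```shell
#!/bin/bash -e
# Sketch of a <year>.<week>.<build #> version string.
# BUILD_NUMBER would normally be provided by Jenkins.
build_number="${BUILD_NUMBER:-1929}"
year="$(date +%Y)"
week="$((10#$(date +%V)))"   # force base 10 to strip the ISO week's leading zero
version="${year}.${week}.${build_number}"
echo "$version"
```

Because the year and week components only move forward, later weeks always sort after earlier ones even when the build counter is reset.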

&lt;h2 id=&quot;interacting-with-the-docker-hub-api-in-a-jenkinsfile&quot;&gt;Interacting with the Docker Hub API in a Jenkinsfile&lt;/h2&gt;

&lt;p&gt;For this example we are going to connect to the Docker Hub REST API in order to
retrieve some tags and promote a build to RC. This type of workflow would be
implemented in a release job in which a previously built Docker image is being
promoted to a release candidate. The steps we take in the Jenkinsfile are:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/docker_registry_logo.png&quot; alt=&quot;Docker Registry&quot; class=&quot;image-right&quot; height=&quot;250px&quot; width=&quot;250px&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Provision a node
    &lt;ul&gt;
      &lt;li&gt;Stage 1
        &lt;ul&gt;
          &lt;li&gt;Make an HTTP POST request to the Docker Hub to get an auth token&lt;/li&gt;
          &lt;li&gt;Use the token to fetch the list of tags on an image&lt;/li&gt;
          &lt;li&gt;Filter through those tags to find a tag for the given build #&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;Stage 2
        &lt;ul&gt;
          &lt;li&gt;Promote (pull, tag, and push) the tag found previously as ${version}-rc&lt;/li&gt;
          &lt;li&gt;Tag and push it as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;latest&lt;/code&gt; to make it generally available&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
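&lt;p&gt;The tag-filtering step above can be sanity-checked on its own before wiring
it into Groovy. The tag list below is made up for illustration; the real list
would come back from the Docker Hub tags endpoint:&lt;/p&gt;

```shell
# Hypothetical tag list, shaped like what the Docker Hub tags API returns
# for an image (the real list is fetched with an authenticated GET).
tags='2017.2.101
2017.2.102
2017.2.103
latest'
build=103

# Keep only the date-based tag that ends in the build number being promoted
tag=$(printf '%s\n' "${tags}" | grep -E "^[0-9]{4}\.[0-9]{1,2}\.${build}\$")
echo "${tag}"
```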

&lt;p&gt;This is a fairly complex-looking Jenkinsfile as it stands, but these functions
can be pulled out into a shared library&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; to simplify it considerably. We’ll
talk about that in another post.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;jenkinsfile&quot;&gt;Jenkinsfile&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-groovy&quot; data-lang=&quot;groovy&quot;&gt;&lt;span class=&quot;err&quot;&gt;#&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;!&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;groovy&lt;/span&gt;
&lt;span class=&quot;cm&quot;&gt;/*
 NOTE: This Jenkinsfile has the following pre-requisites:
   - SECRET (id: docker-hub-user-pass): Username / Password secret containing your
     Docker Hub username and password.
   - ENVIRONMENT: Docker commands should work meaning DOCKER_HOST is set or there
     is access to the socket.
*/&lt;/span&gt;
&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;groovy.json.JsonSlurperClassic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// Required for parseJSON()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// These vars would most likely be set as parameters&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;imageName&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;technolog/serviceone&quot;&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;build&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;103&quot;&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Begin our Scripted Pipeline definition by provisioning a node&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;node&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// First stage sets up version info&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;stage&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;Get Docker Tag from Build Number&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;

        &lt;span class=&quot;c1&quot;&gt;// Expose our user/pass credential as vars&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;withCredentials&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;([&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;usernamePassword&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;credentialsId:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;docker-hub-user-pass&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nl&quot;&gt;passwordVariable:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;pass&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nl&quot;&gt;usernameVariable:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;user&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)])&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;// Generate our auth token&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;token&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getAuthTokenDockerHub&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pass&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

        &lt;span class=&quot;c1&quot;&gt;// Use our auth token to get the tag&lt;/span&gt;
        &lt;span class=&quot;n&quot;&gt;tag&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getTagFromDockerHub&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;imageName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;token&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Example second stage tags version as -release and pushes to latest&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;stage&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;Promote build to RC&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;c1&quot;&gt;// Enclose in try/catch for cleanup&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;try&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;// Define our versions&lt;/span&gt;
            &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;versionImg&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;${imageName}:${tag}&quot;&lt;/span&gt;
            &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;latestImg&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;${imageName}:latest&quot;&lt;/span&gt;

            &lt;span class=&quot;c1&quot;&gt;// Login with our Docker credentials&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;withCredentials&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;([&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;usernamePassword&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;credentialsId:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;docker-hub-user-pass&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nl&quot;&gt;passwordVariable:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;pass&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nl&quot;&gt;usernameVariable:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;user&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)])&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
                &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker login -u${user} -p${pass}&quot;&lt;/span&gt;
            &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

            &lt;span class=&quot;c1&quot;&gt;// Pull, tag, + push the RC&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker pull ${versionImg}&quot;&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker tag ${versionImg} ${versionImg}-rc&quot;&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker push ${versionImg}-rc&quot;&lt;/span&gt;

            &lt;span class=&quot;c1&quot;&gt;// Push the RC to latest as well&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker tag ${versionImg} ${latestImg}&quot;&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker push ${latestImg}&quot;&lt;/span&gt;
        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;catch&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;err&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;// Display errors and set status to failure&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;FAILURE: Caught error: ${err}&quot;&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;currentBuild&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;result&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;FAILURE&quot;&lt;/span&gt;
        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;finally&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
            &lt;span class=&quot;c1&quot;&gt;// Finally perform cleanup&lt;/span&gt;
            &lt;span class=&quot;n&quot;&gt;sh&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;docker system prune -af&apos;&lt;/span&gt;
        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// NOTE: Everything below here could be put into a shared library&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// GET Example&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;// Get a tag from Docker Hub for a given build number&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;getTagFromDockerHub&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;imgName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;authToken&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Generate our URL. Auth is required for private repos&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;url&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;URL&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;https://hub.docker.com/v2/repositories/${imgName}/tags&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parsedJSON&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parseJSON&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getText&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;requestProperties:&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Authorization&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;JWT ${authToken}&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]))&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// We want to find the tag associated with a build&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;// EX: 2017.2.103 or 2016.33.23945&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;regexp&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;^\\d{4}\\.\\d{1,2}\\.${build}\$&quot;&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Iterate over the tags and return the one we want&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;result&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;in&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parsedJSON&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;results&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;findAll&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;regexp&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
            &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;
        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// POST Example&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;// Get an Authentication token from Docker Hub&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;getAuthTokenDockerHub&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pass&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Define our URL and make the connection&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;url&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;URL&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;https://hub.docker.com/v2/users/login/&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;openConnection&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;// Set the connection verb and headers&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setRequestMethod&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;POST&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setRequestProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Content-Type&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;application/json&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;c1&quot;&gt;// Required to send the request body of our POST&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;doOutput&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Create our JSON Authentication string&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;authString&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;{\&quot;username\&quot;: \&quot;${user}\&quot;, \&quot;password\&quot;: \&quot;${pass}\&quot;}&quot;&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Send our request&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;writer&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;OutputStreamWriter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;outputStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;writer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;write&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;authString&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;writer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;flush&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;writer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;close&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;connect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Parse and return the token&lt;/span&gt;
    &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parseJSON&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;conn&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;content&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;text&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;result&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;token&lt;/span&gt;

&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Contain our JsonSlurper in a function to maintain CPS&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;parseJSON&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;json&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;groovy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;json&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;JsonSlurperClassic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;parseText&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;json&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;script-security&quot;&gt;Script Security&lt;/h2&gt;

&lt;p&gt;A script like this assumes a great deal of trust when it is allowed to run.
If you follow the process we use in &lt;a href=&quot;/u1-p1-planning-jenkins-docker-ci-infrastructure/&quot;&gt;Modern Jenkins&lt;/a&gt;,
nothing gets into the build system without peer review, and nobody but
administrators has access to run scripts like this. With the environment locked
down that way, something of this nature can be reasonably safe to use.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/secure_small.png&quot; alt=&quot;Security Alert&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Jenkins has two ways in which Jenkinsfiles (and Groovy in general) can be run:
sandboxed or un-sandboxed. After reading &lt;a href=&quot;http://unethicalblogger.com/2017/08/03/donut-disable-groovy-sandbox.html&quot;&gt;Do not disable the Groovy Sandbox&lt;/a&gt;
by &lt;a href=&quot;http://unethicalblogger.com/about&quot;&gt;rtyler&lt;/a&gt; (&lt;a href=&quot;https://twitter.com/agentdero&quot;&gt;@agentdero&lt;/a&gt; on Twitter),
I will never disable the sandbox again. Instead, we are going to whitelist all
of the required signatures automatically with Groovy. The script we will use is
adapted from one by my friend &lt;a href=&quot;https://github.com/brandon-fryslie/rad-shell&quot;&gt;Brandon Fryslie&lt;/a&gt;
and pre-authorizes all of the method signatures that the pipeline needs in
order to make its API calls.&lt;/p&gt;

&lt;h2 id=&quot;pre-authorizing-jenkins-signatures-with-groovy&quot;&gt;Pre-authorizing Jenkins Signatures with Groovy&lt;/h2&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;url-httplocalhost8080script&quot;&gt;URL: http://localhost:8080/script&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-groovy&quot; data-lang=&quot;groovy&quot;&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval&lt;/span&gt;

&lt;span class=&quot;nf&quot;&gt;println&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;INFO: Whitelisting requirements for Jenkinsfile API Calls&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Create a list of the required signatures&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;requiredSigs&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method groovy.json.JsonSlurperClassic parseText java.lang.String&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.io.Flushable flush&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.io.Writer write java.lang.String&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.lang.AutoCloseable close&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.HttpURLConnection setRequestMethod java.lang.String&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URL openConnection&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URLConnection connect&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URLConnection getContent&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URLConnection getOutputStream&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URLConnection setDoOutput boolean&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URLConnection setRequestProperty java.lang.String java.lang.String&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;new groovy.json.JsonSlurperClassic&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;new java.io.OutputStreamWriter java.io.OutputStream&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods findAll java.lang.String java.lang.String&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getText java.io.InputStream&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getText java.net.URL java.util.Map&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;

    &lt;span class=&quot;c1&quot;&gt;// Signatures already approved which may have introduced a security vulnerability (recommend clearing):&lt;/span&gt;
    &lt;span class=&quot;s1&quot;&gt;&apos;method java.net.URL openConnection&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Get a handle on our approval object&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;approver&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ScriptApproval&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Approve each of them&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;requiredSigs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;each&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;approver&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;approveSignature&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;it&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;println&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;INFO: Jenkinsfile API calls signatures approved&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;After running this script, you can browse to
&lt;a href=&quot;http://localhost:8080/scriptApproval/&quot;&gt;Manage Jenkins -&amp;gt; In-process Script Approval&lt;/a&gt;
and see that there is a list of pre-whitelisted signatures that will allow our
Jenkinsfile to make the necessary calls to interact with the Docker Hub API.
You’ll notice that one signature in the list is marked in red to flag its
potential security issues: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;java.net.URL openConnection&lt;/code&gt; can be an
extremely dangerous method to allow in an unrestricted environment, so be
careful and make sure things are locked down in other ways.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;http://semver.org/&quot;&gt;http://semver.org/&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://jenkins.io/doc/book/pipeline/shared-libraries/&quot;&gt;https://jenkins.io/doc/book/pipeline/shared-libraries/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/making-http-api-calls-to-dockerhub-from-jenkinsfile/&quot;&gt;Making HTTP GET and POST Requests to Docker Hub in a Jenkinsfile&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on August 13, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 3 / Part 4: Configuring the GitHub Jenkins Plugin]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u3-p4-configuring-jenkins-github-groovy/" />
  <id>https://cicd.life/u3-p4-configuring-jenkins-github-groovy</id>
  <published>2017-07-31T00:20:00-04:00</published>
  <updated>2017-07-31T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;&lt;img src=&quot;/images/GithubJenkinsInteraction.png&quot; alt=&quot;GitHub + Jenkins&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;setting-up-the-jenkins--git-interactions&quot;&gt;Setting up the Jenkins &amp;lt;–&amp;gt; Git Interactions&lt;/h2&gt;

&lt;p&gt;One task that just about every build system needs to complete before doing
anything is getting the code that it will be testing and packaging. For most
software development shops in the world, this code is kept in a source code
repository in a version control system (VCS). The most common implementation
that I have been exposed to is Git. Linus Torvalds, along with a few kernel
developers, wrote the initial version of Git while working on the Linux kernel,
after BitKeeper’s copyright holder revoked free use&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. Git is what powers GitHub (surprise,
surprise) and is just about the most popular VCS out there. We are even using it
to develop our Jenkins CI system!&lt;/p&gt;

&lt;p&gt;In order to interact with GitHub, we are going to need a few things set up:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/Bender_Rodriguez.png&quot; alt=&quot;Bender Rodriguez&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A machine user that can be managed independently of your customers&lt;/li&gt;
  &lt;li&gt;A SSH key secret in the credentials store&lt;/li&gt;
  &lt;li&gt;Configuration that says to use the credentials in the store for cloning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once we have these in place we should be able to create a job that clones a repo
and does something.&lt;/p&gt;

&lt;h2 id=&quot;creating-a-machine-user&quot;&gt;Creating a machine user&lt;/h2&gt;

&lt;p&gt;Teams of developers change constantly, with people added to and removed from
the roster quite frequently. Using credentials tied to a person is bad
practice, as it does not separate the actions of the user from the actions of
the system. For this reason we will create our own machine user in GitHub and
configure it to do the builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: Do not stage or commit anything during this step. We will be encrypting
the sensitive files in the next step.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Browse to &lt;a href=&quot;http://github.com/&quot;&gt;http://github.com/&lt;/a&gt; in an Incognito window (we
need to create a new account and you’re probably logged in).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;Create an account for your machine user. It is not against GitHub’s T.O.S.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;
to create a machine user, but it is against them to script the account creation.
You can’t reuse your existing email address, so you must have a secondary email
set up for this user.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Verify the email and then browse to &lt;a href=&quot;https://github.com/settings/profile&quot;&gt;Settings&lt;/a&gt;
in the upper right hand corner.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Fill out the info that makes sense and then head over to &lt;a href=&quot;https://github.com/settings/keys&quot;&gt;SSH and GPG Keys&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;We now need to create an SSH key for the machine user. We will use a newer
key type (Ed25519), as recommended in this article: &lt;a href=&quot;https://blog.g3rt.nl/upgrade-your-ssh-keys.html&quot;&gt;https://blog.g3rt.nl/upgrade-your-ssh-keys.html&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# cd into our secrets dir&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;secure

&lt;span class=&quot;c&quot;&gt;# Generate an ed25519 key in the new OpenSSH private key format with a good&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# random passphrase, something like: &apos;t0$VQki3RWVim!K*rzA1&apos;. Make sure to note&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# the passphrase; we will be using it in a moment&lt;/span&gt;
ssh-keygen &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-a&lt;/span&gt; 100 &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; ed25519 &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; git-ssh-key.priv

&lt;span class=&quot;c&quot;&gt;# Rename the pubkey&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mv &lt;/span&gt;git-ssh-key.priv.pub git-ssh-key.pub

&lt;span class=&quot;c&quot;&gt;# Record that passphrase we used to generate the key in secure/git-ssh-key.pw&lt;/span&gt;
vi git-ssh-key.pw

&lt;span class=&quot;c&quot;&gt;# Record the GitHub username for the user in git-ssh-key.user&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;CICDLifeBuildBot&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; git-ssh-key.user&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;
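&lt;p&gt;Before uploading the key, it is worth a quick sanity check that the files are what we expect (the fingerprint itself will differ on your machine):&lt;/p&gt;

```shell
# Show the key type and fingerprint; the output should mention ED25519
ssh-keygen -l -f git-ssh-key.pub

# The private key header should read OPENSSH PRIVATE KEY, confirming the
# new format we requested with -o
head -n 1 git-ssh-key.priv
```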

&lt;ul&gt;
  &lt;li&gt;Now add the contents of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git-ssh-key.pub&lt;/code&gt; to the SSH Keys in your GitHub
Settings page with a good name. We now have a machine user and need to
configure Jenkins to use it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;/images/GithubSSHKey.png&quot; alt=&quot;GitHub SSH Key&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;add-the-secrets-to-the-credential-store&quot;&gt;Add the secrets to the credential store&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/images/black_cartoon_safe_small.png&quot; alt=&quot;Add to Credential Store&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now that we have Git credentials that should, in theory, work for cloning and
pushing, we need to add them to the Jenkins credential store. We will do this
via init-system Groovy scripts and a Docker volume mount.&lt;/p&gt;

&lt;h3 id=&quot;mount-the-secrets-into-the-master-container&quot;&gt;Mount the secrets into the master container&lt;/h3&gt;

&lt;p&gt;Since we are still in development, let’s just add our secure dir to the compose
file as a mount. When we get to prod we will change this up.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;deploymasterdocker-composeyml&quot;&gt;deploy/master/docker-compose.yml&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;nn&quot;&gt;...&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  # Jenkins master&apos;s configuration&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  master:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    image: mastering-jenkins/jenkins-master&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    ports:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - &quot;8080:8080&quot;&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    volumes:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - plugins:/usr/share/jenkins/ref/plugins&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - warfile:/usr/share/jenkins/ref/warfile&lt;/span&gt;
       &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;${PWD}/../../secure:/secure:ro&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  &lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  # Jenkins plugins configuration&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  plugins:&lt;/span&gt;
&lt;span class=&quot;nn&quot;&gt;...&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Restart the service using our script&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkinsdeploymaster&quot;&gt;PWD: ~/code/modern-jenkins/deploy/master&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Restart&lt;/span&gt;
./start.sh

&lt;span class=&quot;c&quot;&gt;# Confirm that the files have been mounted where we expect&lt;/span&gt;
docker &lt;span class=&quot;nb&quot;&gt;exec&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; master_master_1 &lt;span class=&quot;nb&quot;&gt;ls&lt;/span&gt; /secure
&lt;span class=&quot;c&quot;&gt;#README.md  git-ssh-key.priv  git-ssh-key.pub  git-ssh-key.pw  git-ssh-key.user&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Restarting the master mounts the directory &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/code/modern-jenkins/secure&lt;/code&gt; at
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/secure&lt;/code&gt; inside the container. Since we specified &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ro&lt;/code&gt; at the end of the volume
definition, the secrets themselves will be read-only. With the secrets available
inside the container, we can begin developing a script that configures the GitHub plugin.&lt;/p&gt;
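&lt;p&gt;As a quick check that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ro&lt;/code&gt; flag is doing its job, try writing inside the mount (a sketch; adjust the container name if yours differs):&lt;/p&gt;

```shell
# Attempt to create a file inside the read-only mount; Docker should refuse
# with a "Read-only file system" error
docker exec master_master_1 touch /secure/should-fail
```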

&lt;h2 id=&quot;writing-groovy-to-configure-github&quot;&gt;Writing Groovy to configure GitHub&lt;/h2&gt;

&lt;p&gt;Configuring Jenkins plugins with Groovy can seem more like an art than a
science. Each plugin operates a little differently, and since Jenkins has
been popular for so long, plugins are in all different states of repair. The
basic approach for any plugin is:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Read the values in from the filesystem or ENV that you need to configure&lt;/li&gt;
  &lt;li&gt;Get a handle on the configuration object for the plugin&lt;/li&gt;
  &lt;li&gt;Create a new instance of the configuration objects with the values you want&lt;/li&gt;
  &lt;li&gt;Update the main configuration with your newly created one&lt;/li&gt;
  &lt;li&gt;Save the config object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Any plugin can be configured this way; it’s normally a matter of familiarizing
yourself with the plugin’s data models and classes enough to create what you need.
Let’s take a look at the (well-documented) Groovy script that configures our
GitHub plugin.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;url-httplocalhost8080script&quot;&gt;URL: &lt;a href=&quot;http://localhost:8080/script&quot;&gt;http://localhost:8080/script&lt;/a&gt;&lt;/h4&gt;
&lt;h4 id=&quot;imagesjenkins-pluginsfilesinitgroovyd02-configure-github-clientgroovy&quot;&gt;images/jenkins-plugins/files/init.groovy.d/02-configure-github-client.groovy&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-groovy&quot; data-lang=&quot;groovy&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// 02-configure-github-client.groovy&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;// Thanks to chrish:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;// https://stackoverflow.com/questions/33613868/how-to-store-secret-text-or-file-using-groovy&lt;/span&gt;
&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;jenkins.model.*&lt;/span&gt;
&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;com.cloudbees.jenkins.plugins.sshcredentials.impl.*&lt;/span&gt;
&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;com.cloudbees.plugins.credentials.domains.*&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;com.cloudbees.plugins.credentials.*&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;


&lt;span class=&quot;c1&quot;&gt;// Read our values into strings from the volume mount&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;privKeyText&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;File&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;/secure/git-ssh-key.priv&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;text&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;trim&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;passPhraseText&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;File&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;/secure/git-ssh-key.pw&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;text&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;trim&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;sshUserText&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;File&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&apos;/secure/git-ssh-key.user&apos;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;text&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;trim&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Get a handle on our Jenkins instance&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;jenkins&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Jenkins&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getInstance&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Define the security domain. We&apos;re making these global but they can also&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;// be configured in a more restrictive manner. More on that later&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domain&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Domain&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;global&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Get our existing Credentials Store&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;store&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;jenkins&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getExtensionList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;s1&quot;&gt;&apos;com.cloudbees.plugins.credentials.SystemCredentialsProvider&apos;&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;)[&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;].&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getStore&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Create a new BasicSSHUserPrivateKey object with our values&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;gitHubSSHKey&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;BasicSSHUserPrivateKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;CredentialsScope&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;GLOBAL&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;git-ssh-key&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;sshUserText&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;BasicSSHUserPrivateKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;DirectEntryPrivateKeySource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;privKeyText&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;passPhraseText&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;GitHub Machine User SSH Creds&quot;&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Add the new object to the credentials store&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;store&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addCredentials&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;gitHubSSHKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Get the config descriptor for the overall Git config&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;desc&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;jenkins&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;hudson.plugins.git.GitSCM&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Set the username and email for git interactions&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;desc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setGlobalConfigName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;${sshUserText}&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;desc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setGlobalConfigEmail&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;${sshUserText}@cicd.life&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Save the descriptor&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;desc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;save&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Echo out (or log if you like) that we&apos;ve done something&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;println&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;INFO: GitHub Credentials and Configuration complete!&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;If things are set up and working right, you should see our INFO statement printed
below the script console. To confirm it worked, browse to the &lt;a href=&quot;http://localhost:8080/credentials/store/system/domain/_/&quot;&gt;credentials store&lt;/a&gt; and
confirm that we now have an SSH key secret with the id of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git-ssh-key&lt;/code&gt;. If you
click on “Update” you should see the SSH key (don’t worry, it is armored and you
can’t reveal the passphrase from here). While this is not 100% secure, it is still
much better than baking the credentials into the image or many of the other
methods people use that expose secrets. Incremental improvements are what I always say!&lt;/p&gt;

&lt;h2 id=&quot;add-the-groovy-to-the-docker-image&quot;&gt;Add the Groovy to the Docker image&lt;/h2&gt;

&lt;p&gt;After confirming that the script works, add it to the plugins Docker image by
dropping it in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;images/jenkins-plugins/files/init.groovy.d&lt;/code&gt; with the name of
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;02-configure-github-client.groovy&lt;/code&gt;. Then you can rebuild the image and restart
the service to verify that everything is working on boot.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Create images/jenkins-plugins/files/init.groovy.d/02-configure-github-client.groovy&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# with the content from the script console.&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Build the image&lt;/span&gt;
./images/jenkins-plugins/build.sh

&lt;span class=&quot;c&quot;&gt;# Restart the service&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;deploy/master
./start.sh
docker-compose logs &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Watch the log output from the containers to confirm that the scripts run
(each script’s name is printed to the console output) and that there are no
stack traces or other unexpected errors. When the system is fully up and running,
browse to the GUI and check that the credential has been installed. I think
you’ll be pleasantly surprised!&lt;/p&gt;
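&lt;p&gt;One way to confirm the script ran is to grep the logs for its name and its final log line (a sketch; the compose service name is assumed to be &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;master&lt;/code&gt;):&lt;/p&gt;

```shell
# Look for our init script and its completion message in the master logs
docker-compose logs master | grep 02-configure-github-client
docker-compose logs master | grep "GitHub Credentials and Configuration complete"
```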

&lt;h2 id=&quot;testing-our-changes&quot;&gt;Testing our changes&lt;/h2&gt;

&lt;p&gt;We should, in theory, be able to test our changes by creating a tiny job that
just clones a repo. Follow along and we’ll give it a run for its money:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Browse to &lt;a href=&quot;http://localhost:8080&quot;&gt;http://localhost:8080&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Click &lt;a href=&quot;http://localhost:8080/view/all/newJob&quot;&gt;New Item&lt;/a&gt; in the top left&lt;/li&gt;
  &lt;li&gt;Name it what you like, select freestyle project, and click OK. This will drop
you at the configuration screen.&lt;/li&gt;
  &lt;li&gt;Under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Source Code Management&lt;/code&gt;, select &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Git&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;For the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Repository URL&lt;/code&gt; enter: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git@github.com:technolo-g/modern-jenkins.git&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;For the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Credentials&lt;/code&gt; choose &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GitHub Machine User SSH Creds&lt;/code&gt;
&lt;img src=&quot;/images/test_job_scm.png&quot; alt=&quot;Git Config&quot; /&gt;&lt;/li&gt;
  &lt;li&gt;Scroll down to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Build&lt;/code&gt; and add a shell step&lt;/li&gt;
  &lt;li&gt;Within the shell step type &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;echo &quot;INFO: dir contents are&quot; &amp;amp;&amp;amp; ls&lt;/code&gt;
&lt;img src=&quot;/images/test_job_step.png&quot; alt=&quot;Shell Step&quot; /&gt;&lt;/li&gt;
  &lt;li&gt;Save the job and run it&lt;/li&gt;
  &lt;li&gt;The job should be able to clone the repo and list the directory contents. If
this is not the case, debug what is happening and get it working before moving
on. It is critical that the changes we make in each part work as expected.
&lt;img src=&quot;/images/test_job_console.png&quot; alt=&quot;Console Out&quot; /&gt;&lt;/li&gt;
&lt;/ul&gt;
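&lt;p&gt;If the clone fails, it can help to rule Jenkins out by testing the key directly against GitHub from your workstation (you will be prompted for the key’s passphrase):&lt;/p&gt;

```shell
# Authenticate to GitHub with the machine user's key; on success GitHub
# prints a greeting with the machine user's name and closes the connection
ssh -i secure/git-ssh-key.priv -T git@github.com
```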

&lt;p&gt;Congratulations! You have just performed your first and certainly not last
programmatic configuration of Jenkins! Give yourself a pat on the back because
that was no easy feat.&lt;/p&gt;

&lt;h2 id=&quot;commit-push-pr&quot;&gt;Commit, Push, PR&lt;/h2&gt;

&lt;p&gt;Take extra care when pushing branches that are supposed to contain encrypted
secrets. Accidents happen, and you want to know about them immediately.
Since we have added a number of secrets on this branch, check carefully that
all of them are encrypted and none are plaintext.&lt;/p&gt;
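&lt;p&gt;One way to double-check is to inspect the raw committed blobs, which bypass transcrypt’s smudge filter. This is a sketch that assumes transcrypt’s default base64-encoded OpenSSL output, whose salted ciphertext always begins with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;U2FsdGVkX1&lt;/code&gt;:&lt;/p&gt;

```shell
# Print the first bytes of each committed blob under secure/. Encrypted
# files show base64 beginning with "U2FsdGVkX1"; readable plaintext here
# means something is wrong
for f in $(git ls-files secure/); do
  printf '%s: ' "$f"
  git cat-file -p "HEAD:$f" | head -c 10
  echo
done
```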

&lt;p&gt;If you got wicked stuck (or even a little bit off track), take a look here to
see the repo’s state at the end of this post.
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit3-part4&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit3-part4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, we can definitely run a job now, so that’s good news. The bad news is that
we created it by hand (yuck!). The next installment will introduce you to a way
to eliminate the GUI for creating jobs entirely. I personally think it has
completely changed the way CI systems are built, and I would never hand-create a
job EVER again.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.pcworld.idg.com.au/article/129776/after_controversy_torvalds_begins_work_git_/&quot;&gt;https://www.pcworld.idg.com.au/article/129776/after_controversy_torvalds_begins_work_git_/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://developer.github.com/v3/guides/managing-deploy-keys/#machine-users&quot;&gt;https://developer.github.com/v3/guides/managing-deploy-keys/#machine-users&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u3-p4-configuring-jenkins-github-groovy/&quot;&gt;Modern Jenkins Unit 3 / Part 4: Configuring the GitHub Jenkins Plugin&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 31, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 3 / Part 3: Managing Secrets in GitHub]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u3-p3-transcrypting-secrets-github-repo/" />
  <id>https://cicd.life/u3-p3-transcrypting-secrets-github-repo</id>
  <published>2017-07-30T00:20:00-04:00</published>
  <updated>2017-07-30T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;encrypting-files-in-the-repo-with-transcrypt&quot;&gt;Encrypting files in the repo with Transcrypt&lt;/h1&gt;

&lt;p&gt;&lt;img src=&quot;/images/encryption.jpg&quot; alt=&quot;Encryption&quot; /&gt;&lt;/p&gt;

&lt;p&gt;In production we will hopefully have a secrets management system or, at a very
minimum, private repos in which to store encrypted secrets. For the purposes of this
demo, though, I will be using a public repo and do not want to expose any sensitive
data that we may need to add to it. &lt;strong&gt;WARNING: To be clear, storing any kind
of secrets in a git repo, encrypted or not, may not be a good idea. Consult your
local security team for advice.&lt;/strong&gt; We, however, don’t have any state secrets for
this demo and very few options.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/openssl-logo-small.png&quot; alt=&quot;OpenSSL&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Transcrypt is a shell script that uses OpenSSL to encrypt and decrypt files in
your git repo that are noted in the .gitattributes file. Let’s initialize our
repo and confirm that it is working.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# On MacOS you can use brew&lt;/span&gt;
brew &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;transcrypt

&lt;span class=&quot;c&quot;&gt;# On Linux you have to install OpenSSL, then place the script in your PATH&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;yum &lt;span class=&quot;nb&quot;&gt;install &lt;/span&gt;openssl
  
&lt;span class=&quot;c&quot;&gt;# Download the script to our PATH&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;wget &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; /usr/local/sbin/transcrypt https://raw.githubusercontent.com/elasticdog/transcrypt/master/transcrypt
  
&lt;span class=&quot;c&quot;&gt;# Make it executable&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;sudo chmod&lt;/span&gt; +x /usr/local/sbin/transcrypt

&lt;span class=&quot;c&quot;&gt;# Confirm it works&lt;/span&gt;
transcrypt &lt;span class=&quot;nt&quot;&gt;--help&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;strong&gt;NOTE: More information on the software can be found on the README here:
&lt;a href=&quot;https://github.com/elasticdog/transcrypt&quot;&gt;https://github.com/elasticdog/transcrypt&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;initialize-the-repo&quot;&gt;Initialize the repo&lt;/h2&gt;

&lt;p&gt;We will initialize Transcrypt within the repo on a clean branch. This will
hopefully ensure that we can configure Transcrypt prior to adding any secrets
to the repo.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Check out a branch and initialize transcrypt&lt;/span&gt;
git checkout &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; chore-install_transcrypt

transcrypt
&lt;span class=&quot;c&quot;&gt;# Encrypt using which cipher? [aes-256-cbc]&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Generate a random password? [Y/n]&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Repository metadata:&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   GIT_WORK_TREE:  /Users/matt.bajor/code/modern-jenkins&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   GIT_DIR:        /Users/matt.bajor/code/modern-jenkins/.git&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   GIT_ATTRIBUTES: /Users/matt.bajor/code/modern-jenkins/.gitattributes&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# The following configuration will be saved:&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   CIPHER:   aes-256-cbc&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#   PASSWORD: &amp;lt;a really good password, trust me&amp;gt;&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Does this look correct? [Y/n]&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# The repository has been successfully configured by transcrypt.&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Add the generated PASSWORD to your LastPass, keychain, text document on your
desktop, or wherever you store secure passwords. It is very important not to
lose that passphrase.&lt;/p&gt;

&lt;p&gt;We now want to get the .gitattributes file committed and merged so we can
test whether secrets encryption is working as expected. Let’s branch, commit,
push, and PR.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-2&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Initialize Transcrypt in the repo&quot;&lt;/span&gt;
git push origin chore-install_transcrypt

&lt;span class=&quot;c&quot;&gt;# PR, review, merge and then catchup master&lt;/span&gt;
git checkout master
git pull
git checkout &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; test-secrets&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Let’s create a secrets dir and initialize it with a README to confirm that
transcrypt is working as expected, without potentially exposing secrets on
the internet. Transcrypt works by reading the .gitattributes file in the
repo root to find paths to encrypt. Files matching those paths are
encrypted before being added to the tree. You can still view the changes
as plaintext locally, which sometimes leads to confusion as to whether or
not it’s working.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-3&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Look at the .gitattributes&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; .gitattributes
&lt;span class=&quot;c&quot;&gt;# #pattern  filter=crypt diff=crypt&lt;/span&gt;
  
&lt;span class=&quot;c&quot;&gt;# Echo our new pattern in there&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;secure/* filter=crypt diff=crypt&apos;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; .gitattributes
  
&lt;span class=&quot;c&quot;&gt;# Confirm that the file looks good:&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; .gitattributes
&lt;span class=&quot;c&quot;&gt;# #pattern  filter=crypt diff=crypt&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# secure/* filter=crypt diff=crypt &lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Add a README with a bit of info&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; secure/
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;# Transcrypt Secrets&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;This repo is encrypted with transcrypt&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; secure/README.md

&lt;span class=&quot;c&quot;&gt;# Commit and push&lt;/span&gt;
git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Add test secrets&quot;&lt;/span&gt;
git push origin test-secrets&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;
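&lt;p&gt;Before opening the PR, you can also sanity check locally that the committed blob is ciphertext. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git show&lt;/code&gt; prints the stored version of a file without applying the smudge filter, so for an encrypted path it should print Base64-encoded OpenSSL output (typically starting with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;U2FsdGVkX1&lt;/code&gt;) rather than your plaintext:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# The blob in the tree should be ciphertext, not plaintext&lt;/span&gt;
git show HEAD:secure/README.md&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;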

&lt;p&gt;When opening the PR, you should notice that there are two files changed:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;.gitattributes&lt;/li&gt;
  &lt;li&gt;secure/README.md&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;/images/transcrypt_commit.png&quot; alt=&quot;Transcrypt Commit&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You should also notice that the contents of README.md are encrypted. If this is
the case, give yourself a thumbs up, merge the PR, and update your local branch.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/danger_small.png&quot; alt=&quot;Danger&quot; class=&quot;image-left&quot; /&gt;&lt;img src=&quot;/images/danger_small.png&quot; alt=&quot;Danger&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: If you see that README.md is not encrypted, do not open the PR. Instead,
delete your branch and start over. Follow the instructions on the Transcrypt site
if you need more help or information. It is important to confirm that the encryption
mechanism is working the way we expect before we put secrets into the repo.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s take this time to add a line to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;PULL_REQUEST_TEMPLATE.md&lt;/code&gt; file reminding
us to check for unencrypted secrets:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pull_request_templatemd&quot;&gt;./PULL_REQUEST_TEMPLATE.md&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;...
&lt;span class=&quot;c&quot;&gt;# #### Contribution checklist&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
  - &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt; THERE ARE NO UNENCRYPTED SECRETS HERE
&lt;span class=&quot;c&quot;&gt;# - [ ] The branch is named something meaningful&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# - [ ] The branch is rebased off of current master&lt;/span&gt;
...&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Now that the repo has been initialized with Transcrypt, we can begin adding some
configuration that depends on credentials. In the next segment, we will create
a machine user for GitHub and configure the Git plugin to use those credentials
when cloning.&lt;/p&gt;

&lt;p&gt;The repo from this section can be found under the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit3-part3&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit3-part3&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit3-part3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u3-p4-configuring-jenkins-github-groovy/&quot;&gt;Next Post: Configuring the Jenkins GitHub plugin programmatically with Groovy&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u3-p3-transcrypting-secrets-github-repo/&quot;&gt;Modern Jenkins Unit 3 / Part 3: Managing Secrets in GitHub&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 30, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 3 / Part 2: Configure Jenkins URL]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u3-p2-configure-jenkins-url-with-groovy/" />
  <id>https://cicd.life/u3-p2-configure-jenkins-url-with-groovy</id>
  <published>2017-07-29T00:20:00-04:00</published>
  <updated>2017-07-29T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h2 id=&quot;configuring-the-jenkins-url&quot;&gt;Configuring the Jenkins URL&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/images/jenkins_url.png&quot; alt=&quot;Configure Jenkins URL with Groovy&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Currently it just so happens that the Jenkins URL in the management console does
have the proper config, as it is set to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:8080&lt;/code&gt;. It is merely a
coincidence that the default value and the current address match, though. Once we
start moving this thing around, it will be very important that it is set to the
right value, or else we’ll have all kinds of strange issues. Since it will definitely
have to be configured, let’s start here. It doesn’t hurt that it is a fairly
simple example of configuring Jenkins with an environment variable passed by
Docker Compose.&lt;/p&gt;

&lt;h2 id=&quot;passing-from-docker-compose&quot;&gt;Passing from Docker Compose&lt;/h2&gt;

&lt;p&gt;This is a setting that will change from environment to environment, so I
think the best place to set it is in the compose file. Let’s create a variable
in there that we can read during init to make the right configuration.
Edit the compose file and create a JENKINS_UI_URL env variable, as well as volumes
for the configs themselves.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;deploymasterdocker-composeyml&quot;&gt;deploy/master/docker-compose.yml&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;c1&quot;&gt;#---&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;## deploy/master/docker-compose.yml&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;## Define the version of the compose file we&apos;re using&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#version: &apos;3.3&apos;&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;## Define our services&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#services:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  # Jenkins master&apos;s configuration&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  master:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    image: modernjenkins/jenkins-master&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    ports:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - &quot;8080:8080&quot;&lt;/span&gt;
     &lt;span class=&quot;na&quot;&gt;environment&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;JENKINS_UI_URL=http://localhost:8080&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    volumes:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - plugins:/usr/share/jenkins/ref/plugins&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - warfile:/usr/share/jenkins/ref/warfile&lt;/span&gt;
       &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;groovy:/var/jenkins_home/init.groovy.d&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  # Jenkins plugins&apos; configuration&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  plugins:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    image: modernjenkins/jenkins-plugins&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#    volumes:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - plugins:/usr/share/jenkins/ref/plugins&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#      - warfile:/usr/share/jenkins/ref/warfile&lt;/span&gt;
       &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;groovy:/usr/share/jenkins/ref/init.groovy.d&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;## Define named volumes. These are what we use to share the data from one&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;## container to another, thereby making our jenkins.war and plugins available&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#volumes:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  plugins:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#  warfile:&lt;/span&gt;
   &lt;span class=&quot;na&quot;&gt;groovy&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;#&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Now that it is set, we should be able to write a groovy init script to read it.
But first, we will have to restart Jenkins to pick up the new environment:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;deploy/master
docker-compose down &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt;
docker-compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Confirm that the variable is there:&lt;/span&gt;
docker inspect master_master_1 | &lt;span class=&quot;nb&quot;&gt;grep &lt;/span&gt;JENKINS_UI

&lt;span class=&quot;c&quot;&gt;# Should output&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# &quot;JENKINS_UI_URL=http://localhost:8080&quot;,&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;With the environment variable now being available to us, we can use the script
console at &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;localhost&lt;/code&gt; to develop our script that will set the URL of our
Jenkins instance.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;url-httplocalhost8080script&quot;&gt;URL: &lt;a href=&quot;http://localhost:8080/script&quot;&gt;http://localhost:8080/script&lt;/a&gt;&lt;/h4&gt;
&lt;h4 id=&quot;imagesjenkins-pluginsfilesinitgroovyd01-cofigure-jenkins-urlgroovy&quot;&gt;images/jenkins-plugins/files/init.groovy.d/01-cofigure-jenkins-url.groovy&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-groovy&quot; data-lang=&quot;groovy&quot;&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;jenkins.model.*&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Read the environment variable&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;url&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;System&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;JENKINS_UI_URL&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Get the config from our running instance&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;urlConfig&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;JenkinsLocationConfiguration&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Set the config to be the value of the env var&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;urlConfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Save the configuration&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;urlConfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;save&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;// Print the results&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;println&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;Jenkins URL Set to &quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;url&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;This should output:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/url_output.png&quot; alt=&quot;URL Output&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;deploying-our-groovy-jenkins-configs&quot;&gt;Deploying our Groovy Jenkins Configs&lt;/h2&gt;

&lt;p&gt;We now have a working script that will manage the URL of Jenkins in any
environment that we deploy into, simply by setting a variable
in the compose file. Now we need to make this available to the master so that
it can be executed on startup. To do that, we will add a directory to the
plugins image and then mount it into the master in a similar way to how the
war and plugins work.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-plugins
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; files/init.groovy.d/

&lt;span class=&quot;c&quot;&gt;# Add the file above to this directory as&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# files/init.groovy.d/01-cofigure-jenkins-url.groovy&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Add the full directory of Groovy configuration (instead of the individual
scripts) to the Docker image, then export it as a volume.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;imagesjenkins-pluginsdockerfile&quot;&gt;images/jenkins-plugins/Dockerfile&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;...
&lt;span class=&quot;c&quot;&gt;#  # Install our base set of plugins and their dependencies that are listed in&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  # plugins.txt&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  ADD files/plugins.txt /tmp/plugins-main.txt&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  RUN install-plugins.sh `cat /tmp/plugins-main.txt`&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
   &lt;span class=&quot;c&quot;&gt;# Add our groovy init files&lt;/span&gt;
   ADD files/init.groovy.d /usr/share/jenkins/ref/init.groovy.d
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
...

&lt;span class=&quot;c&quot;&gt;#  # Export our war and plugin set as volumes&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  VOLUME /usr/share/jenkins/ref/plugins&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  VOLUME /usr/share/jenkins/ref/warfile&lt;/span&gt;
   VOLUME /usr/share/jenkins/ref/init.groovy.d
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  ...&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;We can now rebuild the image and pick up our new config that should (hopefully)
configure the URL of our Jenkins on boot!&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-2&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;./images/jenkins-plugins/build.sh

&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;deploy/master/
docker-compose down &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt;
docker system prune &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;
docker-compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;
docker-compose logs &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;OK, you should now see that the URL in the Jenkins management console is set to
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:8080&lt;/code&gt;! Ahhhh, it was like that before? Hmm. OK, well then
let’s break it to confirm it is working. Modify the value in the compose file to
something different and restart Jenkins:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkinsdeploymaster&quot;&gt;PWD: ~/code/modern-jenkins/deploy/master&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Change the JENKINS_UI_URL to something different in docker-compose.yml&lt;/span&gt;
perl &lt;span class=&quot;nt&quot;&gt;-pi&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;s/JENKINS_UI_URL=.*/JENKINS_UI_URL=http:\/\/derpyhooves/g&apos;&lt;/span&gt; docker-compose.yml

&lt;span class=&quot;c&quot;&gt;# Restart the stack&lt;/span&gt;
docker-compose down &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt;
docker-compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;
docker-compos logs&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;/images/homer_fatfinger.png&quot; alt=&quot;Homer Fatfinger&quot; class=&quot;image-right&quot; height=&quot;300px&quot; width=&quot;300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Did that work? A typo you say? I can’t imagine it. I’ve typed docker-compose
over 1000 times today; it’s not possible for me to misspell it. In addition, if
you notice the difference between this set of commands and the earlier one, we
seem to be drifting toward chaos. Let’s take note of that, but address it after we
confirm that this current change is working as expected.&lt;/p&gt;

&lt;p&gt;Gooood, it does work :) The URL in the management console has been updated to the
new, wrong, custom value, so we can confirm it works. Let’s commit our code now,
then attend to that tiny little pile of tech debt we found (starting and stopping
the system differently every time is definitely tech debt). &lt;strong&gt;NOTE: Don’t forget
to set the JENKINS_UI_URL back to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;http://localhost:8080&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
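
&lt;p&gt;Resetting it uses the same perl one-liner as before, just pointed back at the local default:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Put the URL back to the local default before committing&lt;/span&gt;
perl -pi -e &apos;s/JENKINS_UI_URL=.*/JENKINS_UI_URL=http:\/\/localhost:8080/g&apos; docker-compose.yml&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;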

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-3&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;git checkout &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; feat-configure_jenkins_url
git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Configure the Jenkins URL with Groovy

This change adds an environment variable to the compose file
that sets the URL of the Jenkins instance upon boot. This is
done via the script added to init.groovy.d&quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;squirrel&quot;&gt;Squirrel!&lt;/h2&gt;

&lt;p&gt;&lt;img src=&quot;/images/squirrel.jpg&quot; alt=&quot;Squirrel&quot; /&gt;&lt;/p&gt;

&lt;p&gt;So we’ve noticed something a bit stinky in our code as we’ve been going about
our business. This happens very often in our work lives, and attending to little
tech debts like this is critical to having quality software. I certainly
encourage everyone to leave the code better than they found it and to refactor
things when they see something turning into a turd-like object.&lt;/p&gt;

&lt;p&gt;I also encourage you to make note of these things and take care of them &lt;strong&gt;after&lt;/strong&gt;
you are in a place to save game. Switching context between one problem and
another can be very expensive mind- and time-wise, so feel free to take a note,
create a ticket or something, then finish what you are doing (unless there is
an actual issue that needs to be addressed before your code will work). Once
you’ve submitted your PR for review, then jump on that Jira and refactor your
heart away.&lt;/p&gt;

&lt;p&gt;We need to do everything, but we can only do one thing at a time. Try to be
aware of time management.&lt;/p&gt;

&lt;h2 id=&quot;lets-get-that-squirrel&quot;&gt;Let’s get that squirrel&lt;/h2&gt;

&lt;p&gt;What we noticed was that we were beginning to type the command differently
every time we did it. That seems like it should be replaced by a shell script.
Let’s create a start script so that this thing starts consistently every time:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;deploymasterstartsh&quot;&gt;deploy/master/start.sh&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash -el&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;INFO: (re)Starting Jenkins&quot;&lt;/span&gt;
docker-compose down &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt;
docker-compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;INFO: Use the following command to watch the logs: &quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;docker-compose logs -f&quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Write that guy out to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;deploy/master/start.sh&lt;/code&gt;, make it executable with
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;chmod +x deploy/master/start.sh&lt;/code&gt;, and we’ve now got ourselves a script that
will consistently restart our app.&lt;/p&gt;

&lt;p&gt;Add that onto the branch with a good message and push, PR, merge, etc..
See, you’re getting the hang of it!&lt;/p&gt;

&lt;p&gt;Next let’s get ready to handle some secrets. Shhhhhh!&lt;/p&gt;

&lt;p&gt;The repo from this section can be found under the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit3-part2&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit3-part2&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit3-part2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u3-p3-transcrypting-secrets-github-repo/&quot;&gt;Next Post: Managing Jenkins secrets in GitHub with Transcrypt&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u3-p2-configure-jenkins-url-with-groovy/&quot;&gt;Modern Jenkins Unit 3 / Part 2: Configure Jenkins URL&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 29, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 3 / Part 1: Intro to the Jenkins Groovy Init System]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u3-p1-intro-jenkins-groovy-init-system/" />
  <id>https://cicd.life/u3-p1-intro-jenkins-groovy-init-system</id>
  <published>2017-07-28T00:20:00-04:00</published>
  <updated>2017-07-28T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;&lt;img src=&quot;/images/frink_and_bender.png&quot; alt=&quot;Professor Frink Configuring Bender&quot; /&gt;
&lt;em&gt;Image from &lt;a href=&quot;http://www.simonsingh.net/Simpsons_Mathematics/simpsorama/&quot;&gt;Simpsons Mathematics&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h1 id=&quot;configuring-the-jenkins-master-on-boot-with-groovy&quot;&gt;Configuring the Jenkins Master on Boot with Groovy&lt;/h1&gt;

&lt;p&gt;One of the greatest strengths of Jenkins is its ability to do almost anything.
This comes from its extremely customizable nature. I think of it as a scheduler
that, with enough configuration, can run any given task on any given trigger. What
we are building, specifically, is a system to build, test, and deploy our own
piece of custom software, which is most likely at least a little bit different
from anything else out there that requires the same tasks. We will need to use
Jenkins’ ultra-powerful customization toolkit, but do it in a way that strives
to be:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Deterministic:&lt;/strong&gt; Given a set of inputs, there should be only one output.
If that output is not as we expect, there is a problem that we should spend
time fixing. ie: removing “flaky” tests that fail for “no reason”.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Reliable:&lt;/strong&gt; The system should have high availability to the users who depend
on it. Having a system that is sometimes down and sometimes up does not
encourage teams to inject automated builds into their workflows.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Repeatable:&lt;/strong&gt; This system should be able to be recreated without persistent
data from the repo.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Agile:&lt;/strong&gt; The system should evolve to meet the needs of its consumers in a
sustainable way. If one team’s jobs or configs are breaking another team’s
pipelines, it is a good indication that it is time to split the monolith
into two independent systems.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Scalable:&lt;/strong&gt; As the system becomes more popular, more people are going to
utilize it. When this happens, it’s critical to be able to support the
increased capacity in not only job runners, but also collaboration from
more teams.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Luckily, we can treat the code that configures the system in the same way we
treat the code that builds and runs the system :)&lt;/p&gt;

&lt;h1 id=&quot;intro-to-the-jenkins-init-system&quot;&gt;Intro to the Jenkins init system&lt;/h1&gt;

&lt;p&gt;Jenkins has a not-much-talked-about feature that I have yet to see much
information on: the Jenkins Groovy Init system. Really, the only
documentation I have been able to find is a pair of articles on the Jenkins
wiki: &lt;a href=&quot;https://wiki.jenkins.io/display/JENKINS/Post-initialization+script&quot;&gt;https://wiki.jenkins.io/display/JENKINS/Post-initialization+script&lt;/a&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  Post-initialization script
  
  Created by Kohsuke Kawaguchi, last modified by Daniel Beck on Dec 10, 2015
  
  You can create a Groovy script file &lt;span class=&quot;nv&quot;&gt;$JENKINS_HOME&lt;/span&gt;/init.groovy, or any .groovy
  file &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;the directory &lt;span class=&quot;nv&quot;&gt;$JENKINS_HOME&lt;/span&gt;/init.groovy.d/, &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;See Configuring Jenkins
  upon start up &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;more info&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; to run some additional things right after Jenkins
  starts up. This script can access classes &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;Jenkins and all the plugins. So &lt;span class=&quot;k&quot;&gt;for
  &lt;/span&gt;example, you can write something like:
  import jenkins.model.&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;
  
  // start &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;the state that doesn&lt;span class=&quot;s1&quot;&gt;&apos;t do any build.
  Jenkins.instance.doQuietDown();
  
  
  Output is logged to the Jenkins log file. For Debian based users, this is
  /var/log/jenkins/jenkins.log&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;which points to this: &lt;a href=&quot;https://wiki.jenkins.io/display/JENKINS/Configuring+Jenkins+upon+start+up&quot;&gt;https://wiki.jenkins.io/display/JENKINS/Configuring+Jenkins+upon+start+up&lt;/a&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  Jenkins can execute initialization scripts written &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;Groovy &lt;span class=&quot;k&quot;&gt;if &lt;/span&gt;they are present
  during start up. See Groovy Hook Script &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;details. The hook name &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;this
  event is &lt;span class=&quot;s2&quot;&gt;&quot;init&quot;&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt; Those executions happen at the very end of the initialization,
  and therefore this can be used to pre-configure Jenkins &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;a particular OEM
  situation. 
  
  While one can always write a plugin to participate &lt;span class=&quot;k&quot;&gt;in &lt;/span&gt;the initialization of
  Jenkins, this script-based approach can be useful as it doesn&lt;span class=&quot;s1&quot;&gt;&apos;t require any
  compilation and packaging.&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Not super impressive documentation considering how powerful this mechanism is.
Using this init system, you are able to configure any aspect of the master that
you can through &quot;Manage Jenkins&quot;. This includes (but is not limited to):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The URL and name of this instance&lt;/li&gt;
  &lt;li&gt;Authentication and security settings&lt;/li&gt;
  &lt;li&gt;Secrets in the credential store&lt;/li&gt;
  &lt;li&gt;Global plugin configuration&lt;/li&gt;
  &lt;li&gt;Global tool configuration&lt;/li&gt;
  &lt;li&gt;Adding and removing nodes&lt;/li&gt;
  &lt;li&gt;Creation of jobs (though we’ll only use it to create one special job)&lt;/li&gt;
&lt;/ul&gt;
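
&lt;p&gt;As a tiny illustrative sketch (not part of this unit’s repo), an init script dropped into &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;init.groovy.d&lt;/code&gt; that keeps builds off of the master might look like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-groovy&quot; data-lang=&quot;groovy&quot;&gt;import jenkins.model.*

// Hypothetical example: give the master zero executors so agents do the building
def jenkins = Jenkins.getInstance()
jenkins.setNumExecutors(0)
jenkins.save()&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;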

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;/images/GroovyConfiguration.png&quot; alt=&quot;Groovy Configuration&quot; /&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;/images/script_console.png&quot; alt=&quot;Jenkins Groovy Script Console&quot; class=&quot;image-left&quot; height=&quot;300px&quot; width=&quot;300px&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;the-groovy-script-console&quot;&gt;The Groovy Script Console&lt;/h2&gt;

&lt;p&gt;Not only does it support configuring so much of the system, it also has a direct
REPL-like environment to code in. This is called the &quot;Script Console&quot; (available
at http://localhost:8080/script on our instance) and can be considered a shell
into the Jenkins system. It has the same abilities as the init system’s
shell, so it makes for super quick and easy development of the scripts that we
will use.&lt;/p&gt;

&lt;h2 id=&quot;jenkins-groovy-hello-world&quot;&gt;Jenkins Groovy Hello World&lt;/h2&gt;

&lt;p&gt;Let’s kill two stones with one bird. We will do a quick little Hello World that
will introduce you to both the syntax of Groovy as well as how to use the script
console.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Stand up your development Jenkins (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cd deploy/master &amp;amp;&amp;amp; docker-compose up -d&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;Browse to the script console at &lt;a href=&quot;http://localhost:8080/script&quot;&gt;http://localhost:8080/script&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Enter the following into the box in front of you:&lt;/li&gt;
&lt;/ul&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;url-httplocalhost8080script&quot;&gt;URL: &lt;a href=&quot;http://localhost:8080/script&quot;&gt;http://localhost:8080/script&lt;/a&gt;&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-groovy&quot; data-lang=&quot;groovy&quot;&gt;  &lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;jenkins.model.*&lt;/span&gt;

  &lt;span class=&quot;kt&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;jenkins&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Jenkins&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getInstance&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;jenkins&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setSystemMessage&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;I&apos;m Bender, baby! Oh god, please insert liquor!&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;// You can change the message if you please. I&apos;m not at the office :)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;ul&gt;
  &lt;li&gt;Browse back to the main Jenkins interface&lt;/li&gt;
  &lt;li&gt;Check out the cool message for all the users of your system to see. I bet your
boss will love it!&lt;/li&gt;
&lt;/ul&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;/images/BenderInsertLiquor.png&quot; alt=&quot;I&apos;m Bender Baby!&quot; /&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;Nothing too crazy, but this should give you a good idea of how we are going to
configure our master to build our particular brand of software. Inside Old Mr.
Jenkins is just a series of objects (I think of them as his organs) that we can
grab, squeeze, and modify to fit our needs. I hope Janky is ready to play
“Operation”! &lt;em&gt;(buzzer sounds)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u3-p2-configure-jenkins-url-with-groovy/&quot;&gt;Next Post: Configure Jenkins URL with Groovy on boot&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u3-p1-intro-jenkins-groovy-init-system/&quot;&gt;Modern Jenkins Unit 3 / Part 1: Intro to the Jenkins Groovy Init System&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 28, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 2 / Part 5: Starting Jenkins with Docker Compose]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u2-p5-writing-docker-compose-file/" />
  <id>https://cicd.life/u2-p5-writing-docker-compose-file</id>
  <published>2017-07-27T00:20:00-04:00</published>
  <updated>2017-07-27T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;the-good-ole-days&quot;&gt;“The Good Ole’ Days”&lt;/h1&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;/images/trollface_small.png&quot; alt=&quot;Trollface&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Back in aught-eight when I was a kid, the way we deployed complex services was a
1,000-line shell script that was neither idempotent nor checked into SCM. It just
sat there at an HTTP endpoint, ready for &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;curl | sudo bash&lt;/code&gt;ing (I guess normally sudo
wasn’t an issue, as we ran as root :P). If it needed a tweak, you could just ssh
to the server, fire up pico, make the change, deploy your stuff, then sit back and
relax while wondering why the rest of the team was complaining about deploys not
working. After all, it Works on My Machine :)&lt;/p&gt;

&lt;p&gt;While I look back with fondness on the days of yore, I can only imagine it is the
fresh Colorado air making me forget how crappy it was to have one deploy
work and then have literally the same thing fail 5 minutes later because someone
was mucking with the script. So we’re not going to do that.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/docker-compose-logo-small.png&quot; alt=&quot;Docker Compose&quot; class=&quot;image-left&quot; height=&quot;300px&quot; width=&quot;170px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Instead, we are going to use something called Docker Compose. Docker Compose is
a project by Docker that was originally based on a tool called &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;fig&lt;/code&gt;.
Unlike the rest of the Docker toolkit, docker-compose is a Python application that
uses YAML to describe a service or set of services. It allows you to define
pretty much every aspect of how the services are run, what the networking and
storage systems will look like, and how to fine-tune your app via
environment variables.&lt;/p&gt;

&lt;p&gt;There is a ton of info out there on Docker Compose &lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; so please do take a
peek. For now, let’s roll forward into the unknown and create our first compose file.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;deploymasterdocker-composeyml&quot;&gt;deploy/master/docker-compose.yml&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;# deploy/master/docker-compose.yml&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;# Define the version of the compose file we&apos;re using&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;version&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;3.3&apos;&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Define our services&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;services&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Jenkins master&apos;s configuration&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;master&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;modernjenkins/jenkins-master&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;ports&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;8080:8080&quot;&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;plugins:/usr/share/jenkins/ref/plugins&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;warfile:/usr/share/jenkins/ref/warfile&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Jenkins plugins&apos; configuration&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;plugins&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;image&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;modernjenkins/jenkins-plugins&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;plugins:/usr/share/jenkins/ref/plugins&lt;/span&gt;
      &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;warfile:/usr/share/jenkins/ref/warfile&lt;/span&gt;

&lt;span class=&quot;c1&quot;&gt;# Define named volumes. These are what we use to share the data from one&lt;/span&gt;
&lt;span class=&quot;c1&quot;&gt;# container to another, thereby making our jenkins.war and plugins available&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;volumes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;plugins&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;warfile&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;A compose file is made up of a few sections, as in the example above. Here are
the ones we’re using:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;version&lt;/code&gt; &lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;: Define what version of compose file this is&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;services&lt;/code&gt; &lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;: This is where we list all of the services that we need running.
 This example is fairly straightforward, but it is possible to include any
 service your app needs in this section. You’re basically describing the full
 system and its interactions.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;volumes&lt;/code&gt; &lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;: This is where data storage is described. We’re using it to define
two volumes, one for the plugins and one for the warfile. When a named volume is
first created, any data already at that path in the container is copied into it.
Since the master container has nothing at that path, the volume is populated from
the plugins container, which is exactly what we want.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;networks&lt;/code&gt; &lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;: Not used here, but a way to define all container networking.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a fairly simple compose file, so it should be easy to understand. You
may also notice that it’s succinct and to the point while still being super
readable. This is why I like Docker Compose: we can describe something
extremely complex (not so much in this case) as an easy-to-read YAML file.&lt;/p&gt;
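
&lt;p&gt;Before we light the fuse, it’s worth knowing that compose files can be linted
without starting anything. A quick sketch: the file contents here are a trimmed
copy of the compose file above, and the temp-dir path is purely illustrative:&lt;/p&gt;

```shell
# Recreate a minimal compose file in a temp dir and lint it with
# `docker-compose config`, which parses the YAML and exits non-zero
# on errors. Guarded for machines without docker-compose installed.
workdir=$(mktemp -d)
printf '%s\n' \
  "version: '3.3'" \
  "services:" \
  "  master:" \
  "    image: modernjenkins/jenkins-master" \
  "    ports:" \
  "      - \"8080:8080\"" \
  "    volumes:" \
  "      - plugins:/usr/share/jenkins/ref/plugins" \
  "volumes:" \
  "  plugins:" \
  > "$workdir/docker-compose.yml"

if command -v docker-compose >/dev/null 2>/dev/null; then
  docker-compose -f "$workdir/docker-compose.yml" config --quiet
  echo "compose file linted"
else
  echo "docker-compose not installed; skipping lint"
fi
```

&lt;p&gt;Running a lint like this before a deploy catches YAML typos long before they
can take the master down.&lt;/p&gt;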

&lt;h2 id=&quot;test-it-out&quot;&gt;Test it out&lt;/h2&gt;

&lt;p&gt;Ok, here we go, girls and boys. The big reveal. Our rocket-powered motorcycle is
fueled up and we’re ready to jump the Snake River!&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Compose up&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;deploy/master
docker-compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;
docker-compose logs &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;//www.youtube.com/embed/EF8GhC-T_Mo&quot; frameborder=&quot;0&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;The Jenkins app should be starting up now and once it says “Jenkins is fully up
and running” you should be able to browse to the UI at
&lt;a href=&quot;http://localhost:8080&quot;&gt;http://localhost:8080&lt;/a&gt; and bask in its Janky glory.&lt;/p&gt;

&lt;p&gt;Now that we know how to start / stop it, we should add this to the documentation.
It is important to keep these docs up to date so that anyone can jump in and
start using it without having to do a massive amount of research. Let’s add
this to the README:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;deployreadmemd&quot;&gt;deploy/README.md&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Deployment Scripts&lt;/span&gt;

Jenkins is deployed via Docker Compose. In order to run the service locally, use
the following commands:

&lt;span class=&quot;sb&quot;&gt;```&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Get into the deploy directory&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;deploy/master

&lt;span class=&quot;c&quot;&gt;# Start the service as a daemon&lt;/span&gt;
docker-compose up &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# View logs&lt;/span&gt;
docker-compose logs &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Stop Jenkins&lt;/span&gt;
docker-compose down &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Pull new images&lt;/span&gt;
docker-compose pull
&lt;span class=&quot;sb&quot;&gt;```&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;wtf-matt-a-6-part-blog-series-to-replace-java--jar-jenkinswar-and-6-clicks&quot;&gt;WTF Matt, a 6 part blog series to replace “java -jar jenkins.war” and 6 clicks?&lt;/h2&gt;

&lt;p&gt;hahaha, well you got me there! JK. While &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;java -jar jenkins.war&lt;/code&gt; and a few mouse
clicks could get us to the same place, it would not have been wasting nearly
enough bits :trollface:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/crazy_like_fox.jpg&quot; alt=&quot;Crazy like Fox News&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Obviously there are two potential reasons why we did this:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;I am a crazy person&lt;/li&gt;
  &lt;li&gt;I am a just-crazy-enough person&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Luckily for you, the answer is the latter. If you’ll recall, the whole reason
I’m writing this is because I’m tired of people showing me their ugly Jenkins
and encouraging me to tell them how cute it is.&lt;/p&gt;

&lt;p&gt;The problem with most of these monstrosities is not that they don’t build the
software. If it didn’t do that it wouldn’t exist at all. The problem is that
they are &lt;strong&gt;extremely hard, time consuming, dangerous, and unfun to work on.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/spaghetti.jpg&quot; alt=&quot;Spaghetti Code&quot; class=&quot;image-left&quot; /&gt;&lt;/p&gt;

&lt;p&gt;That’s fine for something that never changes, but as it turns out we’re making
things for the internet, which is constantly growing and changing, meaning that
we constantly need to change and adapt in order to move forward. This very much
applies to our build system. It is a system that eventually everyone in the
company comes to rely on, from C-levels who need profits, to PMs and POs who
need features, to engineers who need to do the actual work.&lt;/p&gt;

&lt;p&gt;When a CI system turns into a bowl of spaghetti, each little necessary change
becomes a nerve-racking, after-hours, signed-out-of-Slack maintenance window that
gives us all PTSD after the 30th time it goes south. What we are doing here is
implementing a semi-rigid structure for our system so that we can manage change
effectively while still moving fast.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/jenkins_oops.jpg&quot; alt=&quot;Jenkins Oops&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Let’s walk through some of the common Crappy Times at Jenkins High:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/cincinnati_time_waste.png&quot; alt=&quot;Cincinnati Time Waste&quot; class=&quot;image-right&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;A plugin with a broken dependency:&lt;/strong&gt; Instead of finding out after checking
‘restart Jenkins when done’ that a plugin can’t fully resolve its
dependencies, we will see it when we try to build the Docker image. It is still
non-optimal that it’s happening, but it is not a prod outage and builds are
still running, preventing a Cincinnati Time Waste tournament.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Rolling credentials for the CI Git user:&lt;/strong&gt; In the old days, this required
a ton of coordination in addition to an outage. We have not yet shown it, but
when your secrets are tied to the container, we can update all the
required credentials, roll the system, and get back at it.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;A job that broke for “no reason”:&lt;/strong&gt; It’s always unpleasant to be surprised
by a job that is just no longer going green. When we version all of our jobs,
plugins, and master configuration, bisecting what caused a failure
(or behavior change) becomes much simpler. We just go back to the point at
which the job was running and diff that environment against what is running today.
Since we’re able to run everything locally, it should be a fairly
straightforward process to replicate the problem on your laptop and lower your MTTR.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these problems are still going to occur, but what
we’re doing is pushing them from runtime down to build time. We want
to find these issues in advance, where they are not causing outages. We want to
be able to treat our pipelines, configuration, and infrastructure as code to avoid
the bowl of spaghetti that is fragile and unknown in nature. The teams that help
with the build system should not be called “10ft Pole” (my old team); they
should be called “Golden Retriever Puppies,” because everyone wants to play with us.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/golden_retrievers.jpg&quot; alt=&quot;Golden Retriever Puppies&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;in-conclusion&quot;&gt;In conclusion&lt;/h2&gt;

&lt;p&gt;In conclusion, I hope you are able to see how the beginnings of our system
lend themselves to a fully scalable solution: one that can handle
hundreds of builds, thousands of developers, and at least 10s of different
companies you’re going to work at :)&lt;/p&gt;

&lt;p&gt;If you don’t see it quite yet, then you’re going to have to trust me that we are
indeed doing this work for something and not for nothing. Anyways, no skin off
my nose if you don’t. Just keep typing, code monkey.&lt;/p&gt;

&lt;p&gt;In the next unit of this series we will begin configuring Jenkins. This will
allow you to begin making Jenkins do the stuff you need it to do. Stay tuned
for Unit 3 of Modern Jenkins: Programmatically Configuring Jenkins for Immutability with Groovy.&lt;/p&gt;

&lt;p&gt;The repo from this section can be found under the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit2-part5&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit2-part5&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit2-part5&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u3-p1-intro-jenkins-groovy-init-system/&quot;&gt;Next Post: The Jenkins Groovy init system (init.groovy.d)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;http://lmgtfy.com/?q=docker+compose&quot;&gt;http://lmgtfy.com/?q=docker+compose&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://docs.docker.com/compose/compose-file/&quot;&gt;https://docs.docker.com/compose/compose-file/&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://docs.docker.com/compose/compose-file/#service-configuration-reference&quot;&gt;https://docs.docker.com/compose/compose-file/#service-configuration-reference&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://docs.docker.com/compose/compose-file/#volume-configuration-reference&quot;&gt;https://docs.docker.com/compose/compose-file/#volume-configuration-reference&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://docs.docker.com/compose/compose-file/#network-configuration-reference&quot;&gt;https://docs.docker.com/compose/compose-file/#network-configuration-reference&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u2-p5-writing-docker-compose-file/&quot;&gt;Modern Jenkins Unit 2 / Part 5: Starting Jenkins with Docker Compose&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 27, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 2 / Part 4: The Jenkins Plugin Image]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u2-p4-building-jenkins-plugin-image/" />
  <id>https://cicd.life/u2-p4-building-jenkins-plugin-image</id>
  <published>2017-07-26T00:20:00-04:00</published>
  <updated>2017-07-26T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;the-plugins-image&quot;&gt;The plugins image&lt;/h1&gt;

&lt;p&gt;&lt;img src=&quot;/images/plugins_page.png&quot; alt=&quot;Jenkins Plugins&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You may have noticed that while we called the previous image &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jenkins-master&lt;/code&gt;,
we never did drop the war in it. In fact, the only reference to that war we’ve
seen is the very base image which sets a version, path, and a checksum. What’s the
reason for this madness?&lt;/p&gt;

&lt;p&gt;The answer is that the images we have built up until now are
only a runtime environment for this image. The master image (the one we just
built) will almost never change. When doing an upgrade, the war rarely has new
system requirements and rarely changes the directory structure or anything like
that.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/PluginsImage.png&quot; alt=&quot;Jenkins Plugins Contents&quot; class=&quot;image-right&quot; height=&quot;400px&quot; width=&quot;400px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;What does change from deployment to deployment is the set of plugins, the version
of the Jenkins war, and the configuration that interacts with those things. For
this reason I choose to run a vanilla Jenkins master container (with a few
environment-variable configs passed in) and a highly customized plugin
container. This plugin container is where the binaries live, and it is
volume-mounted by the master to provide the software itself.&lt;/p&gt;

&lt;p&gt;Let’s create it now and we can talk more about it after.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;imagesjenkins-pluginsdockerfile&quot;&gt;images/jenkins-plugins/Dockerfile&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# images/jenkins-plugins/Dockerfile&lt;/span&gt;
FROM modernjenkins/jenkins-base
MAINTAINER matt@notevenremotelydorky

LABEL &lt;span class=&quot;nv&quot;&gt;dockerfile_location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://github.com/technolo-g/modern-jenkins/tree/master/images/jenkins-plugins/Dockerfile &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;image_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-plugins &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;base_image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-base

&lt;span class=&quot;c&quot;&gt;# Add our plugin installation tool. Can be found here and is modified from the&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# upstream version.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-plugins/files/install-plugins.sh&lt;/span&gt;
ADD files/install-plugins.sh /usr/local/bin/

&lt;span class=&quot;c&quot;&gt;# Download the Jenkins war&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# JENKINS_URL, JENKINS_ROOT, JENKINS_WAR, and JENKINS_SHA are set in the parent&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_ROOT&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;/ref/warfile &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class=&quot;nt&quot;&gt;-fsSL&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_URL&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-o&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_WAR&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_SHA&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;  &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_WAR&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; | &lt;span class=&quot;nb&quot;&gt;sha256sum&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; - &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chown&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-R&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;:&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_ROOT&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# We will run all of this as the jenkins user, as dictated by the base image&lt;/span&gt;
USER &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Install our base set of plugins and their dependencies, as listed in&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# plugins.txt&lt;/span&gt;
ADD files/plugins.txt /tmp/plugins-main.txt
RUN install-plugins.sh &lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; /tmp/plugins-main.txt&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Export our war and plugin set as volumes&lt;/span&gt;
VOLUME /usr/share/jenkins/ref/plugins
VOLUME /usr/share/jenkins/ref/warfile

&lt;span class=&quot;c&quot;&gt;# It&apos;s easy to get confused when just a volume is being used, so let&apos;s just keep&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# the container alive for clarity. This entrypoint will keep the container&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# running for... infinity!&lt;/span&gt;
ENTRYPOINT &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;sleep&quot;&lt;/span&gt;, &lt;span class=&quot;s2&quot;&gt;&quot;infinity&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;You can see from the Dockerfile that this image is where the action is. We have
a similar set of metadata at the top, like the other images, then we add a file
named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;install-plugins.sh&lt;/code&gt;. This file is from the upstream Jenkins Docker image and
its purpose is to install a set of plugins as well as any dependencies they
have. It can be downloaded from the link provided in the Dockerfile &lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Then we go on to download the Jenkins war and check its SHA. If the SHA does
not match what we have in the base image, this step will fail and you know that
something is amiss. Since the version and the SHA are both set in the
very base image, they should always match; there is no legitimate scenario in
which the two differ.&lt;/p&gt;
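
&lt;p&gt;The verification idiom in that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RUN&lt;/code&gt; step is plain &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sha256sum -c&lt;/code&gt;, and you can try it on any local file. A quick
sketch, where &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;war.file&lt;/code&gt; is a stand-in for the downloaded jenkins.war:&lt;/p&gt;

```shell
# Create a stand-in for the downloaded war and record its checksum,
# playing the role of JENKINS_SHA from the base image.
echo "pretend this is jenkins.war" > war.file
expected=$(sha256sum war.file | awk '{print $1}')

# Verify it the same way the Dockerfile does. Note the two spaces
# between hash and filename; a tampered or truncated file makes
# sha256sum exit non-zero, failing the image build.
# prints: war.file: OK
echo "$expected  war.file" | sha256sum -c -
```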

&lt;p&gt;&lt;img src=&quot;/images/shasum.png&quot; alt=&quot;SHA 256 Sum&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Once the war and tools are installed, we can install our set of plugins. The
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;install-plugins.sh&lt;/code&gt; script needs the war to run, so now we should be ready.
In the background, this script interacts with the Jenkins
Update Center to attempt to install each plugin listed in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;plugins.txt&lt;/code&gt;.
It will reach out to download each plugin and check for any dependencies the
plugin may have. If there are any, it will download those, resolve transitive
deps, and so on until the full set of plugins defined by us is installed along
with any deps needed to function.&lt;/p&gt;
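
&lt;p&gt;The resolution loop is easy to picture as a depth-first walk over a dependency
graph. Here is a toy sketch of the idea: the plugin names and dependency edges
below are made up for illustration, and the real script gets this data from the
Update Center rather than a hard-coded table:&lt;/p&gt;

```shell
# Fake dependency table standing in for Update Center metadata.
deps_of() {
  case "$1" in
    github) echo "git plain-credentials" ;;
    git)    echo "scm-api" ;;
    *)      echo "" ;;
  esac
}

# Depth-first walk: skip plugins already seen, otherwise record the
# plugin and recurse into its dependencies.
resolve() {
  for p in "$@"; do
    case " $seen " in *" $p "*) continue ;; esac
    seen="$seen $p"
    resolve $(deps_of "$p")
  done
}

seen=""
resolve github greenballs
# prints: install set: github git scm-api plain-credentials greenballs
echo "install set:$seen"
```

&lt;p&gt;Asking for two plugins yields five installs, which is exactly why auditing
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;plugins.txt&lt;/code&gt; by hand gets hard without a resolver.&lt;/p&gt;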

&lt;p&gt;&lt;strong&gt;NOTE: This is different from the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;plugins.sh&lt;/code&gt; file that is out there. That
script will not resolve dependencies and makes it very hard to audit which
plugins you actually need.&lt;/strong&gt;&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Add plugin resolver&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-plugins
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; files/
wget &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; files/install-plugins.sh &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-plugins/files/install-plugins.sh
&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x files/install-plugins.sh

&lt;span class=&quot;c&quot;&gt;# Add a very base set of plugins to plugins.txt&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Add some credential storage&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;credentials&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; files/plugins.txt
&lt;span class=&quot;c&quot;&gt;# Enable GitHub interactions&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;github&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; files/plugins.txt
&lt;span class=&quot;c&quot;&gt;# Make our blue balls green&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;greenballs&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; files/plugins.txt
&lt;span class=&quot;c&quot;&gt;# Give us Groovy capabilities&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;groovy&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; files/plugins.txt&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;If you recall, we discussed that this image will only provide the software
itself while the Jenkins master image will provide the runtime. How that works is
that we export our plugins and warfile via the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VOLUME&lt;/code&gt; statements at the
bottom of this Dockerfile and mount them into the master via &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--volumes-from&lt;/code&gt;.
This makes our plugins image a fully contained, versionable bundle of the
master war and any plugins we need. A little later on, we will talk about how
to include your configuration as well.&lt;/p&gt;

&lt;p&gt;Finally we have the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ENTRYPOINT&lt;/code&gt;. This version is fairly simple:
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;sleep infinity&lt;/code&gt;.
This keeps the container running even though we do not have a
running process in it. Since this is only a container for our wars and JPIs,
it doesn’t need to run the JVM or anything like that; it only needs to provide
its exported volumes. If we were to omit the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ENTRYPOINT&lt;/code&gt;, everything would
still work as expected except that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jenkins-plugins&lt;/code&gt; container
would not be running.&lt;/p&gt;

&lt;p&gt;It would appear to be in a stopped state, which I find very confusing. The
container is being used by the master (by way of volumes), so it
is indeed in use. The fact that Docker shows it as stopped is misleading IMO,
so this simply props up the container for clarity.&lt;/p&gt;
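Pulling those two ideas together, a data-only plugins image could look roughly like the sketch below. This is a hedged illustration, not the repo&#8217;s actual Dockerfile; the base image and paths are assumptions:

```dockerfile
# Hypothetical sketch of a data-only plugins image (not the repo's actual file)
FROM centos:7

# Bake the warfile and the resolved plugins into the image
COPY files/jenkins.war /usr/share/jenkins/ref/warfile/jenkins.war
COPY files/plugins/    /usr/share/jenkins/ref/plugins/

# Export both directories so the master can mount them with --volumes-from
VOLUME /usr/share/jenkins/ref/warfile
VOLUME /usr/share/jenkins/ref/plugins

# No JVM needed here; just keep the container alive so it shows as running
ENTRYPOINT ["sleep", "infinity"]
```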

&lt;h2 id=&quot;building-the-image&quot;&gt;Building the image&lt;/h2&gt;

&lt;p&gt;Well, we’ve got another image to build, and I think by this time you know what
we’re going to do, and it’s not DRYing out our builders :P&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Warm up the copy machine...&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-plugins
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rp&lt;/span&gt; ../jenkins-master/build.sh &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
perl &lt;span class=&quot;nt&quot;&gt;-pi&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;s~jenkins-master~jenkins-plugins~g&apos;&lt;/span&gt; build.sh

&lt;span class=&quot;c&quot;&gt;# Build the image&lt;/span&gt;
./build.sh
&lt;span class=&quot;c&quot;&gt;# yay! It worked on the first try :trollface:&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;testing-the-image&quot;&gt;Testing the image&lt;/h2&gt;

&lt;p&gt;As we rafters say, the proof is at the put-in. Let’s give it a whirl!&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-2&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Start the plugins container first&lt;/span&gt;
docker container run &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; plugins &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; modernjenkins/jenkins-plugins

&lt;span class=&quot;c&quot;&gt;# Now fire up the master&lt;/span&gt;
docker container run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-ti&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--volumes-from&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;plugins &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 8080:8080 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  modernjenkins/jenkins-master
  
&lt;span class=&quot;c&quot;&gt;# Open the GUI&lt;/span&gt;
open http://localhost:8080&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;img src=&quot;/images/EmptyJenkins.png&quot; alt=&quot;Jenkins Home&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Would you look at that? Jenkins seems to be starting up swimmingly! If it
is not for you, try to debug what exactly is going wrong. There are a lot of
moving parts and this is a fairly complex system, so don’t feel bad; it happens
to all of us. Except those of us writing blog tutorials: we are always 100% right and
our instructions work 11/10 times, so you’re wrong and you should feel bad :P
Seriously though, if something is jacked up in these instructions, please use
your super Git skills and help a brotha out by submitting a PR to the repo.&lt;/p&gt;

&lt;hr /&gt;

&lt;p class=&quot;image-center&quot;&gt;&lt;img src=&quot;/images/CleanUpAfterUnicorn.png&quot; alt=&quot;Unicorn Cleanup&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Unicorn from: &lt;a href=&quot;http://sperlingsmaedchen.deviantart.com/art/unicorns-fart-rainbows-381339815&quot;&gt;http://sperlingsmaedchen.deviantart.com/art/unicorns-fart-rainbows-381339815&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;cleaning-up&quot;&gt;Cleaning up&lt;/h2&gt;

&lt;p&gt;After running tests like these, we definitely need to begin thinking about
cleanup. What would happen if we tried to run the same tests again right now?
Feel free to try it, but the secret is that it won’t work: we need to delete
the remnants of the previous test before starting another. I make it a habit
to ensure a clean environment before I run a test and to clean up
afterwards. The command I normally use for this is the equivalent of “nuke it
‘till it glows”: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker container rm -fv $(docker ps -qa)&lt;/code&gt;.
This little gem removes all containers, running or not, as well as any
volumes they may have created (you may want to read more about that; volumes
not in the state you thought they were in can ruin your day in lots of ways).&lt;/p&gt;
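If you run that a lot, it can be handy to wrap it in a small guard. This is a hedged sketch; the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;DOCKER&lt;/code&gt; variable is our own convention (not a Docker feature) so the function can be pointed at a stub for a daemon-free dry run:

```shell
#!/bin/bash
# "Nuke it 'till it glows" as a reusable function. DOCKER is an assumption
# of this sketch: it lets you swap in a stub or `echo` instead of the real
# client, e.g. for testing without a Docker daemon.
DOCKER="${DOCKER:-docker}"

nuke_containers() {
  local ids
  ids="$($DOCKER ps -qa)"
  if [ -z "$ids" ]; then
    echo "nothing to clean"
    return 0
  fi
  # -f force-removes running containers; -v also removes anonymous volumes
  $DOCKER container rm -fv $ids
}
```

Pointing `DOCKER` at a stub makes the function safe to exercise anywhere, which is handy when wiring cleanup into CI scripts.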

&lt;p&gt;One other thing you may notice is that no matter how diligent you are,
you’re developing a stack of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;none&amp;gt;&lt;/code&gt; images, weirdly named volumes, and orphaned
networks. This is normal cruft left behind while doing Docker development, and it
can be removed by using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker system prune&lt;/code&gt;. This will remove:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;all stopped containers&lt;/li&gt;
  &lt;li&gt;all volumes not used by at least one container&lt;/li&gt;
  &lt;li&gt;all networks not used by at least one container&lt;/li&gt;
  &lt;li&gt;all dangling images (&amp;lt;none&amp;gt;&apos;s)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NOTE: If you really want to clean up, add a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-a&lt;/code&gt; and it will also remove
images not attached to a running container. I find that annoying in
development, but it is handy in prod.&lt;/strong&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;table class=&quot;rouge-table&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class=&quot;gutter gl&quot;&gt;&lt;pre class=&quot;lineno&quot;&gt;1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
&lt;/pre&gt;&lt;/td&gt;&lt;td class=&quot;code&quot;&gt;&lt;pre&gt; 
&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;~&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-------------------------------------------------------------------------&lt;/span&gt; 🐳  &lt;span class=&quot;nb&quot;&gt;unset&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;matt.bajor&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
% docker &lt;span class=&quot;nb&quot;&gt;rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-fv&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;docker ps &lt;span class=&quot;nt&quot;&gt;-qa&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;
34b9692447f6
59b24f290270
&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;~&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-------------------------------------------------------------------------&lt;/span&gt; 🐳  &lt;span class=&quot;nb&quot;&gt;unset&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;matt.bajor&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
% docker system prune
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to &lt;span class=&quot;k&quot;&gt;continue&lt;/span&gt;? &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;y/N] y
Deleted Networks:
master_default

Deleted Images:
untagged: modernjenkins/jenkins-master@sha256:8f4b3bcad8f8aa3a26da394ce0075c631d311ece10cf7c23ce60058a9e47f6ed
deleted: sha256:96c78f549467f8b4697b73eddd9da299d8fd686696b45190a2bba24ad810529a
deleted: sha256:d1f38cb683287825bbf1856efdfaa87e2a7c279ceb793f9831b88b850ae1c9a0
deleted: sha256:5371c45cef2d3c5c468aae4fd5e93c335e8e681f2aa366f6122902c45e8ec9cb
deleted: sha256:079be452ec3e99b51a35b76e67b1bb3af649c3357e3ba05d2b6bd2a8127804b4
deleted: sha256:87baad26b39521ddd0d7b12ac46b2f92344f2f8ad34f0f35c524d5c0c566b409
deleted: sha256:c348763948964e1f63c427bea6b4d38c3a34403b61aee5e7b32059a3c095af32
deleted: sha256:6f92439bdac179e8c980dc6a7eb4f9647545e9c6d34d28edbba3c922efa9ea1e
deleted: sha256:edd5cbd4dc3cb3e9ab54bb1d7f446d5638c4543f04f2b63ae1a3e87a661be7a2
deleted: sha256:7890def677cf6649567c4355ef8f10c359f71c0ac9ca6ab94d8f359a5d57f84d
deleted: sha256:2704ec820811576ee2c60b8a660753939457f88fbe6938c2039489a6047ec59c
deleted: sha256:202acc3c794ce58a5e0b0e6b3285ab5ae27c641804c905a50b9ca7d5c601b2b3
deleted: sha256:70e19603643ce03f9cbff3a8837f1ebfb33fe13df7fba66c2501be96d9a2fb93
deleted: sha256:8e757cb858613c81e5fa8fb2426d22584539c163ce4ab66d6b77bd378ee2817a
deleted: sha256:18d1a064d790f3be371fef00813efe1c78996eab042977b952f4cbf067b846e8
deleted: sha256:bddcbf75436ff49e435fe3c371337b6b12ae125e68e0d833ac6180ffd82f34d9
deleted: sha256:f4dae60dcb2542e532eb05c94abba2da00d5a36360cb1d79cb32f87bf9b9c909
deleted: sha256:12f7c2589fdbb6e8b9ac78983511df70e9613c8da42edf23ee1cdb3599437233
deleted: sha256:26b155d41fabd6881f871945586c623a485688fc67f08223df288522f7aeed87
deleted: sha256:3a7c393698419b8f4f7a1464264459d2662d9015b3d577ad8cb12e0b4ae069a5
deleted: sha256:53794a3680b75ae98f70ab567db86a1b0e262615a8194bad534edfd5d8acc2f2
deleted: sha256:13449dedb3ec5df1f1b969aa2f1f93bb2a3bed2fb3ebe7279cce52b750696031
deleted: sha256:55aae84cda94b4611f73ec70b4cc1ea7ce4bbb77a1999b585fcc46c6239fe2a5
deleted: sha256:b41674288931c4e4bcd43e9fcc0d0af8d9ddd9a31f04506050ce0f0dfc59e3e3

Total reclaimed space: 313.9MB
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;commit-push-pr&quot;&gt;Commit, push, PR&lt;/h2&gt;

&lt;p&gt;You know the drill: integrate early, integrate often. Make sure you are actually
looking at the work you’re merging. After all, it has your public name on it twice.&lt;/p&gt;

&lt;p&gt;If you did get lost (I know I had to make a minor change to my base image), take
a look at the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit2-part4&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit2-part4&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit2-part4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u2-p5-writing-docker-compose-file/&quot;&gt;Next Post: Starting Jenkins with Docker Compose&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-plugins/files/install-plugins.sh&quot;&gt;https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-plugins/files/install-plugins.sh&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u2-p4-building-jenkins-plugin-image/&quot;&gt;Modern Jenkins Unit 2 / Part 4: The Jenkins Plugin Image&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 26, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 2 / Part 3: Building the Jenkins Master Image]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u2-p3-building-jenkins-master-image/" />
  <id>https://cicd.life/u2-p3-building-jenkins-master-image</id>
  <published>2017-07-25T00:20:00-04:00</published>
  <updated>2017-07-25T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;&lt;strong&gt;NOTE: Make sure you’re checking out a branch at the beginning of each
section!&lt;/strong&gt;&lt;/p&gt;

&lt;h1 id=&quot;building-our-master-image&quot;&gt;Building our master image&lt;/h1&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;/images/jenkins_master.jpg&quot; alt=&quot;Jenkins Master&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now that we have a good base to inherit from, we can begin building out the rest
of our images inheriting from that one. The next image we need is for the master.
This image won’t contain too much other than generic configuration and a couple
tools because we want our master image itself to be as generic as possible. The
customization of each provisioned Jenkins master consists of configuration and
plugins which we will package in a separate image. We will talk more about why
it’s broken down this way later on. For now, let’s take a look at what we have
for a Jenkins master image (&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;modernjenkins/jenkins-master&lt;/code&gt;):&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;imagesjenkins-masterdockerfile&quot;&gt;images/jenkins-master/Dockerfile&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# images/jenkins-master/Dockerfile&lt;/span&gt;
FROM modernjenkins/jenkins-base
MAINTAINER matt@notevenremotelydorky

LABEL &lt;span class=&quot;nv&quot;&gt;dockerfile_location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://github.com/technolo-g/modern-jenkins/tree/master/images/jenkins-master/Dockerfile &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;image_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-master &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;base_image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-base

&lt;span class=&quot;c&quot;&gt;# Jenkins&apos; Environment&lt;/span&gt;
ENV COPY_REFERENCE_FILE_LOG &lt;span class=&quot;nv&quot;&gt;$JENKINS_HOME&lt;/span&gt;/copy_reference_file.log

&lt;span class=&quot;c&quot;&gt;# `/usr/share/jenkins/ref/` contains all reference configuration we want &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# to set on a fresh new installation. Use it to bundle additional plugins &lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# or config file with your custom jenkins Docker image.&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; /usr/share/jenkins/ref/init.groovy.d

&lt;span class=&quot;c&quot;&gt;# Disable the upgrade banner &amp;amp; admin pw (we will add one later)&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;2.0 &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;2.0 &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_HOME&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;/jenkins.install.InstallUtil.lastExecVersion

&lt;span class=&quot;c&quot;&gt;# Fix up permissions&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;chown&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-R&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$JENKINS_HOME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; /usr/share/jenkins/ref

&lt;span class=&quot;c&quot;&gt;# Install our start script and make it executable&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# This script can be downloaded from&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-master/files/jenkins.sh&lt;/span&gt;
COPY files/jenkins.sh /usr/local/bin/jenkins.sh
RUN &lt;span class=&quot;nb&quot;&gt;chown &lt;/span&gt;jenkins /usr/local/bin/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x /usr/local/bin/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Make our jobs dir ready for a volume. This is where job histories&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# are stored and we are going to use volumes to persist them&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_HOME&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;/jobs &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chown&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;:&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;group&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_HOME&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;/jobs

&lt;span class=&quot;c&quot;&gt;# Install Docker (for docker-slaves plugin)&lt;/span&gt;
RUN yum-config-manager &lt;span class=&quot;nt&quot;&gt;--add-repo&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      https://download.docker.com/linux/centos/docker-ce.repo &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; yum makecache fast &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; yum &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; docker-ce &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; yum clean all &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Switch to the Jenkins user from now on&lt;/span&gt;
USER &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Configure Git&lt;/span&gt;
RUN git config &lt;span class=&quot;nt&quot;&gt;--global&lt;/span&gt; user.email &lt;span class=&quot;s2&quot;&gt;&quot;jenkins@cicd.life&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; git config &lt;span class=&quot;nt&quot;&gt;--global&lt;/span&gt; user.name &lt;span class=&quot;s2&quot;&gt;&quot;CI/CD Life Jenkins&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Main web interface and JNLP slaves&lt;/span&gt;
EXPOSE 8080 50000
ENTRYPOINT &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;/usr/local/bin/jenkins.sh&quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Looking at this Dockerfile, you may see a few new things like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;USER&lt;/code&gt; (will run
the commands after this declaration as the defined user) and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXPOSE&lt;/code&gt; (exposes
defined ports for binding to an outside port), but for the most part it’s very
similar to the previous one. Set a few &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ENV&lt;/code&gt; vars, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RUN&lt;/code&gt; a few commands etc.&lt;/p&gt;

&lt;p&gt;We need a build script so we’ll do the same thing that we did before (except
now we have the script in our repo) by creating a build.sh that can also push.
Let’s just duplicate this now:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-master
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rp&lt;/span&gt; ../jenkins-base/build.sh &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
perl &lt;span class=&quot;nt&quot;&gt;-pi&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;s~jenkins-base~jenkins-master~g&apos;&lt;/span&gt; build.sh&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Now we have a nice little build script for this image too. While a puppy might
have died when we copy/pasta’d, I didn’t hear it whimper.&lt;/p&gt;

&lt;p&gt;There is one more file that we need for this image and it’s the startup script.
Since the internet was generous enough to provide one, we should just use it.
This is the script that powers the official image and I’ve got a copy of it just
for you in my repo. To retrieve it, use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;wget&lt;/code&gt;:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-master
&lt;span class=&quot;nb&quot;&gt;mkdir &lt;/span&gt;files
wget &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; files/jenkins.sh &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-master/files/jenkins.sh
&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x files/jenkins.sh&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;build-the-image-and-test-it-out&quot;&gt;Build the image and test it out&lt;/h2&gt;

&lt;p&gt;Now that we’ve got all the files created that our image depends on, let’s build
and test it a bit.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-2&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Build it&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-master
./build.sh

&lt;span class=&quot;c&quot;&gt;# Run it&lt;/span&gt;
docker container run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-ti&lt;/span&gt; modernjenkins/jenkins-master bash
docker version

&lt;span class=&quot;c&quot;&gt;# You should see the Docker client version only&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;commit-push-pr&quot;&gt;Commit, push, PR&lt;/h2&gt;
&lt;p&gt;The master image seems to be gtg so let’s get it integrated. You may now be
seeing what we mean by ‘continuous integration’. Every time we have a small
chunk of usable work, we &lt;em&gt;integrate&lt;/em&gt; it into the master branch. This keeps
change sets small and makes it easier for everyone to incorporate the steady
stream of changes into their work without spending days in Git hell.&lt;/p&gt;

&lt;p&gt;You can compare your git tree to the state of mine at the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit2-part3&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit2-part3&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit2-part3&lt;/a&gt;
The Docker images are also available to pull if you don’t feel like building
them for some reason.&lt;/p&gt;

&lt;p&gt;Our next move will be to build the meat of our system: the plugins container.
&lt;img src=&quot;/images/awww_yeah.png&quot; alt=&quot;Awwww Yeaahhhhh&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u2-p4-building-jenkins-plugin-image/&quot;&gt;Next Post: Building the Jenkins Plugin image&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u2-p3-building-jenkins-master-image/&quot;&gt;Modern Jenkins Unit 2 / Part 3: Building the Jenkins Master Image&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 25, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 2 / Part 2: Building the Base Jenkins Image (and Intro to Docker)]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u2-p2-building-base-jenkins-docker-image/" />
  <id>https://cicd.life/u2-p2-building-base-jenkins-docker-image</id>
  <published>2017-07-24T00:20:00-04:00</published>
  <updated>2017-07-24T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;Note: &lt;strong&gt;I am assuming familiarity with Docker for this tutorial, but even if 
you’ve never used it I think it should still be possible to follow along. It
always helps to know your tools though so if you’re unfamiliar take some time to
do a Docker Hello World or the like. It will be worth your investment in time as
we will be using this technology throughout the tutorial. Everything we will do
is based on Docker for Mac which you can download here:
&lt;a href=&quot;https://www.docker.com/docker-mac&quot;&gt;https://www.docker.com/docker-mac&lt;/a&gt; Linux users
should be able to follow along without much adjustment too.&lt;/strong&gt;&lt;/p&gt;

&lt;h1 id=&quot;building-our-images&quot;&gt;Building our Images&lt;/h1&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;/images/under_construction.gif&quot; alt=&quot;Under Construction&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Well, I think we’re fully set up with a great foundation at this point. We have
a general spec for the system we would like to create; we have a nicely
organized code repository hosted on GitHub, set up with a Grade A PR template
that ensures we’re thinking about what we’re doing; and we have a workflow
that works for us and is reusable in any situation (nobody hates receiving PRs).
It is time to actually begin writing some code!&lt;/p&gt;

&lt;p&gt;Nearly every software vendor provides a Docker image for their piece of software,
which is super awesome when spiking things out or researching a new technology.
The reality, though, is that a lot of companies have a security requirement
that all software be vetted by the security team and then consumed from internal
repositories. These repositories are served up by tools such as Artifactory &lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and
feature built-in security scanning via Black Duck &lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, permission models that allow
only certain users to publish, and promotion mechanisms for getting only
verifiable software into the environment. Pulling Docker images straight off
the Hub does not fit into that model at all.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/artifactory_black_duck_small.png&quot; alt=&quot;Artifactory + Black Duck FTW&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For that reason, we are going to develop a set of our own images with a common
base. This gives us commonality between images, which has many benefits,
including the flexibility to add only the software we want. Our examples will use
public servers for all of this activity, but you can swap those URLs out for
the URLs of your own internal artifact repository.&lt;/p&gt;
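As a concrete (hedged) illustration of that substitution, mirroring a public image into an internal registry is typically a pull/tag/push. The helper below is our own invention, and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;registry.example.com&lt;/code&gt; is a made-up hostname:

```shell
#!/bin/bash
# Hypothetical helper: build an internal-registry reference for a public
# image. registry.example.com is a made-up hostname, not a real registry.
internal_ref() {
  local registry image
  registry="$1"
  image="$2"
  echo "${registry}/${image}"
}

# Typical mirror flow (shown as comments; requires a Docker daemon):
#   docker pull centos:7
#   docker tag centos:7 "$(internal_ref registry.example.com centos:7)"
#   docker push "$(internal_ref registry.example.com centos:7)"
internal_ref registry.example.com centos:7
```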

&lt;p&gt;While we don’t want to trust every Docker image that has been published, we do
have to start our chain of trust somewhere. In our case we will start with the
CentOS 7 base image from the Docker Hub. There are lots of other great options
out there, such as Alpine and Ubuntu, but I think CentOS is perfectly fine for
this application and is what I use on a daily basis due to certain requirements.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/centos_small.png&quot; alt=&quot;CentOS&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;modernjenkinsjenkins-base&quot;&gt;modernjenkins/jenkins-base&lt;/h2&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;/images/java_logo.png&quot; alt=&quot;Java (but not from you-know-who)&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This image contains purely the JDK. Since we decided to base this image on
CentOS (for security, support, compatibility, and reliability, to name a few
reasons), that is where our chain of trust begins. I personally have been
trusting CentOS DVDs for an extremely long time, so I feel confident they are a
good place to start. On top of the CentOS 7 base we will install the OpenJDK and
set up a few environment vars. Let’s show the whole file and then talk about
what each of the sections does.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;imagesjenkins-basedockerfile&quot;&gt;images/jenkins-base/Dockerfile&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# images/jenkins-base/Dockerfile&lt;/span&gt;
FROM centos:7
MAINTAINER matt@notevenremotelydorky

LABEL &lt;span class=&quot;nv&quot;&gt;dockerfile_location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://github.com/technolo-g/modern-jenkins/tree/master/images/jenkins-base/Dockerfile &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;image_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-base &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;base_image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;centos:7


&lt;span class=&quot;c&quot;&gt;# Jenkins&apos; Environment&lt;/span&gt;
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_ROOT /usr/share/jenkins
ENV JENKINS_WAR /usr/share/jenkins/ref/warfile/jenkins.war
ENV JENKINS_SLAVE_AGENT_PORT 50000
ENV &lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;jenkins
ENV &lt;span class=&quot;nv&quot;&gt;group&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;jenkins
ENV &lt;span class=&quot;nv&quot;&gt;uid&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;1000
ENV &lt;span class=&quot;nv&quot;&gt;gid&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;1000

&lt;span class=&quot;c&quot;&gt;# Jenkins Version info&lt;/span&gt;
ENV JENKINS_VERSION 2.69
ENV JENKINS_SHA d1ad00f4677a053388113020cf860e05a72cef6ee64f63b830479c6ac5520056

&lt;span class=&quot;c&quot;&gt;# These URLs can be swapped out for internal repos if needed. Secrets required may vary :)&lt;/span&gt;
ENV JENKINS_UC https://updates.jenkins.io
ENV JENKINS_URL http://mirrors.jenkins.io/war/&lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_VERSION&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;/jenkins.war

&lt;span class=&quot;c&quot;&gt;# Jenkins is run with user `jenkins`, uid = 1000&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# If you bind mount a volume from the host or a data container,&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# ensure you use the same uid&lt;/span&gt;
RUN groupadd &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;gid&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;group&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; useradd &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$JENKINS_HOME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;uid&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;group&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; /bin/bash &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Install our tools and make them executable&lt;/span&gt;
COPY files/jenkins-support /usr/local/bin/jenkins-support
RUN &lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_ROOT&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;chown &lt;/span&gt;jenkins /usr/local/bin/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;JENKINS_ROOT&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x /usr/local/bin/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Configure to Denver timezone. I dislike debugging failures in UTC&lt;/span&gt;
RUN &lt;span class=&quot;nb&quot;&gt;unlink&lt;/span&gt; /etc/localtime &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;ln&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; /usr/share/zoneinfo/America/Denver /etc/localtime

&lt;span class=&quot;c&quot;&gt;# Install Java, Git, and Unzip&lt;/span&gt;
RUN yum &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; java-1.8.0-openjdk-devel tzdata-java git unzip &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; yum clean all&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;The above Dockerfile will be our base image that everything else will inherit
from. While we are initially only creating a single Jenkins master, you may find
that others in your organization would like their own Jenkins instance and this
pattern ensures you’re ready for it without sacrificing readability. Now let’s
talk about what is in this Dockerfile.&lt;/p&gt;
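
&lt;p&gt;Inheriting from this base is then a one-liner in each downstream image. The
image name below is a hypothetical example of a second master for another
team:&lt;/p&gt;

```dockerfile
# Hypothetical child image: a second Jenkins master for another team,
# inheriting the JDK, user, and environment from the shared base
FROM modernjenkins/jenkins-base:latest
LABEL image_name=modernjenkins/team-b-jenkins-master
```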

&lt;h3 id=&quot;metadata&quot;&gt;Metadata&lt;/h3&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# images/jenkins-base/Dockerfile&lt;/span&gt;
FROM centos:7
MAINTAINER matt@notevenremotelydorky

LABEL &lt;span class=&quot;nv&quot;&gt;dockerfile_location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;https://github.com/technolo-g/modern-jenkins/tree/master/images/jenkins-base/Dockerfile &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;image_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-base &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
      &lt;span class=&quot;nv&quot;&gt;base_image&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;centos:7
      &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;This information is critical when tracking down an image’s source in the
supply chain, as well as for new contributors who want to change how the
container works.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;#&lt;/code&gt; comment at the top is just the path within the repo to the file itself&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;FROM&lt;/code&gt; defines the image that we are building on top of&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;MAINTAINER&lt;/code&gt; tells who the maintainer of this image is&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;LABEL&lt;/code&gt; section provides labels that can be accessed with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker inspect&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;environment&quot;&gt;Environment&lt;/h3&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Jenkins&apos; Environment&lt;/span&gt;
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_ROOT /usr/share/jenkins
...&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;These environment variables are values that we want permanently baked into
the image. They will be available in any container instantiated from this
image, or from any image that inherits from it. Baking in values like these
makes it easy to keep the whole environment consistent.&lt;/p&gt;
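
&lt;p&gt;One of these baked-in values, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;JENKINS_SHA&lt;/code&gt;, deserves a quick
illustration: it lets a downstream image verify the WAR it downloads against a
known checksum. Here is a minimal sketch of the mechanism using a stand-in
file (the real WAR and its published checksum come from the Jenkins mirrors):&lt;/p&gt;

```shell
# Stand-in for the downloaded WAR; in the real image this file would come
# from JENKINS_URL and the checksum would be the baked-in JENKINS_SHA
echo demo-war-contents | tee jenkins.war
expected_sha=$(sha256sum jenkins.war | awk '{print $1}')

# Verify the file against the expected checksum; prints "jenkins.war: OK"
echo "${expected_sha}  jenkins.war" | sha256sum -c -
```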

&lt;h3 id=&quot;files--commands-actually-doing-the-work&quot;&gt;Files &amp;amp; Commands (Actually doing the work)&lt;/h3&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;RUN groupadd &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;gid&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;group&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; useradd &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$JENKINS_HOME&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;uid&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-g&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;group&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; /bin/bash &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;user&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Install our tools and make them executable&lt;/span&gt;
COPY files/jenkins-support /usr/local/bin/jenkins-support
...&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;These steps actually modify our image by installing software, changing the
filesystem, adding files from the build context, and so on. They can use the
ENV vars set above as well as arguments passed in at build time. You can see
all of the available instructions here:
&lt;a href=&quot;https://docs.docker.com/engine/reference/builder/&quot;&gt;https://docs.docker.com/engine/reference/builder/&lt;/a&gt;&lt;/p&gt;
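
&lt;p&gt;As a quick, hypothetical fragment of the “arguments passed in” case (the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;TZ_REGION&lt;/code&gt; name is made up for illustration):&lt;/p&gt;

```dockerfile
# Illustrative only: a build-time argument with a default value, which a RUN
# step can consume; override it with --build-arg TZ_REGION=... at build time
ARG TZ_REGION=America/Denver
RUN ln -sf /usr/share/zoneinfo/${TZ_REGION} /etc/localtime
```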

&lt;h3 id=&quot;adding-the-jenkins-support-file-to-the-repo&quot;&gt;Adding the jenkins-support file to the repo&lt;/h3&gt;

&lt;p&gt;We depend on a file called jenkins-support to make things work correctly. It
is basically a shim to get Jenkins working properly within a Docker container.
It can be downloaded from my repo like so:&lt;/p&gt;

&lt;hr /&gt;
&lt;h3 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h3&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-base
&lt;span class=&quot;nb&quot;&gt;mkdir &lt;/span&gt;files
wget &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; files/jenkins-support https://raw.githubusercontent.com/technolo-g/modern-jenkins/master/images/jenkins-base/files/jenkins-support
&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x files/jenkins-support&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;notes-about-images&quot;&gt;Notes about images&lt;/h3&gt;

&lt;p&gt;Each line in a Dockerfile creates a layer, and all of these layers are
mushed together (or was it squished?) to make our root fs. This mushing process
is only ever additive, which means that if you create a big file in one
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RUN&lt;/code&gt; step but then remove it in another &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RUN&lt;/code&gt; step, you’re not actually going
to see any difference in image size. The key is finding the right balance
between the number of layers and the size of each layer. If we can keep layers
under 50MB but still split our logical process into easily understood,
intuitive blocks (e.g. doing a full yum transaction in one &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;RUN&lt;/code&gt; block), then we’re
sittin’ pretty.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;From the Docker website&lt;/em&gt;
&lt;img src=&quot;/images/container-layers.jpg&quot; alt=&quot;Docker Image Layers&quot; /&gt;&lt;/p&gt;

&lt;p&gt;There is so much more I would like to tell you about the image-creation best
practices I’ve found that I will have to save it for another post. For now,
just know that we &lt;strong&gt;can never delete data that was created in a previous layer.&lt;/strong&gt;
In practice, that means cleaning up after yourself within the same atomic
action. A real example is this:&lt;/p&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Install Java, Git, and Unzip then cleanup&lt;/span&gt;
RUN yum &lt;span class=&quot;nb&quot;&gt;install&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; java-1.8.0-openjdk-devel tzdata-java git unzip &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; yum clean all&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;building-the-image&quot;&gt;Building the image&lt;/h2&gt;

&lt;p&gt;Now that we have a super awesome Dockerfile, we need to build it. Normally I
would have you do &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker image build -t blah/boring .&lt;/code&gt; etc., but today I’m going to
set your future self up for a win. We’re going to write a script right off
the bat to build this thing. I promise you that you will be rebuilding this
image at least 2 more times so let’s just go ahead and script it from the get go.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;imagesjenkins-basebuildsh&quot;&gt;images/jenkins-base/build.sh&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash -el&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# images/jenkins-base/build.sh&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Define our image name&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;image_name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;modernjenkins/jenkins-base:latest

&lt;span class=&quot;c&quot;&gt;# Accept any args passed and add them to the command&lt;/span&gt;
docker image build &lt;span class=&quot;k&quot;&gt;${&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;@&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$image_name&lt;/span&gt; &lt;span class=&quot;si&quot;&gt;$(&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;dirname&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$0&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# If we add PUSH=true to the command, it will push to the hub&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PUSH&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;true&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
  &lt;/span&gt;docker image push &lt;span class=&quot;nv&quot;&gt;$image_name&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;This will not be the last time we see this lil’ guy, as we will add it to all
of the image repos. Some may say “That’s not DRY, Matt!”, to which I say
“Suck a lemon!”. This code will never change, and knowing that you can
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;cd images/blah &amp;amp;&amp;amp; ./build.sh&lt;/code&gt; makes it really easy and convenient to work with
these images. Now we run the script and out pops a baby Docker :)&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;images/jenkins-base
&lt;span class=&quot;nb&quot;&gt;chmod&lt;/span&gt; +x ./build.sh &lt;span class=&quot;c&quot;&gt;# Gotta set executable perms&lt;/span&gt;
./build.sh
&lt;span class=&quot;c&quot;&gt;# ...&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# profit!&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p class=&quot;image-left&quot;&gt;&lt;img src=&quot;/images/yey_small.png&quot; alt=&quot;yey&quot; /&gt;&lt;/p&gt;

&lt;p&gt;yey! You’ve built your first Docker image (for this project)!&lt;/p&gt;

&lt;h2 id=&quot;testing-the-image&quot;&gt;Testing the image&lt;/h2&gt;

&lt;p&gt;We can now go ahead and give this image a quick spin. It won’t be too exciting,
but we can probably run the standard test to see that Java is installed:&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-2&quot;&gt;PWD: ~/code/modern-jenkins&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Run the container and pop yourself into a shell&lt;/span&gt;
docker run &lt;span class=&quot;nt&quot;&gt;--rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-ti&lt;/span&gt; modernjenkins/jenkins-base bash

&lt;span class=&quot;c&quot;&gt;# Check for java&lt;/span&gt;
java &lt;span class=&quot;nt&quot;&gt;--version&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# damn&lt;/span&gt;
java &lt;span class=&quot;nt&quot;&gt;--help&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# ugh&lt;/span&gt;
java version
&lt;span class=&quot;c&quot;&gt;# wtf! oh right...&lt;/span&gt;
java &lt;span class=&quot;nt&quot;&gt;-version&lt;/span&gt;
openjdk version &lt;span class=&quot;s2&quot;&gt;&quot;1.8.0_141&quot;&lt;/span&gt;
OpenJDK Runtime Environment &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;build 1.8.0_141-b16&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
OpenJDK 64-Bit Server VM &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;build 25.141-b16, mixed mode&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;commit-and-push&quot;&gt;Commit and push&lt;/h2&gt;

&lt;p&gt;OK, so now that we have a nice working image, this seems like a perfect place to
call &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;shippable increment&lt;/code&gt;. Let’s commit to our branch, push it to origin, &lt;a href=&quot;/clean-up-your-git-history/&quot;&gt;clean
up any erroneous commits by squashing&lt;/a&gt;, and create a PR. We will then
self-review it, confirm everything looks up to snuff, and merge. Then a nice
little &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;git pull&lt;/code&gt; should get us all up to date
locally and we can begin work on the next increment.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-3&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;git checkout &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; feat-add_jenkins-base_image
git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Add a base image containing OpenJDK 8&quot;&lt;/span&gt;
git push origin feat-add_jenkins-base_image&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;moving-on&quot;&gt;Moving on&lt;/h2&gt;
&lt;p&gt;Now we’ll begin to build on top of our base image. If you need to see the repo’s
state at the end of this section, please refer to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit2-part2&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit2-part2&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit2-part2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u2-p3-building-jenkins-master-image/&quot;&gt;Next Post: Building the Jenkins Master Docker image&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.jfrog.com/artifactory/&quot;&gt;https://www.jfrog.com/artifactory/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.blackducksoftware.com/partners/jfrog&quot;&gt;https://www.blackducksoftware.com/partners/jfrog&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u2-p2-building-base-jenkins-docker-image/&quot;&gt;Modern Jenkins Unit 2 / Part 2: Building the Base Jenkins Image (and Intro to Docker)&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 24, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 2 / Part 1: GitHub + GitHub Flow]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u2-p1-designing-the-code-repository/" />
  <id>https://cicd.life/u2-p1-designing-the-code-repository</id>
  <published>2017-07-23T00:20:00-04:00</published>
  <updated>2017-07-23T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;storage--process&quot;&gt;Storage + Process&lt;/h1&gt;

&lt;p&gt;Now that we have a good idea of what the desired traits and abilities of our
system should be, we can begin to lay down the foundation. The very beginnings
of all projects start with a SCM (source control management) repository.&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/git_scm.png&quot; alt=&quot;GitHub Logo&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;multi-or-single-repo&quot;&gt;Multi or single repo?&lt;/h2&gt;

&lt;p&gt;We want our repo to be intuitively laid out as there is the chance that almost
every engineer in the company will need to interact with it at some point. This
means that we should separate concerns within the tree and make things easy
to find. In the past we tended to use a lot of separate repos and to treat them
as if they were fully independent and could be shared with any other team as is.
I am a fan of this idea for certain use cases, but what I have found is that it
makes things very hard to find on a system that you contribute to infrequently.
I prefer instead to start out with a mono-repo and, if there comes a time when
we need to break it up, do it then. Making a bunch of different repos for a
project like this right off the bat is premature optimization, IMO.&lt;/p&gt;

&lt;h2 id=&quot;note&quot;&gt;NOTE:&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;I’m going to treat this as a tutorial so you should be able to follow along by
running what’s in the code blocks&lt;/strong&gt;&lt;/p&gt;

&lt;h2 id=&quot;flesh-out-the-repo-structure&quot;&gt;Flesh out the repo structure&lt;/h2&gt;

&lt;p&gt;First let’s make sure that we have a repo to work on set up in GitHub. You
should be able to do this for free, as GitHub allows unlimited open source
projects. Once we get to the point of configuring our deployment, we will add
some encryption to cover the secret data.&lt;/p&gt;

&lt;h3 id=&quot;create-the-repo-and-populate-the-master-branch&quot;&gt;Create the repo and populate the master branch&lt;/h3&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/GithubOctocat_small.png&quot; alt=&quot;GitHub Logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Create an account on GitHub &lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; if you do not have one already. GitHub is the
world’s largest hub for sharing code, and lots of companies use it as their
primary SCM host via either the public or on-premise offering. We will be storing
everything we do in this tutorial on GitHub. You can sign up for free (no
credit card needed) here: &lt;a href=&quot;https://github.com/join&quot;&gt;https://github.com/join&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once signed up, follow the guides for adding an SSH key:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/&quot;&gt;https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and then creating your first public repo:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://help.github.com/articles/create-a-repo/&quot;&gt;https://help.github.com/articles/create-a-repo/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NOTE: Don’t check the “Initialize this repo with a README” box.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can name it anything you want, but I have named mine &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;modern-jenkins&lt;/code&gt;. Clone
this somewhere locally (I use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~/code&lt;/code&gt; for all of my repos) and let’s begin 
initializing the repo. I prefer to start with an empty commit when creating a
new repo as we can then PR everything that ever hits the master branch. You
can do this like so:&lt;/p&gt;

&lt;hr /&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Setup your Git author information first:&lt;/span&gt;
git config &lt;span class=&quot;nt&quot;&gt;--global&lt;/span&gt; user.email matt@notevenremotelydorky.com
git config &lt;span class=&quot;nt&quot;&gt;--global&lt;/span&gt; user.name &lt;span class=&quot;s2&quot;&gt;&quot;Matt Bajor&quot;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Navigate to your code directory&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/code

&lt;span class=&quot;c&quot;&gt;# I have created a Github project here:&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# https://github.com/technolo-g/modern-jenkins&lt;/span&gt;
git clone git@github.com:technolo-g/modern-jenkins.git
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;modern-jenkins

&lt;span class=&quot;c&quot;&gt;# Populate the master branch. This will be our first and ONLY commit directly to&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# master. Everything else will go through a Pull Request&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;--allow-empty&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Initial Commit&quot;&lt;/span&gt;
git push origin master&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;begin-with-our-github-flow-workflow&quot;&gt;Begin with our GitHub Flow workflow&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://guides.github.com/introduction/flow/&quot;&gt;https://guides.github.com/introduction/flow/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be using GitHub Flow &lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; to work with our repo. Visit the link if you
want to get a good idea of how it works, but the gist of it is that the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;master&lt;/code&gt;
branch is where stable code lives. It should &lt;strong&gt;always be releasable&lt;/strong&gt;. When we
want to make a change, we will create a feature branch, make changes, then Pull
Request &lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; the changes back into master. Since the branches only live until
the change is merged into master, they are considered to be “short lived”. There
are some other workflows, such as Git Flow &lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;, that instead rely on
“long lived” branches, but in my experience they can lead to more of a
headache with large, long-running projects.&lt;/p&gt;

&lt;p&gt;Our workflow will be:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Cut a feature branch off of master&lt;/li&gt;
  &lt;li&gt;Make changes to the branch to implement the feature&lt;/li&gt;
  &lt;li&gt;Push your changes to the remote (GitHub)&lt;/li&gt;
  &lt;li&gt;Create a Pull Request to merge your changes into master&lt;/li&gt;
  &lt;li&gt;(self-)Review the changes and comment and/or approve&lt;/li&gt;
  &lt;li&gt;Merge changes into master and delete the feature branch&lt;/li&gt;
  &lt;li&gt;Pull the new master locally&lt;/li&gt;
&lt;/ol&gt;
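
&lt;p&gt;The list above maps onto plain git commands. This sketch exercises steps 1
and 2 in a throwaway local repo (branch and file names are illustrative);
steps 3 through 6 happen against the real GitHub remote and UI:&lt;/p&gt;

```shell
# Throwaway repo so the flow can be tried locally without touching a remote
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "Your Name"
git commit -q --allow-empty -m "Initial Commit"

# 1. Cut a feature branch off of master
git checkout -q -b feat-my_change

# 2. Make changes to the branch to implement the feature
echo change | tee file.txt
git add file.txt
git commit -q -m "Implement my change"

# 3. Push your changes to the remote (commented out: no remote here)
# git push origin feat-my_change

git log --oneline -1
```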

&lt;h3 id=&quot;create-repo-skeleton&quot;&gt;Create repo skeleton&lt;/h3&gt;

&lt;p&gt;Let’s begin creating our repo!&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Starting from our empty master, checkout a feature branch&lt;/span&gt;
git checkout &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; feat-repo_skeleton

&lt;span class=&quot;c&quot;&gt;# Create a base readme&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;# Dockerized Jenkins Build System&apos;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; README.md

&lt;span class=&quot;c&quot;&gt;# Create a place for our images&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; images/ &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;# Docker Images&apos;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; images/README.md

&lt;span class=&quot;c&quot;&gt;# Create a place for our deployment config&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; deploy/ &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;# Deployment Scripts&apos;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; deploy/README.md

&lt;span class=&quot;c&quot;&gt;# A spot for the DSL&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;mkdir&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; dsl/ &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;# Jenkins Job DSL&apos;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; dsl/README.md&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;add-a-pull_request_template&quot;&gt;Add a PULL_REQUEST_TEMPLATE&lt;/h3&gt;

&lt;p&gt;Whenever we change the master branch we will be submitting a Pull Request
against the repo. GitHub gives you the ability to template the contents of
the Pull Request form. Since we’re probably working by ourselves here, you
may wonder “WTH Matt, why are we setting up collaboration tools when I’m clearly
doing this tutorial by myself?” There are a couple of reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Get used to the process:&lt;/strong&gt; As the system evolves and grows, more
and more people will be adding to this repo. If we start off pushing to
master, it’s much easier to continue that tradition and end up with
everyone pushing to master all willy nilly. If we start off with a robust
process, it has a habit of sticking around.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Fighting complacency:&lt;/strong&gt; Since we’ll be self-reviewing most of this code,
it can be really easy to just click ‘Merge’ without thinking about what you’re
doing. This PR template has a small checklist that will keep you honest
with yourself. If you know you have a tendency to skip over something,
go ahead and add that to the checklist too.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Change management:&lt;/strong&gt; Going from working app to working app requires
keeping an eye on what’s changing whenever it changes. When things do go
awry (and they will), PRs will help untangle the mess much quicker than a
steady stream of commits to the master branch. In theory, it is much easier
to tell when a PR breaks a repo than a single commit in the history.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Thinking about your work in chunks:&lt;/strong&gt; We really will be adding just a set
of commits to a repo to get to our final form, but if we treat our work
like that it’s never done. Instead, we should think about small chunks of
work that bring value and can be deployed as a whole. Agile calls this a
releasable increment. These chunks should make it easier to reason about
what impact the change may have.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even if I haven’t convinced you that this is important, I’m going to put it in
 a code block which will force your hand anyways. Ha!&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pull_request_templatemd&quot;&gt;PULL_REQUEST_TEMPLATE.md&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; &amp;gt; PULL_REQUEST_TEMPLATE.md
#### What does this PR do?

#### Why did you take this approach?

#### Contribution checklist

- [ ] The branch is named something meaningful
- [ ] The branch is rebased off of current master
- [ ] There is a single commit (or very few smaller ones) with a [Good commit message](https://github.com/torvalds/subsurface-for-dirk/blob/master/README#L92) that includes the issue if there was one
- [ ] You have tested this locally yourself


#### A picture of a cute animal (optional)
&amp;lt;img src=&quot;https://68.media.tumblr.com/7b36a31855ed619f91b8fc4416d0cafc/tumblr_inline_o6b4ngEE551sdwbtb_540.png&quot; width=&quot;350&quot;/&amp;gt;
EOF&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;integrating-our-changes-into-master&quot;&gt;Integrating our changes into master&lt;/h2&gt;

&lt;p&gt;Now that we have a layout we like and a fancy new PR template, let’s get them
into the master branch. We will do this via PR.&lt;/p&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-codemodern-jenkins-1&quot;&gt;PWD: ~/code/modern-jenkins/&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Add the changes to our branch&lt;/span&gt;
git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Create initial repo structure and boilerplate&quot;&lt;/span&gt;
git push origin feat-repo_skeleton&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;p&gt;Now that the changes are pushed up, browse to your repo in your favorite web
browser. Mine is located at
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins&quot;&gt;https://github.com/technolo-g/modern-jenkins&lt;/a&gt;.
You should see a green button that says “Compare &amp;amp; Pull Request”:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/Compare_PR.jpg&quot; alt=&quot;Compare &amp;amp; pull request&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Click that and
it will take you to a form that allows you to set the title of the PR as well as
a description. Enter both and click “Create Pull Request”. Feel free to describe
exactly what you’re doing and why. It’s good practice ;)&lt;/p&gt;

&lt;p&gt;Once it has been submitted, you can then view the changeset, comment on it if
you like (don’t feel bad talking to yourself), and approve or modify the PR.
Since this guy is pretty simple, take a quick look through your commit and make
sure there are no typos etc. If all looks good, give ’er an ‘LGTM’ in the
comments section and merge away.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: I always recommend disabling merge commits in a repo’s settings.&lt;/strong&gt; These
just muddy up the commit history; I prefer to use the
“Squash Merging” setting instead.
&lt;img src=&quot;https://cicd.life/images/squash_commits_github.jpg&quot; alt=&quot;Disable merge commits&quot; /&gt;
This will squash all commits in the PR down to
one and will allow you to edit the commit message before doing so. This really
makes rebases and other git surgery easier than when there are a ton of merge
commits to wade through. You can also do this before creating the PR if you like.
See my post here on &lt;a href=&quot;/clean-up-your-git-history/&quot;&gt;cleaning up your git history&lt;/a&gt;.&lt;/p&gt;
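&lt;p&gt;If you prefer to squash locally before even opening the PR, a soft reset does the trick. Here’s a minimal sketch in a throwaway repo (the repo and file names are made up for illustration):&lt;/p&gt;

```shell
# Make a scratch repo with one base commit plus three work-in-progress commits
git init demo-squash
cd demo-squash
git config user.email "dev@example.com"
git config user.name "Dev"
echo '# Demo' > README.md
git add README.md
git commit -m "Initial commit"
for i in 1 2 3; do echo "$i" >> notes.txt; git add notes.txt; git commit -m "wip $i"; done

# Squash: move the branch pointer back three commits but keep the work staged,
# then record it all as a single commit with a proper message
git reset --soft HEAD~3
git commit -m "Add notes"
git log --oneline
```

&lt;p&gt;After this, the log shows just the initial commit plus one clean “Add notes” commit, which is exactly the shape we want a PR branch to have.&lt;/p&gt;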

&lt;p&gt;Sweet! You have made your first change on a road of many and you have done it in
a very sustainable way that gives everyone working on the project context of
what you’re doing and why. Most importantly, you’ve given your future self some
really good reminders as to what you were thinking at the time the change was made.&lt;/p&gt;

&lt;h2 id=&quot;on-to-the-next-chapter&quot;&gt;On to the next chapter!&lt;/h2&gt;

&lt;p&gt;Now that we have our skeleton there, let’s begin hashing out the actual stuff in
the repo :) If you need it, the code from this part can be found under the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;unit2-part1&lt;/code&gt; tag here:
&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins/tree/unit2-part1&quot;&gt;https://github.com/technolo-g/modern-jenkins/tree/unit2-part1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u2-p2-building-base-jenkins-docker-image/&quot;&gt;Next Post: Building Jenkins’ base Docker image (and brief Intro to Docker)&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://github.com/join&quot;&gt;https://github.com/join&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://guides.github.com/introduction/flow/&quot;&gt;https://guides.github.com/introduction/flow/&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://help.github.com/articles/about-pull-requests/&quot;&gt;https://help.github.com/articles/about-pull-requests/&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://jeffkreeftmeijer.com/2010/why-arent-you-using-git-flow/&quot;&gt;https://jeffkreeftmeijer.com/2010/why-arent-you-using-git-flow/&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u2-p1-designing-the-code-repository/&quot;&gt;Modern Jenkins Unit 2 / Part 1: GitHub + GitHub Flow&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 23, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 1 / Part 2: Architecting a Jenkins + Docker CI System]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u1-p2-architecting-jenkins-docker-ci-system/" />
  <id>https://cicd.life/u1-p2-architecting-jenkins-docker-ci-system</id>
  <published>2017-07-22T00:20:00-04:00</published>
  <updated>2017-07-22T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;defining-the-architecture-requirements&quot;&gt;Defining the architecture requirements&lt;/h1&gt;

&lt;p&gt;We have defined the characteristics of our system so we know how we want it to
behave. In order to obtain that behavior though, we need to determine what
architectural features will get us there. Let’s go through our business 
requirements one by one and come up with some technical requirements to guide
us through the project.&lt;/p&gt;

&lt;h2 id=&quot;easy-to-develop&quot;&gt;Easy to develop&lt;/h2&gt;

&lt;p&gt;We need a way to make changes to the system that is easy to work with and as
simple as possible. Developing in production may seem easy at first, but over
time it creates a lack of trust in the system as well as a lot of downtime.
Instead, let’s implement these technical features to make changing this system
sustainable:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;New developers can get going by cloning the repo and running a command&lt;/li&gt;
  &lt;li&gt;Everything can be run locally for development purposes&lt;/li&gt;
  &lt;li&gt;Resetting to a known good state is quick and easy&lt;/li&gt;
&lt;/ol&gt;
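&lt;p&gt;For point 3, the repo side of “resetting to a known good state” can be as simple as discarding local edits and untracked files. A hedged sketch (the demo repo below stands in for a real checkout; a full reset script would also tear down local containers and volumes):&lt;/p&gt;

```shell
# Scratch repo standing in for a local checkout at a known good commit
git init demo-reset
cd demo-reset
git config user.email "dev@example.com"
git config user.name "Dev"
echo 'stable' > app.conf
git add app.conf
git commit -m "Known good state"

# Simulate local drift: an edited tracked file plus an untracked scratch file
echo 'experiment' >> app.conf
touch scratch.log

# Reset: restore tracked files to HEAD and remove untracked files
git checkout -- .
git clean -fd
```

&lt;p&gt;Note that &lt;code&gt;git checkout -- .&lt;/code&gt; only touches tracked files, which is why &lt;code&gt;git clean&lt;/code&gt; is needed for the untracked leftovers.&lt;/p&gt;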

&lt;h3 id=&quot;getting-started-workflow&quot;&gt;Getting started workflow&lt;/h3&gt;

&lt;hr /&gt;
&lt;h4 id=&quot;pwd-code&quot;&gt;PWD: ~/code&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Ok, let&apos;s get you setup FNG&lt;/span&gt;
git clone git@github.com:technolo-g/modern-jenkins.git

&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;modern-jenkins/deploy/master/
./start.sh

&lt;span class=&quot;c&quot;&gt;# You should be all set! You can begin development now.&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;safe-to-develop&quot;&gt;Safe to develop&lt;/h2&gt;

&lt;p&gt;The nature of a CI system is to verify that code exhibits the qualities we want
and expect, and then deliver it to production. This makes it inherently
dangerous: it is extremely powerful and programmed to ship software to
production. We need the ability to modify this system without accidentally
interfering with production. To accomplish this we will implement:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Splitting development and production credentials&lt;/li&gt;
  &lt;li&gt;Disabling potentially dangerous actions in development&lt;/li&gt;
  &lt;li&gt;Explicitly working with forks when possible&lt;/li&gt;
&lt;/ol&gt;
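&lt;p&gt;A cheap way to get the first two is a guard in the deploy scripts that swaps in inert values unless we are explicitly in production. The variable names and the fork URL below are illustrative, not part of the actual repo:&lt;/p&gt;

```shell
# DEPLOY_ENV defaults to development; production must be opted into explicitly.
# All names and URLs here are illustrative.
DEPLOY_ENV="${DEPLOY_ENV:-development}"

if [ "$DEPLOY_ENV" = "production" ]; then
  # Real upstream repo and live notifications
  GIT_REMOTE="git@github.com:technolo-g/modern-jenkins.git"
  NOTIFICATIONS_ENABLED="true"
else
  # Development: point at a fork and disable outbound notifications
  GIT_REMOTE="git@github.com:yourfork/modern-jenkins.git"
  NOTIFICATIONS_ENABLED="false"
fi

echo "remote=$GIT_REMOTE notifications=$NOTIFICATIONS_ENABLED"
```

&lt;p&gt;Defaulting to the safe branch of the conditional means forgetting to set the variable can never hit production by accident.&lt;/p&gt;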

&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/dev_prod_yml.jpg&quot; alt=&quot;Dev vs. Prod&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;consistently-deployable&quot;&gt;Consistently deployable&lt;/h2&gt;

&lt;p&gt;This system, when deployed to production, will be used by a large portion of
your development team. What this means is that downtime in the build system
== highly paid engineers playing chair hockey. We want our system to always be
deployable in case there is an issue with the running instance and also to be
able to quickly roll back if there is an issue with a new version or the like.
To make this happen we will need:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://imgs.xkcd.com/comics/compiling.png&quot; alt=&quot;Compiling&quot; class=&quot;image-right&quot; height=&quot;200px&quot; width=&quot;200px&quot; /&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A versioning system that keeps track of the last known good deployment&lt;/li&gt;
  &lt;li&gt;State persistence across deploys&lt;/li&gt;
  &lt;li&gt;A roll-forward process&lt;/li&gt;
  &lt;li&gt;A roll-back process&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;easy-to-upgrade&quot;&gt;Easy to upgrade&lt;/h2&gt;

&lt;p&gt;The nature of software development involves large amounts of rapid change.
Within our system this mostly consists of plugin updates, Jenkins war updates,
and system-level package updates. The smaller the change during an upgrade, the
easier it is to pinpoint and fix any problems that occur, and the way to keep
changes small is to upgrade frequently. To make this a no-brainer we will:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Build a new image on every change, pulling in new updates each time&lt;/li&gt;
  &lt;li&gt;Automatically resolve plugin dependencies&lt;/li&gt;
  &lt;li&gt;Smoke test upgrades before deployment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/upgrade_maintain_speed.jpg&quot; alt=&quot;Dev vs. Prod&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;programmatically-configurable&quot;&gt;Programmatically configurable&lt;/h2&gt;

&lt;p&gt;Hand configuring complex systems never leads to the same system twice. There are
lots of intricacies in the config, including order of operations, getting the
proper values, dealing with secrets, and wiring up many independent systems.
Keeping track of all this in a spreadsheet that we then manually enter into the
running instance will get it done, but it becomes pure nightmare mode after the
first person is done working on it. Luckily, with Jenkins’ built-in Groovy init
system &lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, we will be able to configure Jenkins with code in a much more
deterministic way. Most of this functionality is built in, but we will still
need:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/i_heard_you_like_ci.jpg&quot; alt=&quot;I heard you like CI/CD&quot; class=&quot;image-right&quot; height=&quot;300px&quot; width=&quot;300px&quot; /&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A process to develop, test, and deploy our Groovy configuration&lt;/li&gt;
  &lt;li&gt;A mechanism to deploy it along with the master&lt;/li&gt;
  &lt;li&gt;The ability to share these scripts across environments&lt;/li&gt;
&lt;/ol&gt;
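&lt;p&gt;As a small taste of what’s coming: Jenkins executes any Groovy scripts it finds in &lt;code&gt;$JENKINS_HOME/init.groovy.d/&lt;/code&gt; at startup. A minimal sketch of staging one such script follows — the file layout is illustrative, and later we’ll bake scripts like this into the image:&lt;/p&gt;

```shell
# Jenkins runs *.groovy files from $JENKINS_HOME/init.groovy.d/ at startup,
# in lexical order. Stage one locally; an image build would COPY it into place.
mkdir -p jenkins_home/init.groovy.d

{
  echo 'import jenkins.model.Jenkins'
  echo ''
  echo '// Runs during startup: keep builds off the master with zero executors'
  echo 'Jenkins.instance.setNumExecutors(0)'
  echo 'Jenkins.instance.save()'
} > jenkins_home/init.groovy.d/00-executors.groovy
```

&lt;p&gt;Because these scripts run inside the master’s JVM, they have full access to the Jenkins object model, which is what makes deterministic, code-driven configuration possible.&lt;/p&gt;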

&lt;h2 id=&quot;immutable&quot;&gt;Immutable&lt;/h2&gt;

&lt;p&gt;With a consistently deployable, programmatically configured system, it is almost
as easy to redeploy as it is to modify the running instance. It is indeed
easier to click a checkbox in the GUI to change a setting the first time,
but remembering to go check that box becomes harder the second, third, and
fourth times. For this reason, we are NEVER*
going to mutate our running instance. We will instead deploy a new version
of it. This is purely a process change, as the architectural
features listed above enable it. We should, however,
document this process so that everyone follows it.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Documentation of how to get changes into production&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;100-in-scm&quot;&gt;100% in SCM&lt;/h2&gt;

&lt;p&gt;Everything we’re doing is ‘as code’ so we will store it in the source repository
‘as code’. This will include master configuration, secrets, job configuration,
infrastructure provisioning and configuration, as well as anything else the
system needs to operate. To support this we’ll need:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A SCM repo (we’re using a Git repo hosted by GitHub)&lt;/li&gt;
  &lt;li&gt;A layout that supports:
    - Docker containers
    - Deployment tooling
    - Job DSL
    - Secrets&lt;/li&gt;
  &lt;li&gt;A mechanism for encrypting secrets in the repo&lt;/li&gt;
&lt;/ol&gt;
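&lt;p&gt;For the third point, one low-tech mechanism is symmetric encryption with &lt;code&gt;openssl&lt;/code&gt;, committing only the ciphertext. A hedged sketch — the inline passphrase is for demonstration only and would really come from outside the repo:&lt;/p&gt;

```shell
# Encrypt a secret before it ever touches git; only the .enc file is committed
mkdir -p secure
echo 'super-secret-token' > secret1.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo-passphrase \
  -in secret1.txt -out secure/secret1.enc
rm secret1.txt  # never commit the plaintext

# At deploy time, decrypt with the same passphrase
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  -in secure/secret1.enc -out secret1.txt
```

&lt;p&gt;The same pattern works for any file in &lt;code&gt;secure/&lt;/code&gt;; just be careful that the plaintext never gets &lt;code&gt;git add&lt;/code&gt;ed.&lt;/p&gt;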

&lt;h3 id=&quot;git-repo-diagram&quot;&gt;Git Repo Diagram&lt;/h3&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
├── README.md
├── ansible
│   ├── playbook.yml
│   └── roles
├── deploy
│   ├── master
│   │   └── docker-compose.yml
│   └── slaves
├── images
│   ├── image1
│   │   └── Dockerfile
│   └── image2
│       └── Dockerfile
└── secure
    ├── secret1
    └── secret2&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;secure&quot;&gt;Secure&lt;/h2&gt;

&lt;p&gt;This pipeline can be considered the production line at a factory. It is where
the individual components of your application are built, tested, assembled,
packaged, and deployed to your customers. Any system that performs these
processes should be heavily locked down, as it is integral to the
functioning of the company. Specifically, we should:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/datacenter_cages.jpg&quot; alt=&quot;Datacenter Cages&quot; class=&quot;image-right&quot; height=&quot;300px&quot; width=&quot;300px&quot; /&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Keep secrets secure and unexposed&lt;/li&gt;
  &lt;li&gt;Segregate actions by user (both machine and local)&lt;/li&gt;
  &lt;li&gt;Create an audit trail for any changes to the production system&lt;/li&gt;
  &lt;li&gt;Implement least-privilege access where necessary&lt;/li&gt;
  &lt;li&gt;Apply patches as soon as is feasible&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;scalable&quot;&gt;Scalable&lt;/h2&gt;

&lt;p&gt;If scalability is thought of from day 0 on a project, it is much easier to
implement than if it is bolted on later. These systems tend to grow over
time, and this growth should be able to happen freely. This is not always an easy
task, but we will attempt to make it easier by:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Keeping individual components de-coupled for independent scaling&lt;/li&gt;
  &lt;li&gt;Not locking ourselves into a single platform or provider&lt;/li&gt;
  &lt;li&gt;Monitoring the system to point out bottlenecks&lt;/li&gt;
&lt;/ol&gt;

&lt;p style=&quot;text-align: center&quot;&gt;Scaling to infinity… and beyond!
&lt;img src=&quot;/images/buzz_lightyear.jpg&quot; alt=&quot;Scaling to infinity&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now let’s start making some stuff!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u2-p1-designing-the-code-repository/&quot;&gt;Next Post: Code Repository and Development Lifecycle for a Jenkins + Docker CI/CD System&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://wiki.jenkins.io/display/JENKINS/Post-initialization+script&quot;&gt;https://wiki.jenkins.io/display/JENKINS/Post-initialization+script&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u1-p2-architecting-jenkins-docker-ci-system/&quot;&gt;Modern Jenkins Unit 1 / Part 2: Architecting a Jenkins + Docker CI System&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 22, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Modern Jenkins Unit 1 / Part 1: Planning a Jenkins + Docker CI System]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/u1-p1-planning-jenkins-docker-ci-infrastructure/" />
  <id>https://cicd.life/u1-p1-planning-jenkins-docker-ci-infrastructure</id>
  <published>2017-07-21T00:20:00-04:00</published>
  <updated>2017-07-21T00:20:00-04:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h2 id=&quot;what-are-we-doing-here&quot;&gt;What are we doing here?&lt;/h2&gt;

&lt;p&gt;I have been introduced to far more installations of Jenkins that suck than
don’t. One of the issues with a system of this level of complexity that is
shared between many teams is that they tend to grow organically over time.
Without much “oversight” by a person or team with a holistic view of the system
and a plan for growth, these build systems can turn into spaghetti
infrastructure the same way a legacy codebase can turn into spaghetti code.&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/fire-jenkins.png&quot; alt=&quot;Devil Jenkins!&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Eventually failures end up evenly dividing themselves amongst infrastructure,
bad commits, and flaky tests. Once this happens and trust is eroded, the only
plausible fix for most consumers is to re-run the build. This lack of trust
really makes it less fun to develop quality, reliable software. It’s very
important to be able to trust the systems you are using to verify your work,
and not the other way around. An error in a build should unambiguously point to
the code, not the system itself.&lt;/p&gt;

&lt;p&gt;While not all companies can afford to have a designated CI team supporting the
build infrastructure, it is possible to design the system initially (or
refactor) in a way that encourages good practice and stability thus elevating
developers’ confidence in the system and speed with which they can deliver new
features. This series of posts will hopefully be able to get you started in the
right direction when having to build or refactor a CI / CD system using Jenkins.&lt;/p&gt;

&lt;h2 id=&quot;describing-the-characteristics-of-the-end-state&quot;&gt;Describing the characteristics of the end state&lt;/h2&gt;

&lt;p&gt;I am a big fan of Agile software development&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; myself. I don’t believe that there
is one kind of ‘Agile’ that works for everyone or anything like that, but I do
believe 100% in the iterative approach agile takes to solve complex problems:
breaking work into small deliverables and using those chunks for planning at
multiple intervals such as:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Yearly: Very high level direction&lt;/li&gt;
  &lt;li&gt;Quarterly: General feature requirements&lt;/li&gt;
  &lt;li&gt;Sprint: Feature implementation details&lt;/li&gt;
&lt;/ul&gt;

&lt;p class=&quot;image-left&quot;&gt;&lt;img src=&quot;https://cicd.life/images/Agile.png&quot; alt=&quot;Agile Lifecycle&quot; /&gt;&lt;/p&gt;

&lt;p&gt;When you have multiple perspectives on the scope and movement of a project, it
really gives you the ability to manage expectations and make the most of your
time. Since you have a general idea of what the long-term goals are, you
(ideally) can then make trade-off decisions with accurate values on the scales.
This leads to fewer rewrites and the ability to put a bit more into the code you
write, because you know it will become legacy and you know Future You will
appreciate the effort.&lt;/p&gt;

&lt;p&gt;This is perhaps in opposition to the idea of hacking your way to a solution,
which is also a valuable process, but more for a problem with an undefined domain.
Luckily, shipping software has most of its problem domain defined, so we’re able to
set long-, medium-, and short-term goals for our CI / CD infrastructure.&lt;/p&gt;

&lt;p&gt;Anyways, let’s state some of the properties we want from our build system:&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/happy_dev.png&quot; alt=&quot;Super happy developer guy&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Easy to develop:&lt;/strong&gt; A common problem with complex systems is that the process of
setting up a local development environment is just as complex, if not more so.
We will be developing just about the entire system locally on our laptop so we
should hopefully get this for minimal cost.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Safe to develop:&lt;/strong&gt; Another property that goes hand in hand with easy to develop
is safe to develop. If our local environment is always reaching out to prod to
comment on PRs or perhaps pushing branches and sending Slack notifications, it
can be misleading and confusing to figure out where exactly these things are
coming from. We will attempt to null out any non-local side effects in our
development environment to provide a safe place to work.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Consistently deployable:&lt;/strong&gt; Hitting the button and getting an error during a
deployment is one of the worst feelings ever. In most cases when we need to
deploy, we need to deploy right now! While building this project we will
always keep master releasable and work with our automation in a way that no
matter when we need to deploy, it will go.&lt;/p&gt;

    &lt;p&gt;&lt;img src=&quot;https://cicd.life/images/deploy_never_fail.jpg&quot; alt=&quot;Roll safe my friend&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Easy to upgrade:&lt;/strong&gt; The best way to mitigate dangerous upgrades is to do them
more frequently. While this may sound counterintuitive, it works for a couple
of reasons:
    &lt;ol&gt;
      &lt;li&gt;The delta is smaller the more frequently you upgrade (and deploy!)&lt;/li&gt;
      &lt;li&gt;Each time you upgrade and deploy, you get more practice at it&lt;/li&gt;
      &lt;li&gt;If you fix a small error each upgrade, eventually most errors are fixed :)&lt;/li&gt;
    &lt;/ol&gt;

    &lt;p&gt;As we will see, what enables easy upgrades is a rollback safety net and a good
way to verify the changes are safe. Since we’ll never be able to catch all the
bugs, having the rollback as a backup tool is clutch.&lt;/p&gt;

    &lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/groovy_logo_small.png&quot; alt=&quot;It&apos;s Groovy baby, groovy yeah!&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Programmatically configurable:&lt;/strong&gt; This is a giant one. Humans are
really terrible at doing or creating the same thing over and over. This is way
worse once you have a group of humans trying to do the same thing over and
over (like creating jobs or configuring the master). Since we cannot be
trusted to click the right buttons in the right sequence, we need to make sure
the system does not depend on us doing that. We will cover a variety of tools
to configure our system including the Netflix Job DSL, Groovy configuration
via the Jenkins Init system, and later on Jenkinsfiles. &lt;strong&gt;There should be
nothing that is hand jammed!&lt;/strong&gt;&lt;/p&gt;

    &lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/revision_control_small.png&quot; alt=&quot;Revision Control like a boss&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Immutable:&lt;/strong&gt; “Immutable Infrastructure” &lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; is a term that was coined by
Chad Fowler back in 2013. He proposes that the benefits of immutability in
software architecture (like what is brought by functional programming) would
also apply to infrastructure. What this translates to is that instead of
mutating a running server by applying updates, rebooting, and changing
configuration on running machines, when changes need to be made we should just
redeploy a new set of servers with the updated code instead. This makes it
much easier to reason about what state a system is in. You will see that this
is one of the main characteristics that makes this Jenkins infrastructure different
(and I think much better) than a legacy installation.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;100% in SCM:&lt;/strong&gt; Since we are committing to programmatic configuration, it is
crucial that we keep everything within source control management. We should
apply the same rigor with our CI / CD system’s codebase that we do with the
one it is testing. PRs and automated tests should be the norm with
&lt;strong&gt;anything&lt;/strong&gt; you are creating, even if it is a one man show (IMHO).&lt;/p&gt;

    &lt;p class=&quot;image-left&quot;&gt;&lt;img src=&quot;https://cicd.life/images/secure_small.png&quot; alt=&quot;Secure&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Secure:&lt;/strong&gt; Security is too often an afterthought, which is a dangerous way to
work in a build system. These systems are so complex and so critical to the
company (they actually assemble and package the product) that we MUST make
them secure by default. Sane secrets management, least-privilege access,
immutable systems, and forcing an audit trail all lead to a more secure
and healthy environment.&lt;/p&gt;

    &lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/scalable_small.png&quot; alt=&quot;Scalable&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scalable:&lt;/strong&gt; Build systems at successful companies only grow, they do not shrink.
Teams are constantly adding more code, more tests, more jobs, and more
artifacts. Normally this occurs in a polyglot environment, leading to
exponential growth of requirements on the machines that are actually doing the
builds. It does not scale to have a single server with every single
requirement installed. This pattern soon becomes unwieldy and hard to work
with. We should containerize everything and separate the infrastructure
from the system running on top of it to allow independent and customized
scaling and maintenance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we can build a system that meets these requirements, we will get a lot of
stuff for free as well including:&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/moneymouthface.png&quot; alt=&quot;PROFIT&quot; /&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Easy disaster recovery&lt;/li&gt;
  &lt;li&gt;Portability to different service providers&lt;/li&gt;
  &lt;li&gt;Fun working environment&lt;/li&gt;
  &lt;li&gt;High levels of productivity&lt;/li&gt;
  &lt;li&gt;…&lt;/li&gt;
  &lt;li&gt;Profit!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;an-iterative-approach&quot;&gt;An iterative approach&lt;/h2&gt;

&lt;p&gt;Each iteration will build a shippable increment on top of the iteration before
it and eventually we will have a production Jenkins! Along the way we will learn
a bunch of stuff, build some Docker containers, write a compose file, automate
some configuration, get real groovy, and much, much more. I encourage you to stay
tuned and work through solving this problem with me.&lt;/p&gt;

&lt;p&gt;All code will be published to this blog’s git repo &lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; so that you can verify
your answers and fix inconsistencies if you get stuck along the way. The repo
should be tagged to match up with this series.&lt;/p&gt;

&lt;p&gt;Now that we have a general idea of how we want our system to behave and how we
will build it out, let’s dig into the architectural concerns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href=&quot;/u1-p2-architecting-jenkins-docker-ci-system/&quot;&gt;Next Post: Architecting a Jenkins + Docker CI System&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;http://agilemanifesto.org/&quot;&gt;http://agilemanifesto.org/&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;http://chadfowler.com/2013/06/23/immutable-deployments.html&quot;&gt;http://chadfowler.com/2013/06/23/immutable-deployments.html&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://github.com/technolo-g/modern-jenkins&quot;&gt;https://github.com/technolo-g/modern-jenkins&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/u1-p1-planning-jenkins-docker-ci-infrastructure/&quot;&gt;Modern Jenkins Unit 1 / Part 1: Planning a Jenkins + Docker CI System&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on July 21, 2017.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Intro to Docker Swarm: Part 4 - Demo]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/intro-to-docker-swarm-pt4-demo/" />
  <id>https://cicd.life/intro-to-docker-swarm-pt4-demo</id>
  <published>2015-01-29T00:00:00-05:00</published>
  <updated>2015-01-29T00:00:00-05:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h2 id=&quot;vagrant-up-up-and-away&quot;&gt;Vagrant up up and away!&lt;/h2&gt;

&lt;p&gt;The primary output of my hackweek endeavours was a Docker Swarm cluster in a
Vagrant environment. This post will go over how to get it spun up and then how
to interact with it.&lt;/p&gt;

&lt;h3 id=&quot;what-is-it&quot;&gt;What is it?&lt;/h3&gt;

&lt;p&gt;This is a fully functional Docker Swarm cluster contained within a Vagrant
environment. The environment consists of 4 nodes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;dockerhost01&lt;/li&gt;
  &lt;li&gt;dockerhost02&lt;/li&gt;
  &lt;li&gt;dockerhost03&lt;/li&gt;
  &lt;li&gt;dockerswarm01&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Docker nodes (dockerhost01-3) are running the Docker daemon as well as a couple of supporting
services. The main processes of interest on the Docker hosts are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Docker daemon: Running with a set of tags&lt;/li&gt;
  &lt;li&gt;Registrator daemon: This daemon connects to Consul in order to register and
de-register containers that have their ports exposed. The entries from this
service can be seen under the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/services&lt;/code&gt; path in Consul’s key/value store&lt;/li&gt;
  &lt;li&gt;Swarm client: The Swarm client is what maintains the list of Swarm nodes in
Consul. This list is kept under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt; and contains a list of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;ip&amp;gt;:&amp;lt;ports&amp;gt;&lt;/code&gt;
of the Swarm nodes participating in the cluster&lt;/li&gt;
&lt;/ul&gt;
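&lt;p&gt;If you want to poke at these entries yourself, they can be read through Consul’s
HTTP key/value API. This is a sketch that assumes Consul is reachable on its default
port 8500 on dockerswarm01:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# Against the running cluster (default Consul port assumed):
#   curl http://dockerswarm01:8500/v1/kv/swarm?recurse
# Values in the returned JSON are base64 encoded; decoding one
# recovers the ip:port record for a node:
echo MTAuMTAwLjE5OS4yMDE6MjM3Ng== | base64 --decode
# prints 10.100.199.201:2376&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;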

&lt;p&gt;The Docker Swarm node (dockerswarm01) is also running a few services. Since this
is just an example, a lot of services have been condensed into a single machine;
for production I would not recommend this exact layout.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Swarm daemon: Acting as master and listening on the network for Docker commands
while proxying them to the Docker hosts&lt;/li&gt;
  &lt;li&gt;Consul: A single node Consul instance is running. Its UI is available at
&lt;a href=&quot;http://dockerswarm01/ui/#/test/&quot;&gt;http://dockerswarm01/ui/#/test/&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Nginx: Proxying to Consul for the UI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;how-to-provision-the-cluster&quot;&gt;How to provision the cluster&lt;/h2&gt;

&lt;h4 id=&quot;1-setup-pre-requirements&quot;&gt;1. Set up prerequisites&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;The GitHub Repo: https://github.com/technolo-g/docker-swarm-demo&lt;/li&gt;
  &lt;li&gt;Vagrant (latest): https://www.vagrantup.com/downloads.html&lt;/li&gt;
  &lt;li&gt;Vagrant hosts plugin: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vagrant plugin install vagrant-hosts&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;VirtualBox: https://www.virtualbox.org/wiki/Downloads&lt;/li&gt;
  &lt;li&gt;Ansible: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;brew install ansible&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Host entries: Add the following lines to /etc/hosts:&lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;10.100.199.200 dockerswarm01
10.100.199.201 dockerhost01
10.100.199.202 dockerhost02
10.100.199.203 dockerhost03&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;2a-clone--vagrant-up-no-tls&quot;&gt;2a. Clone &amp;amp;&amp;amp; Vagrant up (No TLS)&lt;/h4&gt;
&lt;p&gt;This process may take a while and will download a few gigs of data. In this case
we are not using any TLS. If you want to use TLS with Swarm, go to 2b.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Clone our repo&lt;/span&gt;
git clone https://github.com/technolo-g/docker-swarm-demo.git
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;docker-swarm-demo

&lt;span class=&quot;c&quot;&gt;# Bring up the cluster with Vagrant&lt;/span&gt;
vagrant up

&lt;span class=&quot;c&quot;&gt;# Provision the host files on the vagrant hosts&lt;/span&gt;
vagrant provision &lt;span class=&quot;nt&quot;&gt;--provision-with&lt;/span&gt; hosts

&lt;span class=&quot;c&quot;&gt;# Activate your environment&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;source &lt;/span&gt;bin/env&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;2b-clone--vagrant-up-with-tls&quot;&gt;2b. Clone &amp;amp;&amp;amp; Vagrant up (With TLS)&lt;/h4&gt;
&lt;p&gt;This will generate certificates and bring up the cluster with TLS enabled.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Clone our repo&lt;/span&gt;
git clone https://github.com/technolo-g/docker-swarm-demo.git
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;docker-swarm-demo

&lt;span class=&quot;c&quot;&gt;# Generate Certs&lt;/span&gt;
./bin/gen_ssl.sh

&lt;span class=&quot;c&quot;&gt;# Enable TLS for the cluster&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;use_tls: True&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;docker_port: 2376&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ansible/group_vars/all.yml

&lt;span class=&quot;c&quot;&gt;# Bring up the cluster with Vagrant&lt;/span&gt;
vagrant up

&lt;span class=&quot;c&quot;&gt;# Provision the host files on the vagrant hosts&lt;/span&gt;
vagrant provision &lt;span class=&quot;nt&quot;&gt;--provision-with&lt;/span&gt; hosts

&lt;span class=&quot;c&quot;&gt;# Activate your TLS-enabled environment&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;source &lt;/span&gt;bin/env_tls&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
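&lt;p&gt;The env scripts simply point your local Docker client at the Swarm master. I have
not reproduced the repo’s files verbatim here, but sourcing &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bin/env_tls&lt;/code&gt;
likely amounts to setting the standard Docker client variables, roughly like this
(the cert path is an assumption):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# Sketch of what bin/env_tls amounts to; exact contents may differ
export DOCKER_HOST=tcp://dockerswarm01:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$PWD/.certs  # assumed location of the generated certs&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;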

&lt;h4 id=&quot;3-confirm-its-working&quot;&gt;3. Confirm it’s working&lt;/h4&gt;
&lt;p&gt;Now that the cluster is provisioned and running, you should be able to confirm it.
We’ll do that a few ways. First, let’s take a look with the Docker client:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;docker version
Client version: 1.4.1
Client API version: 1.16
Go version &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;client&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;: go1.4
Git commit &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;client&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;: 5bc2ff8
OS/Arch &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;client&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;: darwin/amd64
Server version: swarm/0.0.1
Server API version: 1.16
Go version &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;server&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;: go1.2.1
Git commit &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;server&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;: n/a

&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;docker info
Containers: 0
Nodes: 3
 dockerhost02: 10.100.199.202:2376
 dockerhost01: 10.100.199.201:2376
 dockerhost03: 10.100.199.203:2376&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now browse to Consul at &lt;a href=&quot;http://dockerswarm01/ui/#/test/kv/swarm/&quot;&gt;http://dockerswarm01/ui/#/test/kv/swarm/&lt;/a&gt;
and confirm that the Docker hosts are listed with their proper port like so:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/consul_swarm_cluster.png&quot; alt=&quot;Consul Swarm cluster&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The cluster seems to be alive, so let’s provision a (fake) app to it!&lt;/p&gt;

&lt;h2 id=&quot;how-to-use-it&quot;&gt;How to use it&lt;/h2&gt;
&lt;p&gt;You can now interact with the Swarm cluster to provision containers. The images in
this demo were pulled down during the Vagrant provision, so these commands
should work in order to spin up 2x external proxy containers and 3x internal
webapp containers. Two things to note about the commands:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The constraints need to match tags that were assigned when Docker was started.
This is how Swarm’s filter knows what Docker hosts are available for scheduling.&lt;/li&gt;
  &lt;li&gt;The SERVICE_NAME variable is set for Registrator. Since we are using a generic
container (nginx) we are instead specifying the service name in this manner.&lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# Primary load balancer&lt;/span&gt;
docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:zone&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;external &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:status&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;master &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;SERVICE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;proxy &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 80:80 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  nginx:latest

&lt;span class=&quot;c&quot;&gt;# Secondary load balancer&lt;/span&gt;
docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:zone&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;external &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:status&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;non-master &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;SERVICE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;proxy &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 80:80 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  nginx:latest

&lt;span class=&quot;c&quot;&gt;# 3 Instances of the webapp&lt;/span&gt;
docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:zone&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;internal &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;SERVICE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;webapp &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 80 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  nginx:latest

docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:zone&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;internal &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;SERVICE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;webapp &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 80 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  nginx:latest

docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:zone&lt;span class=&quot;o&quot;&gt;==&lt;/span&gt;internal &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;SERVICE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;webapp &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; 80 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  nginx:latest&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now if you do a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker ps&lt;/code&gt; or browse to Consul here:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://dockerswarm01/ui/#/test/kv/services/&quot;&gt;http://dockerswarm01/ui/#/test/kv/services/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/consul_services.png&quot; alt=&quot;Consul Swarm services&quot; /&gt;&lt;/p&gt;

&lt;p&gt;You can see the two services registered! Since the routing and service discovery
part is extra credit, this app will not actually work, but I think you get the
idea.&lt;/p&gt;

&lt;p&gt;I hope you have enjoyed this series on Docker Swarm. What I have discovered is
that Docker Swarm is a very promising application developed by a fast-moving
team of great developers. I believe that it will change the way we treat our
Docker hosts and will simplify things greatly when running complex applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All of the research behind these blog posts was made possible due to the
awesome company I work for: &lt;a href=&quot;https://www.rallydev.com/careers/open-positions&quot;&gt;Rally Software&lt;/a&gt;
in Boulder, CO. We get at least 1 hack week per quarter and it enables us to
hack on awesome things like Docker Swarm. If you would like to cut to the chase
and directly start playing with a Vagrant example, here is the repo that is the
output of my Q1 2014 hack week efforts:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/technolo-g/docker-swarm-demo&quot;&gt;https://github.com/technolo-g/docker-swarm-demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/intro-to-docker-swarm-pt4-demo/&quot;&gt;Intro to Docker Swarm: Part 4 - Demo&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on January 29, 2015.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Intro to Docker Swarm: Part 3 - Example Swarm SOA]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/intro-to-docker-swarm-pt3-example-architechture/" />
  <id>https://cicd.life/intro-to-docker-swarm-pt3-example-architechture</id>
  <published>2015-01-22T00:00:00-05:00</published>
  <updated>2015-01-22T00:00:00-05:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h1 id=&quot;a-docker-swarm-soa&quot;&gt;A Docker Swarm SOA&lt;/h1&gt;

&lt;p&gt;One of the most exciting things that Docker Swarm brings to the table is the
ability to create modern, resilient, and flexible architectures with very little
overhead. Being able to interact with a heterogeneous cluster of Docker hosts
as if it were a single host enables the existing toolchains in use today to
build everything we need to create a beautifully simple SOA!&lt;/p&gt;

&lt;p&gt;This article describes a full SOA architecture built
around Docker Swarm that has the following properties:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A hypervisor layer composed of individual Docker hosts (Docker/Registrator)&lt;/li&gt;
  &lt;li&gt;A cluster layer tying the Docker hosts together (Docker Swarm)&lt;/li&gt;
  &lt;li&gt;A service discovery layer (Consul)&lt;/li&gt;
  &lt;li&gt;A routing layer to direct traffic based off of the services in Consul (HAProxy / Nginx)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;hypervisor-layer&quot;&gt;Hypervisor Layer&lt;/h2&gt;

&lt;p&gt;The hypervisor layer is made up of a group of discrete Docker hosts. Each host
has the services running on it that allows it to participate in the cluster:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Docker daemon&lt;/strong&gt;: The Docker daemon is configured to listen on the network
port in addition to the local Linux socket so that the Swarm daemon can
communicate with it. In addition, each Docker host is configured to run with a
set of tags that work with Swarm’s scheduler to define where containers are
placed. The tags describe the Docker host and are where any identifying
information can be associated with it. This is an example of a
set of tags a Docker host would be started with:&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;zone: application/database&lt;/li&gt;
      &lt;li&gt;disk: ssd/hdd&lt;/li&gt;
      &lt;li&gt;env: dev/prod&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Swarm daemon&lt;/strong&gt;: The Swarm client daemon is run alongside the Docker daemon
in order to keep the node in the Swarm cluster. This Swarm daemon runs
in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;join&lt;/code&gt; mode and essentially heartbeats to Consul to keep its record updated
in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt; location. This record is what the Swarm master uses to create
the cluster. If the daemon were to die, the list in Consul should automatically be
updated to remove the node. The Swarm client daemon uses a path in
Consul like &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt;, which contains a list of the Docker hosts:&lt;/p&gt;

    &lt;p&gt;&lt;img src=&quot;https://cicd.life/images/consul/swarm_section.png&quot; alt=&quot;View of Swarm cluster in Consul&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Registrator daemon&lt;/strong&gt;: The Registrator app&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; is what will be updating Consul
when a container is created or destroyed. It listens to the Docker socket and
upon each event will update the Consul key/value store. For example, an app named
deepthought that requires 3 instances on separate hosts and that is running on
port 80 would create a structure in Consul like this:&lt;/p&gt;

    &lt;p&gt;&lt;img src=&quot;https://cicd.life/images/consul/services_section.png&quot; alt=&quot;View of Services in Consul&quot; /&gt;&lt;/p&gt;

    &lt;p&gt;The pattern being:&lt;/p&gt;

    &lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/services/&amp;lt;service&amp;gt;-&amp;lt;port&amp;gt;/&amp;lt;dhost&amp;gt;:&amp;lt;cname&amp;gt;:&amp;lt;cport&amp;gt;&lt;/code&gt; value: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;ipaddress&amp;gt;:&amp;lt;cport&amp;gt;&lt;/code&gt;&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;service: The name of the container’s image&lt;/li&gt;
      &lt;li&gt;port: The container’s exposed port&lt;/li&gt;
      &lt;li&gt;dhost: The Docker host that the container is running on&lt;/li&gt;
      &lt;li&gt;cname: The name Docker assigned to the container&lt;/li&gt;
      &lt;li&gt;cport: The container’s exposed port&lt;/li&gt;
      &lt;li&gt;ipaddress: The ipaddress of the Docker host running the container&lt;/li&gt;
    &lt;/ul&gt;

    &lt;p&gt;The output of a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker ps&lt;/code&gt; for the above service looks like so:&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ docker ps
CONTAINER ID        IMAGE                       COMMAND                CREATED             STATUS              PORTS                            NAMES
097e142c1263        mbajor/deepthought:latest   &quot;nginx -g &apos;daemon of   17 seconds ago      Up 13 seconds       10.100.199.203:49166-&amp;gt;80/tcp   dockerhost03/grave_goldstine
1f7f3bb944cc        mbajor/deepthought:latest   &quot;nginx -g &apos;daemon of   18 seconds ago      Up 14 seconds       10.100.199.201:49164-&amp;gt;80/tcp   dockerhost01/determined_hypatia
127641ff7d37        mbajor/deepthought:latest   &quot;nginx -g &apos;daemon of   20 seconds ago      Up 16 seconds       10.100.199.202:49158-&amp;gt;80/tcp   dockerhost02/thirsty_babbage&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
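&lt;p&gt;Applying that pattern to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker ps&lt;/code&gt;
listing above, the corresponding entries under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/services&lt;/code&gt;
would look something like this (derived from the listing for illustration, not copied from a live cluster):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# key                                                          value
/services/deepthought-80/dockerhost03:grave_goldstine:80       10.100.199.203:49166
/services/deepthought-80/dockerhost01:determined_hypatia:80    10.100.199.201:49164
/services/deepthought-80/dockerhost02:thirsty_babbage:80       10.100.199.202:49158&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;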

&lt;p&gt;This is the most basic way to record the services and locations. Registrator
  also supports passing metadata along with the container that includes key
  information about the service&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Another thing to mention is that the author of Registrator seems to intend
  the daemon to be run as a Docker container. Since a Docker Swarm cluster is
  meant to be treated as a single Docker host, I prefer the idea of running the
  Registrator app as a daemon on the Docker hosts themselves. This allows a state
  in which zero containers are running and the cluster is still alive.
  It seems like a very appropriate place to draw the line between platform and
  applications.&lt;/p&gt;

&lt;h2 id=&quot;cluster-layer&quot;&gt;Cluster Layer&lt;/h2&gt;

&lt;p&gt;At this layer we have the Docker Swarm master running. It is configured to read
from Consul’s key/value store under the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt; prefix, and it generates its
list of nodes from that information. It is also what listens for client connections
to Docker (create, delete, etc.) and routes those requests to the proper backend
Docker host. This means that it has the following requirements:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Listening on the network&lt;/li&gt;
  &lt;li&gt;Able to communicate with Consul&lt;/li&gt;
  &lt;li&gt;Able to communicate with all of the Docker daemons&lt;/li&gt;
&lt;/ul&gt;
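&lt;p&gt;As a concrete sketch, the pieces above can be wired together with the standalone
Swarm binary roughly like this. The flag syntax follows the early standalone Swarm
releases and the labels are examples, so treat it as a sketch rather than a
copy-paste recipe:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# On dockerswarm01: listen for Docker clients on 2375 and build the
# node list from Consul&apos;s /swarm prefix
swarm manage -H tcp://0.0.0.0:2375 consul://dockerswarm01:8500/swarm

# On each Docker host: listen on the network in addition to the local
# socket, and advertise tags for Swarm&apos;s scheduler
docker -d -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
  --label zone=internal --label disk=ssd --label env=dev&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;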

&lt;p&gt;&lt;em&gt;So far I have seen no mention of making the Swarm daemon itself HA, but
after working with it there really does not seem to be any reason that it could
not be. I expect that a load-balancing proxy with TCP support (HAProxy) could
be put in front of a few Swarm daemons with relative ease. Sticky sessions would
have to be enabled, and possibly an active/passive setup if there are state-synchronization
issues between multiple Swarm daemons, but it seems doable.
Since the containers continue to run and are accessible even in the case of a
Swarm failure, we are going to accept the risk of a non-HA Swarm node over the
complexity and overhead of load balancing the nodes. Tradeoffs, right?&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;service-discovery-layer&quot;&gt;Service Discovery Layer&lt;/h2&gt;

&lt;p&gt;The service discovery layer is run on a cluster of Consul nodes, specifically
its key/value store. In order to maintain quorum (n/2 + 1 nodes) even in the
case of a failure, there should be an odd number of nodes. Consul has a very
large feature set&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; including auto service discovery, health checking, and a
key/value store, to name a few. We are only using the key/value store, but I
would expect there are benefits to incorporating the other aspects of Consul
into your architecture. For this example configuration, the following processes
are acting on the key/value store:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The Swarm clients on the Docker hosts will be registering themselves in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;The Swarm master will be reading &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt; in order to build its list of Docker hosts&lt;/li&gt;
  &lt;li&gt;The Registrator daemon will be taking nodes in and out of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/services&lt;/code&gt; prefix&lt;/li&gt;
  &lt;li&gt;Consul-template will be reading the key/value store to generate the configs for the routing layer&lt;/li&gt;
&lt;/ul&gt;
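&lt;p&gt;As an aside, the quorum arithmetic mentioned above (n/2 + 1, using integer
division) shows why an even node count buys nothing: a fourth node raises the
quorum requirement without increasing the number of failures the cluster can
survive.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;# Quorum sizes for common cluster sizes (integer division)
echo $(( 3 / 2 + 1 ))   # 2: a 3-node cluster survives 1 failure
echo $(( 4 / 2 + 1 ))   # 3: a 4-node cluster still survives only 1
echo $(( 5 / 2 + 1 ))   # 3: a 5-node cluster survives 2&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;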

&lt;p&gt;This is the central datastore for all of the clustering metadata. Consul is what
ties the containers on the Docker hosts to the entries in the routing backend.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Consul also has a GUI that can be installed in addition to everything else, and
I highly recommend installing it for development work. It makes figuring out what
has been registered and where much easier. Once the cluster is up and running
you may have no more need for it, though.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;routing-layer&quot;&gt;Routing Layer&lt;/h2&gt;

&lt;p&gt;This is the edge layer that all external application traffic runs through.
These nodes sit on the edge of the Swarm cluster, are statically IP’d, and
have DNS entries that any service running on the cluster can be CNAME’d to.
These nodes listen on ports 80/443 etc. and have the following services running:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Consul-template: This daemon polls Consul’s key/value store (under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/services&lt;/code&gt;),
and when it detects a change, it writes a new HAProxy/Nginx config and
gracefully reloads the service. The templates are written in Go templating, and
the output should be in standard HAProxy or Nginx form.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;HAProxy or Nginx: Either of these servers is fully battle-proven and ready for
anything that is needed, even on the edge. The service is configured dynamically
by Consul-template and reloaded when needed. The most frequent change
is the modification of the list of backends for a particular vhost.
Since the list is maintained from what is actually alive and registered in Consul, it changes
as frequently as the containers do.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
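&lt;p&gt;To make the Consul-template half concrete, a fragment of an HAProxy backend
template over the key/value layout described earlier might look roughly like this
(the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;webapp-80&lt;/code&gt; path and
server naming are illustrative, not taken from the demo repo):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-plaintext&quot; data-lang=&quot;plaintext&quot;&gt;backend webapp
  balance roundrobin{{ range ls &quot;services/webapp-80&quot; }}
  server {{ .Key }} {{ .Value }} check{{ end }}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Each key under the prefix becomes a server line, so the backend list tracks the
containers that are actually alive.&lt;/p&gt;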

&lt;p&gt;This is a high level overview of a Docker Swarm cluster that is based on an SOA.
In the next post I will demonstrate a working infrastructure as described above
in a Vagrant environment. This post will be coming after our Docker Denver Meetup&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;
so stay tuned (or better yet, come to the Meetup for the live demo)!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All of the research behind these blog posts was made possible due to the
awesome company I work for: &lt;a href=&quot;https://www.rallydev.com/careers/open-positions&quot;&gt;Rally Software&lt;/a&gt;
in Boulder, CO. We get at least 1 hack week per quarter and it enables us to
hack on awesome things like Docker Swarm. If you would like to cut to the chase
and directly start playing with a Vagrant example, here is the repo that is the
output of my Q1 2014 hack week efforts:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/technolo-g/docker-swarm-demo&quot;&gt;https://github.com/technolo-g/docker-swarm-demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;https://github.com/progrium/registrator &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;https://github.com/progrium/registrator#single-service-with-metadata &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;http://www.consul.io/docs/index.html &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;http://www.meetup.com/Docker-Denver/events/218859311/ &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/intro-to-docker-swarm-pt3-example-architechture/&quot;&gt;Intro to Docker Swarm: Part 3 - Example Swarm SOA&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on January 22, 2015.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Intro to Docker Swarm: Part 2 - Configuration Options and Requirements]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/intro-to-docker-swarm-pt2-config-options-requirements/" />
  <id>https://cicd.life/intro-to-docker-swarm-pt2-config-options-requirements</id>
  <published>2015-01-19T00:00:00-05:00</published>
  <updated>2015-01-19T00:00:00-05:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h2 id=&quot;minimum-requirements-to-run-a-docker-swarm-cluster&quot;&gt;Minimum Requirements to run a Docker Swarm Cluster&lt;/h2&gt;

&lt;p&gt;The requirements for creating a Docker Swarm cluster are minimal indeed. In
fact, it is entirely feasible (though perhaps not best practice) to run the
Swarm daemon on an existing Docker host, making it possible to implement Swarm
without adding any more hardware or virtual resources. In addition, when running
the file- or node-based&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; discovery mechanism, no infrastructure other than
Docker itself is required to run a basic Docker Swarm cluster.&lt;/p&gt;

&lt;p&gt;I personally believe that spinning up another machine to run the Swarm master
itself is a good idea. The machine does not have to be heavy on resources, but
it does need a high file descriptor limit to handle all of the TCP connections
coming and going. In the examples, I use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dockerswarm01&lt;/code&gt; as a dedicated Swarm
master.&lt;/p&gt;
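&lt;p&gt;&lt;em&gt;As a rough sketch (the value and paths below are illustrative, not a recommendation), the file descriptor limit can be checked and raised before starting the daemon:&lt;/em&gt;&lt;/p&gt;

```shell
# Check the open-file limit of the shell that will launch the Swarm master
ulimit -n

# Raise it for this session (the value is illustrative; size it to the
# number of concurrent TCP connections you expect)
ulimit -n 65536

# To make it persistent, add entries to /etc/security/limits.conf such as:
#   *  soft  nofile  65536
#   *  hard  nofile  65536
```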

&lt;h2 id=&quot;configuration-options&quot;&gt;Configuration Options&lt;/h2&gt;

&lt;p&gt;There are a variety of configuration settings in Swarm that have sane defaults
but allow a lot of flexibility when it comes to running the daemon and its
supporting infrastructure. Listed below are the different categories of config
options and the ways each can be configured.&lt;/p&gt;

&lt;h3 id=&quot;discovery&quot;&gt;Discovery&lt;/h3&gt;

&lt;p&gt;Discovery is the mechanism Swarm uses in order to maintain the status of the
cluster. It can operate with a variety of backends, but it’s all pretty much
the same concept:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The backend maintains a list of Docker nodes that should be part of the cluster.&lt;/li&gt;
  &lt;li&gt;Using the list of nodes, Swarm healthchecks each one and keeps track of the
nodes that are in and out of the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;node-discovery&quot;&gt;Node Discovery&lt;/h4&gt;
&lt;p&gt;Node discovery requires that everything be passed in on the command line. This
is the most basic type of discovery mechanism, as it requires no maintenance of
config files or supporting services. An example startup command for the Swarm
daemon using node discovery would look like:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; dockerhost01:2375,dockerhost02:2375,dockerhost03:2375 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;0.0.0.0:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;file-discovery&quot;&gt;File Discovery&lt;/h4&gt;
&lt;p&gt;File discovery utilizes a configuration file placed on the filesystem
(e.g. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/swarm/cluster_config&lt;/code&gt;) with the format of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;IP&amp;gt;:&amp;lt;Port&amp;gt;&lt;/code&gt; to list the
Docker hosts in the cluster. Even though the list is static, healthchecking is
used to determine the set of healthy and unhealthy nodes and to filter requests
away from the unhealthy ones. An example of a file-based discovery startup line
and configuration file would be:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; file:///etc/swarm/cluster_config &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;0.0.0.0:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#/etc/swarm/cluster_config&lt;/span&gt;
dockerhost01:2375
dockerhost02:2375
dockerhost03:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;consul-discovery&quot;&gt;Consul Discovery&lt;/h4&gt;
&lt;p&gt;Consul discovery is also supported out of the box by Docker Swarm. It works by
utilizing Consul’s key/value store to keep the list of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;lt;IP&amp;gt;:&amp;lt;Port&amp;gt;&lt;/code&gt;’s used to
form the cluster. In this configuration mode, each Docker host runs a Swarm daemon in join mode
that is pointed at the Consul cluster’s HTTP interface. This adds a little
overhead to the configuration, runtime, and security of a Docker host, but not
a significant amount. The Swarm client would be fired up as such:&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/consul.png&quot; alt=&quot;Hashicorp Consul Logo&quot; /&gt;&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm &lt;span class=&quot;nb&quot;&gt;join&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; consul://consulhost01/swarm &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;c&quot;&gt;# This can be an internal IP as long as the other&lt;/span&gt;
  &lt;span class=&quot;c&quot;&gt;# Docker hosts can reach it.&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--addr&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;10.100.199.200:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The Swarm master then reads its host list from Consul. It would be run with a
startup line of:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; consul://consulhost01/swarm &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;0.0.0.0:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;&lt;em&gt;These key/value based configuration modes raise the question of how
healthchecks within Swarm work in combination with the Swarm client in join
mode. Since the list in the key/value store is itself dynamic, are the internal
Swarm healthchecks still required? I’m not familiar with that area of the
functionality and so can’t speak to it, but it’s worth noting.&lt;/em&gt;&lt;/p&gt;

&lt;h4 id=&quot;etcd-discovery&quot;&gt;EtcD Discovery&lt;/h4&gt;
&lt;p&gt;EtcD discovery works in much the same way as Consul discovery. Each Docker host
in the cluster runs a Swarm daemon in join mode pointed at an EtcD endpoint.
This provides a heartbeat to EtcD to maintain a list of active servers in the
cluster. A Docker host running the standard Docker daemon would concurrently
run a Swarm client with a configuration similar to:&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/etcd.png&quot; alt=&quot;EtcD Logo&quot; /&gt;&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm &lt;span class=&quot;nb&quot;&gt;join&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; etcd://etcdhost01/swarm &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--addr&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;10.100.199.200:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The Docker Swarm master would connect to EtcD, look at the path provided, and
generate its list of nodes by starting with the following command:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; etcd://etcdhost01/swarm &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;0.0.0.0:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;zookeeper-discovery&quot;&gt;Zookeeper Discovery&lt;/h4&gt;
&lt;p&gt;Zookeeper discovery follows the same pattern as the other key/value store based
configuration modes. A ZK ensemble is created to hold the host list information,
and a client runs alongside Docker in order to heartbeat in to the k/v store,
maintaining the list in near real-time. The Swarm master is also connected to
the ensemble and uses the information under &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/swarm&lt;/code&gt; to maintain its list of
hosts (which it then healthchecks).&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/apache.png&quot; alt=&quot;Apache Logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Swarm Client (alongside Docker):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm &lt;span class=&quot;nb&quot;&gt;join&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;c&quot;&gt;# All hosts in the ensemble should be listed&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; zk://zkhost01,zkhost02,zkhost03/swarm &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--addr&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;10.100.199.200:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Swarm Master:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; zk://zkhost01,zkhost02,zkhost03/swarm &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; 0.0.0.0:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;hosted-token-based-discovery-default&quot;&gt;Hosted Token Based Discovery (default)&lt;/h4&gt;
&lt;p&gt;I have not used this functionality and at this point have very little reason to.&lt;/p&gt;
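&lt;p&gt;&lt;em&gt;For completeness, a hosted token setup would look roughly like the sketch below. I have not run this mode myself; the flow is taken from the Swarm README of this era: &lt;code&gt;swarm create&lt;/code&gt; returns a cluster ID from Docker Hub’s hosted discovery service, which is then used in the token URL. The token shown is a made-up example.&lt;/em&gt;&lt;/p&gt;

```shell
# Generate a cluster ID via the hosted discovery service (sketch; untested)
swarm create
# The command prints a cluster ID, e.g. 6856663cdefc43b3976041d4d642cf40

# Each Docker host joins the cluster using that token
swarm join \
  --discovery token://6856663cdefc43b3976041d4d642cf40 \
  --addr=10.100.199.200:2375

# The Swarm master reads its host list from the same token
swarm manage \
  --discovery token://6856663cdefc43b3976041d4d642cf40 \
  -H=0.0.0.0:2375
```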

&lt;h3 id=&quot;scheduling&quot;&gt;Scheduling&lt;/h3&gt;
&lt;p&gt;Scheduling is the mechanism for choosing where a container should be created and
started. It is made up of a combination of a packing algorithm and filters (or
tags). Each Docker daemon is started with a set of tags like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;docker &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;storage&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ssd &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;zone&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;external &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--label&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;tier&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;data &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; tcp://0.0.0.0:2375&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Then, when a Docker container is started,
Swarm chooses a group of machines based on the filters and distributes
each run command according to its scheduler. Filters tell Swarm where a
container can and cannot run, while the scheduler places it amongst the available
hosts. There are a few filtering mechanisms:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Constraint&lt;/strong&gt;: This utilizes the tags that a Docker daemon was started with.
Currently it supports only ‘=’, but at some point in the future it may support ‘!=’.
A node must match all of the constraints provided by a container in order to
be eligible for scheduling. Starting a container with a few constraints would look
like:&lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-P&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:storage&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ssd &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; constraint:zone&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;external &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; nginx
  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Affinity&lt;/strong&gt;: Affinity can work in two ways: affinity to containers or affinity
to images. In order to start two containers on the same host the following
command would be run:&lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-P&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; nginx &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; nginx

   docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-P&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
     &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; mysql &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
     &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; affinity:container&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;nginx &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
     &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; mysql
  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Since Swarm does not handle image management, it is also possible to set
  affinity for an image. This means a container will only be started on a
  node that already contains the image. This negates the need to wait for an
  image to be pulled in the background before starting a container. An example:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;  docker run &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-P&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;--name&lt;/span&gt; nginx &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; affinity:image&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;nginx &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; nginx
  &lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Port&lt;/strong&gt;: The port filter will not allow any two containers with the same
static port mapping to be started on the same host. This makes a lot of sense,
as a port mapping cannot be duplicated on a single Docker host. For example, two containers
started with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-p 80:80&lt;/code&gt; will not be allowed to run on the same Docker host.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Healthy&lt;/strong&gt;: This prevents the scheduling of containers on unhealthy nodes.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
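&lt;p&gt;&lt;em&gt;To illustrate the port filter (mirroring the run commands above; container names are made up for the example):&lt;/em&gt;&lt;/p&gt;

```shell
# The first container claims port 80 on whichever host Swarm picks
docker run -d \
  -p 80:80 \
  --name web01 \
  -t nginx

# An identical static mapping cannot land on the same host: the port
# filter makes Swarm schedule this one on a different Docker host (or
# fail if no host with a free port 80 remains)
docker run -d \
  -p 80:80 \
  --name web02 \
  -t nginx
```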

&lt;p&gt;Once Swarm has narrowed the host list down to a set that matches the above
filters, it then schedules the container on one of the nodes. Currently the
following schedulers are built in:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Random&lt;/strong&gt;: Randomly distribute containers across available backends.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Binpacking&lt;/strong&gt;: Fill up a node with containers and then move to the next. This
  mode has the increased complexity of having to assign static resource amounts
  to each container at runtime. This means setting a limit on a container’s
  memory and cpu which may or may not seem OK. I personally like letting the
  containers fight amongst themselves to see who gets the resources.&lt;/li&gt;
&lt;/ul&gt;
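&lt;p&gt;&lt;em&gt;The strategy is chosen when the master is started. As a sketch (the &lt;code&gt;--strategy&lt;/code&gt; flag name is from the Swarm docs of this era; verify it against your version), a binpacking master and a container carrying the static resource limits that mode relies on would look like:&lt;/em&gt;&lt;/p&gt;

```shell
# Start the master with the binpacking strategy (flag name per the docs
# of this era; verify against your Swarm version)
swarm manage \
  --strategy binpacking \
  --discovery file:///etc/swarm/cluster_config \
  -H=0.0.0.0:2375

# Binpacking needs static resource amounts per container, e.g. a memory
# limit, so Swarm can tell how "full" each node is
docker run -d -P \
  -m 1g \
  -t nginx
```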

&lt;p&gt;In progress are the balanced strategy&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; and the ability to add Apache Mesos&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&quot;tls&quot;&gt;TLS&lt;/h3&gt;
&lt;p&gt;I am happy to say that Swarm works with TLS enabled. This secures the
connections both between the client and the Swarm daemon and between the Swarm
daemon and the Docker daemons. This is good because my security guy says that
there are no more borders in networks. Yay.&lt;/p&gt;

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/SSL-security.jpg&quot; alt=&quot;SSL Logo&quot; /&gt;&lt;/p&gt;

&lt;p&gt;It does require a full PKI including CA, but I have this solved in another post
already :) This is &lt;a href=&quot;/generate-ssl-for-docker-swarm/&quot;&gt;how to generate the required TLS certs for Docker and Swarm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the certificates have been generated and installed as per my other blog
post, the Docker and Swarm daemons can be fired up like this:&lt;/p&gt;

&lt;p&gt;Docker:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;docker &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlsverify&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscacert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/ca.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/dockerhost01-cert.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlskey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/private/dockerhost01-key.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; tcp://0.0.0.0:2376&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Swarm master:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlsverify&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscacert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/ca.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/swarm-cert.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlskey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/private/swarm-key.pem  &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; file:///etc/swarm_config &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; tcp://0.0.0.0:2376&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Then the client must know to connect via TLS. This is done with the following
environment variables:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DOCKER_HOST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;tcp://dockerswarm01:2376
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DOCKER_CERT_PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DOCKER_TLS_VERIFY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;1&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;You are now set up for TLS. WCGW?
&lt;img src=&quot;https://cicd.life/images/heartbleed.png&quot; alt=&quot;Heartbleed logo&quot; /&gt;&lt;/p&gt;
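&lt;p&gt;&lt;em&gt;A quick sanity check that TLS is actually in play: with the environment variables exported as shown earlier, any client command should round-trip through the encrypted port (hostnames and cert paths here match the earlier examples):&lt;/em&gt;&lt;/p&gt;

```shell
# With DOCKER_HOST, DOCKER_CERT_PATH, and DOCKER_TLS_VERIFY exported,
# the client negotiates TLS transparently
docker info

# Equivalently, everything can be passed on the command line
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H=tcp://dockerswarm01:2376 version
```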

&lt;h2 id=&quot;more-to-come&quot;&gt;More to come!&lt;/h2&gt;

&lt;p&gt;Well, there is a lot to talk about when it comes to configuring complex
clustered software, but I feel this is a good enough overview to get you up and
running and thinking about how to configure your Swarm cluster. In the next episode
I’ll lay out some example architectures for your Swarm cluster. Stay tuned and
please feel free to comment below!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All of the research behind these blog posts was made possible due to the
awesome company I work for: &lt;a href=&quot;https://www.rallydev.com/careers/open-positions&quot;&gt;Rally Software&lt;/a&gt;
in Boulder, CO. We get at least 1 hack week per quarter and it enables us to
hack on awesome things like Docker Swarm. If you would like to cut to the chase
and directly start playing with a Vagrant example, here is the repo that is the
output of my Q1 2014 hack week efforts:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/technolo-g/docker-swarm-demo&quot;&gt;https://github.com/technolo-g/docker-swarm-demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/docker/swarm/tree/master/discovery#using-a-static-list-of-ips&lt;/em&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/docker/swarm/pull/227&lt;/em&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/docker/swarm/issues/214&lt;/em&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/intro-to-docker-swarm-pt2-config-options-requirements/&quot;&gt;Intro to Docker Swarm: Part 2 - Configuration Options and Requirements&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on January 19, 2015.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Intro to Docker Swarm: Part 1 - Overview]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/intro-to-docker-swarm-pt1-overview/" />
  <id>https://cicd.life/intro-to-docker-swarm-pt1-overview</id>
  <published>2015-01-18T00:00:00-05:00</published>
  <updated>2015-01-18T00:00:00-05:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;h2 id=&quot;what-is-docker-swarm&quot;&gt;What is Docker Swarm?&lt;/h2&gt;

&lt;p&gt;Docker Swarm&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; is a utility that is used to create a cluster of Docker hosts
that can be interacted with as if it were a single host. I was introduced to it a
few days before it was announced at DockerCon EU&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; at the Docker Global Hack
Day&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; that I participated in at work. During the introduction to the hack day, a
few really cool new technologies were announced&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; including Docker Swarm,
Docker Machine, and Docker Compose. Since Ansible fills the role of Machine and
Compose, Swarm stuck out as particularly interesting to me.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://cicd.life/images/Docker_global_hack_day.jpg&quot; alt=&quot;Docker Global Hack Day 2014&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Victor Vieux and Andrea Luzzardi announced the concept and demonstrated
the basic workings of Swarm during the intros and made a statement that I found
to be very interesting. They said that though the POC (proof of concept) was
functional and able to demo, they were going to throw away all of that code and
start from scratch. I thought that was great and try to keep that in mind when
POC’ing a new technology.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The daemon is written in Go, and at this point in time &lt;a href=&quot;https://github.com/docker/swarm/commit/a0901ce8d679e3ff0e13eee61e99407b4436bebd&quot;&gt;latest commit a0901ce8d6&lt;/a&gt;
is definitely alpha software. Things are moving at a very rapid pace, and the
functionality and feature set vary almost daily. That being said,
&lt;a href=&quot;https://github.com/vieux&quot;&gt;@vieux&lt;/a&gt; is extremely responsive about adding
functionality and fixing bugs via GitHub Issues&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;. I would not recommend using
it in production yet, but it is a very promising technology.&lt;/p&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;//www.youtube.com/embed/EC25ARhZ5bI&quot; frameborder=&quot;0&quot;&gt; &lt;/iframe&gt;

&lt;h2 id=&quot;how-does-it-work&quot;&gt;How does it work&lt;/h2&gt;

&lt;p&gt;Interacting with and operating Swarm is (by-design) very similar to dealing with
a single Docker host. This allows interoperability with existing toolchains
without having to make too many modifications (the major ones being splitting
builds off of the Swarm cluster). Swarm is a daemon that is run on a Linux
machine bound to a network interface on the same port that a standalone Docker
instance (http/2375 or https/2376) would be. The Swarm daemon accepts connections
from the standard Docker client &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;&amp;gt;=1.4.0&lt;/code&gt; and proxies them back to the Docker
daemons configured behind Swarm which are also listening on the standard Docker
ports. It can distribute the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;create&lt;/code&gt; commands based on a few different packing
algorithms in combination with tags that the Docker daemons have been started
with. This makes the creation of a partitioned cluster of heterogeneous Docker
hosts that is exposed as a single Docker endpoint extremely simple.&lt;/p&gt;
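&lt;p&gt;&lt;em&gt;Because Swarm speaks the Docker API on the standard port, pointing an ordinary client at it is all that is needed (the hostname is the same placeholder used throughout this series):&lt;/em&gt;&lt;/p&gt;

```shell
# Any Docker client >= 1.4.0 can talk to the Swarm endpoint directly
export DOCKER_HOST=tcp://dockerswarm01:2375

# These commands are proxied by Swarm to the backend Docker daemons
docker info
docker ps
```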

&lt;p&gt;Interacting with Swarm is ‘more or less’ the same as interacting with a single
non-clustered Docker instance, but there are a few caveats. There is not 1:1
support for all Docker commands. This is due to both architectural and time-based
reasons. Some commands are just not implemented yet, and I would imagine some
might never be. Right now almost everything needed for running containers is
available, including (amongst others):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker create&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker inspect&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker kill&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker logs&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker start&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This subset is the essential part of what is needed to begin playing with the
tool at runtime. Here is an overview of how the technologies are used in the most
basic configuration:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The Docker hosts are brought up with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--label key=value&lt;/code&gt; listening on the network.&lt;/li&gt;
  &lt;li&gt;The Swarm daemon is brought up and pointed at a file containing a list of the
Docker hosts that make up the cluster as well as the ports they are listening on.&lt;/li&gt;
  &lt;li&gt;Swarm reaches out to each of the Docker hosts and determines their tags,
health, and amount of resources in order to maintain a list of the backends and
their metadata.&lt;/li&gt;
  &lt;li&gt;The client interacts with Swarm via its network port (2375). You interact
with Swarm the same way you would with Docker: create, destroy, run, attach,
and get logs of running containers, amongst other things.&lt;/li&gt;
  &lt;li&gt;When a command is issued to Swarm, Swarm:
    &lt;ul&gt;
      &lt;li&gt;decides where to route the command based off of the provided &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;constraint&lt;/code&gt;
tags, health of the backends, and the scheduling algorithm.&lt;/li&gt;
      &lt;li&gt;executes the command against the proper Docker daemon&lt;/li&gt;
      &lt;li&gt;returns the result in the same format as Docker does&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
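&lt;p&gt;As a hedged sketch of the discovery step above: the file handed to Swarm is
simply a list of the backend Docker hosts and the ports they listen on, one
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;host:port&lt;/code&gt; entry per
line (the path and addresses here are hypothetical):&lt;/p&gt;

```text
# /etc/swarm_config: one backend Docker host per line, as host:port
10.100.199.201:2376
10.100.199.202:2376
10.100.199.203:2376
```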

&lt;p class=&quot;image-right&quot;&gt;&lt;img src=&quot;https://cicd.life/images/Swarm_Diagram_Omnigraffle.jpg&quot; alt=&quot;Basic Docker Swarm Diagram&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The Swarm daemon itself is only a scheduler and a router. It does not actually
run the containers itself, meaning that if Swarm goes down, the containers it has
provisioned remain up on the backend Docker hosts. In addition, since Swarm doesn’t
handle any of the network routing (network connections need to be routed
directly to the backend Docker host), running containers will still be available
even if the Swarm daemon dies. When Swarm recovers from such a crash, it is able
to query the backends in order to rebuild its list of metadata.&lt;/p&gt;

&lt;p&gt;Due to the design of Swarm, interacting with it for all runtime activities is
just about the same as it would be for any other Docker daemon: via the Docker
client, docker-py, the docker-api gem, etc. Build commands have not yet been
figured out, but you can get by for runtime today. Unfortunately, at this exact
time Ansible does not seem to work with Swarm in TLS mode&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;, though the issue appears to affect the
Docker daemon itself, not just Swarm.&lt;/p&gt;

&lt;p&gt;This concludes the first post on Docker Swarm. I apologize for the lack of
technical detail, but it will come in subsequent posts in the form of
architectures, snippets, and some hands-on activities :) Look out for Part 2:
Docker Swarm Configuration Options and Requirements, coming soon!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All of the research behind these blog posts was made possible by the
awesome company I work for: &lt;a href=&quot;https://www.rallydev.com/careers/open-positions&quot;&gt;Rally Software&lt;/a&gt;
in Boulder, CO. We get at least one hack week per quarter, which enables us to
hack on awesome things like Docker Swarm. If you would like to cut to the chase
and start playing with a Vagrant example directly, here is the repo that is the
output of my Q1 2015 hack week efforts:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/technolo-g/docker-swarm-demo&quot;&gt;https://github.com/technolo-g/docker-swarm-demo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/docker/swarm&lt;/em&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;http://blog.docker.com/2015/01/dockercon-eu-introducing-docker-swarm/&lt;/em&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;http://www.meetup.com/Docker-meetups/events/148163592/&lt;/em&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;http://blog.docker.com/2014/12/announcing-docker-machine-swarm-and-compose-for-orchestrating-distributed-apps/&lt;/em&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/docker/swarm/issues&lt;/em&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/ansible/ansible/issues/10032&lt;/em&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/intro-to-docker-swarm-pt1-overview/&quot;&gt;Intro to Docker Swarm: Part 1 - Overview&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on January 18, 2015.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Creating SSL/TLS Certificates for Docker and Docker Swarm]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/generate-ssl-for-docker-swarm/" />
  <id>https://cicd.life/generate-ssl-for-docker-swarm</id>
  <published>2015-01-18T00:00:00-05:00</published>
  <updated>2015-01-18T00:00:00-05:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;Docker&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; has one of the most gentle learning slopes of a new technology to
enter the mainstream in a long time. A developer can get up and running in a very
short amount of time&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; and begin realizing value almost immediately with
Docker, but the hard part comes when trying to secure the new technology for
use in a production like environment. Production has a much higher standard when
it comes to availability, security, and repeatability. This can lead to problems
as the differences between the development and production environment are both:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Fairly complex to replicate in a secure way: It is not feasible to pass around
the private key for your production certificate in the name of development
environment automation. On the other hand, generating a full CA and all of the
required certificates and keys can be daunting.&lt;/li&gt;
  &lt;li&gt;Functionally quite a large delta: The difference between an insecure, non-TLS
environment and a TLS-enabled one can be significant. For instance, at this
exact point in time it appears that Ansible does not yet support a TLS-enabled
Docker host&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to attack this problem, we should replicate the production
environment whenever it is feasible, especially when doing so is easy and cheap. To this
end, let’s create the full certificate chain needed to run a secure Docker Swarm&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;
cluster. I think you will find that it is both easy and cheap :)&lt;/p&gt;

&lt;h2 id=&quot;the-script&quot;&gt;The script&lt;/h2&gt;

&lt;p&gt;This is a bash script I used&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; that will output everything into the
directory it is run from. It accomplishes the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Creates a Certificate Authority&lt;/li&gt;
  &lt;li&gt;Creates a cert/key for Docker Swarm (supporting both client and server auth)&lt;/li&gt;
  &lt;li&gt;Generates 3 certificates for the individual Docker hosts with SAN IPs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting a config is required because we need to add a SAN IP address entry
to the certificate and CSR. Without it, Swarm will spit out
the following error:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;ERRO[0282] Get https://10.100.199.201:2376/v1.15/info: x509: cannot validate certificate &lt;span class=&quot;k&quot;&gt;for &lt;/span&gt;10.100.199.201 because it doesn&lt;span class=&quot;s1&quot;&gt;&apos;t contain any IP SANs&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
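&lt;p&gt;To see the SAN requirement in isolation, here is a quick check of my own (it
is not part of the script below): generate a throwaway self-signed certificate
carrying an IP SAN and confirm the IP appears in the parsed certificate. The
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-addext&lt;/code&gt; flag requires
OpenSSL 1.1.1 or newer.&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical sanity check (not part of the original gen_ssl.sh): create a
# throwaway self-signed cert with an IP SAN, then confirm the SAN is present.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -days 1 -subj '/CN=dockerhost01' \
  -addext 'subjectAltName = IP:10.100.199.201' 2>/dev/null
# The text dump should now list the IP SAN that Swarm validates against:
openssl x509 -in "$dir/cert.pem" -noout -text | grep 'IP Address:10.100.199.201'
rm -rf "$dir"
```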

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; I am no SSL wizard, so some of the settings in the openssl.cnf
may be insecure, unneeded, or both. In addition, you can see that this setup is
totally insecure because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;the password is in the script&lt;/li&gt;
  &lt;li&gt;the passwords are removed from the keys&lt;/li&gt;
  &lt;li&gt;many other reasons&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Please don’t use this exact script or the generated certs for production use!&lt;/strong&gt;&lt;/p&gt;

&lt;h4 id=&quot;gen_sslsh&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gen_ssl.sh&lt;/code&gt;&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OPENSSL_CONF&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;openssl.cnf


&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;Creating CA (ca-key.pem, ca.pem)&apos;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;01 &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; ca.srl
openssl genrsa &lt;span class=&quot;nt&quot;&gt;-des3&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-passout&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; ca-key.pem 2048
openssl req &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
        &lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=Non-Prod Test CA/C=US&apos;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
        &lt;span class=&quot;nt&quot;&gt;-x509&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-key&lt;/span&gt; ca-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; ca.pem


&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;Creating Swarm certificates (swarm-key.pem, swarm-cert.pem)&apos;&lt;/span&gt;
openssl genrsa &lt;span class=&quot;nt&quot;&gt;-des3&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-passout&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; swarm-key.pem 2048
openssl req &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=dockerswarm01&apos;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-key&lt;/span&gt; swarm-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; swarm-client.csr
&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;extendedKeyUsage = clientAuth,serverAuth&apos;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; extfile.cnf
openssl x509 &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-req&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; swarm-client.csr &lt;span class=&quot;nt&quot;&gt;-CA&lt;/span&gt; ca.pem &lt;span class=&quot;nt&quot;&gt;-CAkey&lt;/span&gt; ca-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; swarm-cert.pem &lt;span class=&quot;nt&quot;&gt;-extfile&lt;/span&gt; extfile.cnf
openssl rsa &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; swarm-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; swarm-key.pem

&lt;span class=&quot;c&quot;&gt;# Set the default keys to be Swarm&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rp&lt;/span&gt; swarm-key.pem key.pem
&lt;span class=&quot;nb&quot;&gt;cp&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-rp&lt;/span&gt; swarm-cert.pem cert.pem

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;Creating host certificates (dockerhost01-3-key.pem, dockerhost01-3-cert.pem)&apos;&lt;/span&gt;
openssl genrsa &lt;span class=&quot;nt&quot;&gt;-passout&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-des3&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost01-key.pem 2048
openssl req &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=dockerhost01&apos;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-key&lt;/span&gt; dockerhost01-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost01.csr
openssl x509 &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-req&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; dockerhost01.csr &lt;span class=&quot;nt&quot;&gt;-CA&lt;/span&gt; ca.pem &lt;span class=&quot;nt&quot;&gt;-CAkey&lt;/span&gt; ca-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost01-cert.pem &lt;span class=&quot;nt&quot;&gt;-extfile&lt;/span&gt; openssl.cnf
openssl rsa &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; dockerhost01-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost01-key.pem

openssl genrsa &lt;span class=&quot;nt&quot;&gt;-passout&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-des3&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost02-key.pem 2048
openssl req &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=dockerhost02&apos;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-key&lt;/span&gt; dockerhost02-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost02.csr
openssl x509 &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-req&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; dockerhost02.csr &lt;span class=&quot;nt&quot;&gt;-CA&lt;/span&gt; ca.pem &lt;span class=&quot;nt&quot;&gt;-CAkey&lt;/span&gt; ca-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost02-cert.pem &lt;span class=&quot;nt&quot;&gt;-extfile&lt;/span&gt; openssl.cnf
openssl rsa &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; dockerhost02-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost02-key.pem

openssl genrsa &lt;span class=&quot;nt&quot;&gt;-passout&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-des3&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost03-key.pem 2048
openssl req &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=dockerhost03&apos;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-key&lt;/span&gt; dockerhost03-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost03.csr
openssl x509 &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-req&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; dockerhost03.csr &lt;span class=&quot;nt&quot;&gt;-CA&lt;/span&gt; ca.pem &lt;span class=&quot;nt&quot;&gt;-CAkey&lt;/span&gt; ca-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost03-cert.pem &lt;span class=&quot;nt&quot;&gt;-extfile&lt;/span&gt; openssl.cnf
openssl rsa &lt;span class=&quot;nt&quot;&gt;-passin&lt;/span&gt; pass:password &lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; dockerhost03-key.pem &lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; dockerhost03-key.pem

&lt;span class=&quot;c&quot;&gt;# We don&apos;t need the CSRs once the cert has been generated&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;rm&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;.csr&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
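&lt;p&gt;One way to verify the dual-auth step in the script above (a sketch of my
own, not part of the post’s workflow): repeat the
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;extendedKeyUsage&lt;/code&gt;
signing on throwaway keys and confirm that the signed certificate carries both
client and server authentication. Swarm needs both because it acts as a TLS
server toward clients and a TLS client toward the Docker hosts.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: reproduce the extendedKeyUsage signing step on throwaway keys and
# confirm both EKUs land in the signed certificate. All paths are temporary.
set -e
dir=$(mktemp -d)
cd "$dir"
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -days 1 -key ca-key.pem -subj '/CN=Non-Prod Test CA' -out ca.pem
openssl genrsa -out swarm-key.pem 2048 2>/dev/null
openssl req -new -key swarm-key.pem -subj '/CN=dockerswarm01' -out swarm-client.csr
echo 'extendedKeyUsage = clientAuth,serverAuth' > extfile.cnf
openssl x509 -req -days 1 -in swarm-client.csr -CA ca.pem -CAkey ca-key.pem \
  -set_serial 01 -out swarm-cert.pem -extfile extfile.cnf 2>/dev/null
# Both usages should appear under "X509v3 Extended Key Usage":
openssl x509 -in swarm-cert.pem -noout -text | grep 'TLS Web'
cd /
rm -rf "$dir"
```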

&lt;h4 id=&quot;opensslcnf&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openssl.cnf&lt;/code&gt;&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# OpenSSL example configuration file.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# This is mostly being used for generation of certificate requests.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# This definition stops the following lines choking if HOME isn&apos;t&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# defined.&lt;/span&gt;
HOME			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
RANDFILE		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$ENV&lt;/span&gt;::HOME/.rnd
oid_section		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; new_oids
extensions		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; v3_req

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; new_oids &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
tsa_policy1 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 1.2.3.4.1
tsa_policy2 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 1.2.3.4.5.6
tsa_policy3 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 1.2.3.4.5.7

&lt;span class=&quot;c&quot;&gt;####################################################################&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; ca &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
default_ca	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; CA_default		&lt;span class=&quot;c&quot;&gt;# The default ca section&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;####################################################################&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; CA_default &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;dir&lt;/span&gt;		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; ./tls		&lt;span class=&quot;c&quot;&gt;# Where everything is kept&lt;/span&gt;
certs		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/certs		&lt;span class=&quot;c&quot;&gt;# Where the issued certs are kept&lt;/span&gt;
crl_dir		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/crl		&lt;span class=&quot;c&quot;&gt;# Where the issued crl are kept&lt;/span&gt;
database	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/index.txt	&lt;span class=&quot;c&quot;&gt;# database index file.&lt;/span&gt;
new_certs_dir	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/newcerts		&lt;span class=&quot;c&quot;&gt;# default place for new certs.&lt;/span&gt;
certificate	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/cacert.pem 	&lt;span class=&quot;c&quot;&gt;# The CA certificate&lt;/span&gt;
serial		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/serial 		&lt;span class=&quot;c&quot;&gt;# The current serial number&lt;/span&gt;
crlnumber	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/crlnumber
crl		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/crl.pem 		&lt;span class=&quot;c&quot;&gt;# The current CRL&lt;/span&gt;
private_key	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/private/cakey.pem	&lt;span class=&quot;c&quot;&gt;# The private key&lt;/span&gt;
RANDFILE	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$dir&lt;/span&gt;/private/.rand	&lt;span class=&quot;c&quot;&gt;# private random number file&lt;/span&gt;
x509_extensions	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; usr_cert		&lt;span class=&quot;c&quot;&gt;# The extensions to add to the cert&lt;/span&gt;
name_opt 	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; ca_default		&lt;span class=&quot;c&quot;&gt;# Subject Name options&lt;/span&gt;
cert_opt 	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; ca_default		&lt;span class=&quot;c&quot;&gt;# Certificate field options&lt;/span&gt;
default_days	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 365			&lt;span class=&quot;c&quot;&gt;# how long to certify for&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;default_crl_days&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 30			&lt;span class=&quot;c&quot;&gt;# how long before next CRL&lt;/span&gt;
default_md	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; default		&lt;span class=&quot;c&quot;&gt;# use public key default MD&lt;/span&gt;
preserve	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; no			&lt;span class=&quot;c&quot;&gt;# keep passed DN ordering&lt;/span&gt;
policy		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; policy_match

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; policy_match &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
countryName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; match
stateOrProvinceName	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; match
organizationName	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; match
organizationalUnitName	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional
commonName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; supplied
emailAddress		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; policy_anything &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
countryName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional
stateOrProvinceName	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional
localityName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional
organizationName	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional
organizationalUnitName	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional
commonName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; supplied
emailAddress		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; optional

&lt;span class=&quot;c&quot;&gt;####################################################################&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; req &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
default_bits		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 1024
default_keyfile 	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; privkey.pem
distinguished_name	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; req_distinguished_name
attributes		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; req_attributes
x509_extensions	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; v3_ca	&lt;span class=&quot;c&quot;&gt;# The extensions to add to the self signed cert&lt;/span&gt;
string_mask &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; utf8only
req_extensions &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; v3_req &lt;span class=&quot;c&quot;&gt;# The extensions to add to a certificate request&lt;/span&gt;

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; req_distinguished_name &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
countryName			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Country Name &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;2 letter code&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
countryName_default		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; AU
countryName_min			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 2
countryName_max			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 2
stateOrProvinceName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; State or Province Name &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;full name&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
stateOrProvinceName_default	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Some-State
localityName			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Locality Name &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;eg, city&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
0.organizationName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Organization Name &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;eg, company&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
0.organizationName_default	&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Internet Widgits Pty Ltd
organizationalUnitName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Organizational Unit Name &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;eg, section&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
commonName			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Common Name &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;e.g. server FQDN or YOUR name&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
commonName_max			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 64
emailAddress			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; Email Address
emailAddress_max		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 64

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; req_attributes &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
challengePassword		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; A challenge password
challengePassword_min		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 4
challengePassword_max		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 20
unstructuredName		&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; An optional company name

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; usr_cert &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;basicConstraints&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;CA:FALSE
nsComment			&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;OpenSSL Generated Certificate&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;subjectKeyIdentifier&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hash
&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;authorityKeyIdentifier&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;keyid,issuer

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; v3_req &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Extensions to add to a certificate request&lt;/span&gt;
basicConstraints &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; CA:FALSE
keyUsage &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; nonRepudiation, digitalSignature, keyEncipherment
subjectAltName &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; @alt_names

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; v3_ca &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
subjectAltName &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; @alt_names
&lt;span class=&quot;nv&quot;&gt;subjectKeyIdentifier&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;hash
&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;authorityKeyIdentifier&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;keyid:always,issuer
basicConstraints &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; CA:true

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; crl_ext &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;authorityKeyIdentifier&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;keyid:always

&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt; alt_names &lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# The IPs of the Docker and Swarm hosts&lt;/span&gt;
IP.1 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 10.100.199.200
IP.2 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 10.100.199.201
IP.3 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 10.100.199.202
IP.4 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 10.100.199.203&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;installation&quot;&gt;Installation&lt;/h2&gt;

&lt;p&gt;Once you have generated the TLS keys and certificates they must be installed on
the target machine. I prefer to just copy the certificate and the key files into
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/pki/tls/certs/&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/pki/tls/private/&lt;/code&gt; respectively. Once they are
installed, you can then fire up your Docker and Swarm daemons like so:&lt;/p&gt;

&lt;h4 id=&quot;docker&quot;&gt;Docker&lt;/h4&gt;
&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;/usr/bin/docker &lt;span class=&quot;nt&quot;&gt;-d&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlsverify&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscacert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/ca.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/dockerhost01-cert.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlskey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/private/dockerhost01-key.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; tcp://0.0.0.0:2376
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;swarm&quot;&gt;Swarm&lt;/h4&gt;
&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;/usr/local/bin/swarm manage &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlsverify&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscacert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/ca.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlscert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/certs/swarm-cert.pem &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--tlskey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/etc/pki/tls/private/swarm-key.pem  &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;--discovery&lt;/span&gt; file:///etc/swarm_config &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  &lt;span class=&quot;nt&quot;&gt;-H&lt;/span&gt; tcp://0.0.0.0:2376
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;using-the-tls-enabled-docker-daemon&quot;&gt;Using the TLS-enabled Docker daemon&lt;/h2&gt;
&lt;p&gt;To use the Docker daemon now, you must present a client certificate that was
generated from the same CA as the certificate Docker/Swarm is using. We
generated one earlier, and more can be created if needed. Set the following
environment variables to tell the Docker client which TLS configuration to
use:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DOCKER_HOST&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;tcp://dockerswarm01:2376
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DOCKER_CERT_PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;&lt;span class=&quot;sb&quot;&gt;`&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;DOCKER_TLS_VERIFY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The Docker client can now communicate securely with Docker Swarm, and Docker
Swarm can communicate securely with the Docker nodes behind it.&lt;/p&gt;
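
&lt;p&gt;To confirm the TLS setup end to end, run a read-only command against the
Swarm endpoint with the environment variables above set. This is a sketch that
assumes the hostname &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;dockerswarm01&lt;/code&gt; matches the CN or a SAN in the
server certificate:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# A TLS error here usually means the CA, the hostname, or&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# DOCKER_CERT_PATH does not match what the daemon was started with.&lt;/span&gt;
docker info
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;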

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://www.docker.com&lt;/em&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;http://goo.gl/QlZ5qv&lt;/em&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/ansible/ansible/issues/10032&lt;/em&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/docker/swarm&lt;/em&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;em&gt;https://github.com/technolo-g/docker-swarm-demo/blob/master/bin/gen_ssl.sh&lt;/em&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/generate-ssl-for-docker-swarm/&quot;&gt;Creating SSL/TLS Certificates for Docker and Docker Swarm&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on January 18, 2015.&lt;/p&gt;</content>
</entry>


<entry>
  <title type="html"><![CDATA[Clean up your Git history]]></title>
 <link rel="alternate" type="text/html" href="https://cicd.life/clean-up-your-git-history/" />
  <id>https://cicd.life/clean-up-your-git-history</id>
  <published>2015-01-17T00:00:00-05:00</published>
  <updated>2015-01-17T00:00:00-05:00</updated>
  <author>
    <name>Matt Bajor</name>
    <uri>https://cicd.life</uri>
    <email>matt@notevenremotelydorky.com</email>
  </author>
  <content type="html">&lt;p&gt;We all like to keep our code looking as neat as possible, but sometimes you also
need to keep track of the small changes you’ve been making. A good way to ABC
(Always Be Committin’) is to work in a branch, hack your way through the
problem, and then clean up before submitting a PR.&lt;/p&gt;

&lt;h3 id=&quot;cut-a-new-branch&quot;&gt;Cut a new branch&lt;/h3&gt;
&lt;p&gt;What we are about to do can be destructive as well as confusing. With tasks
like that, it’s always nice to have a backup, so we will cut a new working
branch off of the branch that all of the nasty &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;more changes&lt;/code&gt; commits live
in:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git checkout &lt;span class=&quot;nt&quot;&gt;-b&lt;/span&gt; squashed_feature
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;rebase-from-master&quot;&gt;Rebase from master&lt;/h3&gt;
&lt;p&gt;This will give us a branch that we can then safely rebase from master. This
process will allow you to pick the commits you would like to squash and the ones
you would like to keep. You can do this by running:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git rebase &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
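
&lt;p&gt;A quick way to keep a feature branch rebased on master as you work, so that
this squash method stays available later, is to fetch master and rebase onto
it. This is a sketch that assumes your remote is named &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;origin&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Update your copy of master, then replay your commits on top of it&lt;/span&gt;
git fetch origin
git rebase origin/master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;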

&lt;h3 id=&quot;squash-commits&quot;&gt;Squash commits&lt;/h3&gt;
&lt;p&gt;If you have merged master into your branch during development, you will not
be able to use this method cleanly. I normally rebase my branch on master
instead of merging it in, but workflows vary. Once you begin the rebase, your
Git editor will open a file that looks like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;pick 3e33836 Initial commit of consul-template role
pick 7a09b99 remove boilerplate
pick 935e5e4 Default openssl.cnf
pick 6277156 Add all ips as SAN
pick 8cc1e89 Add SAN IP to certificates
pick e5aa77c Have to repro with hosts
pick 9819551 Change from 5 to 3 dockerhosts to reduce &lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Rebase d33881f..9819551 onto d33881f&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Commands:&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  p, pick = use commit&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  r, reword = use commit, but edit the commit message&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  e, edit = use commit, but stop for amending&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  s, squash = use commit, but meld into previous commit&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  f, fixup = like &quot;squash&quot;, but discard this commit&apos;s log message&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#  x, exec = run command (the rest of the line) using shell&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# These lines can be re-ordered; they are executed from top to bottom.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# If you remove a line here THAT COMMIT WILL BE LOST.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# However, if you remove everything, the rebase will be aborted.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;#&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# Note that empty commits are commented out&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;In this view the oldest commit is at the top and the most recent at the
bottom; commits are applied from top to bottom. To squash all of the commits
into a single one, leave the first ‘pick’ in place and change the rest to
‘squash’. Note: ‘s’ works as shorthand for ‘squash’.&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;pick 3e33836 Initial commit of consul-template role
s 7a09b99 remove boilerplate
s 935e5e4 Default openssl.cnf
s 6277156 Add all ips as SAN
s 8cc1e89 Add SAN IP to certificates
s e5aa77c Have to repro with hosts
s 9819551 Change from 5 to 3 dockerhosts to reduce &lt;span class=&quot;nb&quot;&gt;time&lt;/span&gt;

...&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;rewrite&quot;&gt;Rewrite&lt;/h3&gt;
&lt;p&gt;You want to squash these commits rather than delete their lines, because if
you remove a line itself, the actual commit is removed from history. After
saving and quitting, the editor reopens with the combined commit message:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;c&quot;&gt;# This is a combination of 7 commits.&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;# The first commit&apos;s message is:&lt;/span&gt;
Initial commit of consul-template role

&lt;span class=&quot;c&quot;&gt;# This is the 2nd commit message:&lt;/span&gt;

remove boilerplate

...&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;At this screen, remove all of the individual messages and replace them with
the single message you want for the squashed commit. Save and quit the editor,
and the rebase will complete.&lt;/p&gt;

&lt;p&gt;You can now push your branch up and open a PR!&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git push origin squashed_feature
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
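
&lt;p&gt;Since &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;squashed_feature&lt;/code&gt; is a brand new branch, a plain push works. If you
ever rewrite a branch that has already been pushed, you will need a force push;
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--force-with-lease&lt;/code&gt; is the safer option because it refuses to overwrite
commits on the remote that you have not seen:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Only needed when rewriting history on an already-pushed branch&lt;/span&gt;
git push &lt;span class=&quot;nt&quot;&gt;--force-with-lease&lt;/span&gt; origin squashed_feature
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;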

&lt;p&gt;&lt;a href=&quot;http://makandracards.com/makandra/527-squash-several-git-commits-into-a-single-commit&quot;&gt;Source&lt;/a&gt;&lt;/p&gt;

  &lt;p&gt;&lt;a href=&quot;https://cicd.life/clean-up-your-git-history/&quot;&gt;Clean up your Git history&lt;/a&gt; was originally published by Matt Bajor at &lt;a href=&quot;https://cicd.life&quot;&gt;CI/CD Life&lt;/a&gt; on January 17, 2015.&lt;/p&gt;</content>
</entry>

</feed>
