One-shot Turnup of HTTPS Jenkins on Docker


What’s that? You want to build using Jenkins, but it’s too much trouble? I might have a few tricks.

Our goal is to have a URL, on a domain you control, that can turn up docker images for builds. That “turn up docker images” part can itself be tricky if you read the various issues with access control on the /var/run/docker.sock pipe; I think I’ve got that solved. Notice, also: no custom port, but https.

The key, first of all, is a few services:

Jenkins itself

Well, actually, I use blueocean for the possible magic pipeline capability, but it’s Jenkins inside. I’d *like* to use Configuration-as-Code (JCasC), but I’m not there yet.

You need to choose a hostname in a domain you control, and A-record your DNS over to your new host. That record has to exist before the SSL certs for HTTPS can be issued, so go ahead and do that now so the change has time to propagate.

Go ahead, I can wait.
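To check whether the record has propagated, something like this works; jenkins.example.com is a hypothetical stand-in for your hostname, and dig is assumed to be installed (bind-utils or dnsutils, depending on your OS):

```shell
HOST=jenkins.example.com   # hypothetical; substitute your chosen hostname
# Ask DNS for the A record; empty output means it hasn't propagated yet.
addr=$(dig +short A "$HOST" 2>/dev/null | head -n1)
echo "A record for $HOST: ${addr:-not found yet}"
```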

Good. Now, install docker-compose; on the off-chance it doesn’t carry proper dependency info for the components you’ll also need, I installed docker-ce and docker-ce-cli before docker-compose. Your OS may call them different things.
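On a yum-based host, that installation might look like the following sketch; the package names and repositories are assumptions that vary per distro (docker-compose often comes from EPEL or pip, not the docker-ce repo):

```shell
# Sketch for a CentOS 7-style host; adjust package names for your OS.
sudo yum install -y docker-ce docker-ce-cli   # engine and CLI first
sudo yum install -y docker-compose            # then the compose tool
sudo systemctl enable --now docker            # make sure the engine runs at boot
```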

Your docker-compose.yaml should look like this:

version: '3'

services:
  blueocean:
    container_name: blueocean
    image: "chickenandporn/blueocean:latest"
    depends_on:
      - dockersock
    environment:
      - DOCKER_HOST=tcp://dockersock:2375
      - VIRTUAL_HOST=jenkins.example.com      # your chosen hostname
      - VIRTUAL_PORT=8080
      - LETSENCRYPT_HOST=jenkins.example.com  # same hostname again
      - LETSENCRYPT_EMAIL=you@example.com     # your contact email for letsencrypt
    ports:
      - "10088:8080"
      - "32768:50000"
    restart: unless-stopped
    volumes:
      - jenkins-data:/var/jenkins_home

You may notice: no privileged status, and no docker sock mapped into the container. I’ll get to that. Make sure both the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables are set to the same value, the hostname you chose. It’s OK that the container doesn’t have access to the docker socket; that’ll get resolved. The ports don’t matter: nginx is going to proxy tcp/80 and tcp/443 for you.

The custom chickenandporn/blueocean:latest is on Docker Hub and is identical to the upstream jenkinsci/blueocean except that it softlinks the libc to libmusl; this allows the golang toolchain to work. If you don’t need golang binaries to work on a libmusl-based Jenkins, use the upstream one. Docker Hub has the Dockerfile listed, so you can see the difference, and the upstream image data.

You don’t need to run it yet.

Docker.sock Proxy

Access to the docker pipe, to command the docker instance, has been a recurring issue. Lots of searchable workarounds try to resolve this, such as setting group IDs or setting permissions; all of them leave your system open, and neither worked for me: when the GID numbers change, or the group ownership of the pipe changes (Synology uses root, for example), there’s no consistent solution.

So proxy it from TCP.

socat is a lightweight proxy; you can tell it to receive connections on a socket and bidirectionally map them to a pipe. This carries the same security risk as running your Jenkins privileged, but it works across slightly different environments.

I also used iptables to ensure I don’t get hit a second time with a possibly open docker.sock. On a known port, even. Dumb. Protect yourself.
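As a sketch of that protection: on modern Docker engines, traffic to published ports is filtered through the DOCKER-USER chain rather than INPUT, and the interface name here is an assumption, so adjust both to your setup:

```shell
# Drop outside attempts to reach the socat listener on tcp/2375.
# eth0 is an assumption: use your actual external-facing interface.
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 2375 -j DROP
```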

The docker-compose stanza looks like this:

  dockersock:
    container_name: "dockersock"
    command:
      - socat
      - tcp-listen:2375,fork,reuseaddr
      - unix-connect:/var/run/docker.sock
    image: "alpine/socat:latest"
    ports:
      - "2375:2375"
    restart: always
    user: root:root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw

This is where the docker pipe is contacted. The DOCKER_HOST=tcp://dockersock:2375 environment variable tells Jenkins to connect to this container, and socat proxies on to the docker.sock. One magic trick at play here: on a private VLAN, Docker runs an embedded DNS service (at 127.0.0.11) that resolves container names to IPs. This lets you effortlessly resolve dockersock to whatever VLAN IP it gets offered, allowing less manual config and more versatility.

This works on a basic CentOS 7 docker engine, on a Synology NAS docker, anywhere I’ve tried it… and it works from the first shot on freshly-installed bare metal, reducing the hand-config one-offs needed to bring up the service when your… ahem… server needs to be wiped and reloaded.
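A quick way to sanity-check the plumbing from the docker host itself: the Docker Engine answers a plain HTTP /_ping with "OK", so a curl through the socat listener (port 2375, as published above) tells you whether the proxy works. A sketch:

```shell
# Probe the Docker Engine through the socat proxy; the engine replies
# "OK" on /_ping when the plumbing works, else we record "unreachable".
status=$(curl -fsS --max-time 5 http://localhost:2375/_ping 2>/dev/null || echo "unreachable")
echo "docker engine via dockersock: $status"
```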

How about that Nginx?

  nginx-proxy:
    container_name: "nginx-proxy"
    image: "jwilder/nginx-proxy:latest"
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro

If you google for jwilder’s nginx-proxy, you’ll see many examples of this proxy, and some more documentation. Essentially, it listens on the docker sock for new containers being instantiated, checks for VIRTUAL_HOST and VIRTUAL_PORT environment variables, and creates reverse-proxy entries for them. This is how your Jenkins can share tcp/80 and tcp/443 with other containers.

The other trick here is to share volumes with something that populates your SSL certs for HTTPS.

And the Crypto Certificates

The last step is the certificates needed for the TLS protecting HTTPS connections. Luckily, letsencrypt has a bot for automated certificate generation and storage, and it’s been containerized.

  nginx-proxy-letsencrypt:
    container_name: "nginx-proxy-letsencrypt"
    depends_on:
      - nginx-proxy
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    image: "jrcs/letsencrypt-nginx-proxy-companion"
    volumes:
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

Similar to the nginx-proxy container, this one listens on the docker socket for new containers and reacts to them. In this case, it looks for LETSENCRYPT_HOST and LETSENCRYPT_EMAIL, uses those to generate certificates, stores them in the shared volumes, and connects to the container named in NGINX_PROXY_CONTAINER=nginx-proxy to finish assigning the certificates to the service. It’s all very automatic, but the service does need to be responding on tcp/80 and tcp/443 to generate a certificate.

If your LETSENCRYPT_HOST doesn’t match your VIRTUAL_HOST, you’ll get certificates that are invalid for the service offering them. It would be better to configure both from one environment variable, but this isn’t a huge hardship, and the variable prefix acts like a namespace.
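To see which name the served certificate actually carries, openssl can print the subject; jenkins.example.com is a hypothetical stand-in for your hostname:

```shell
HOST=jenkins.example.com   # hypothetical; use your VIRTUAL_HOST value
# SNI matters: -servername must be set, or the proxy may hand back a default cert.
subject=$(echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
  | openssl x509 -noout -subject 2>/dev/null || echo "no certificate served yet")
echo "$subject"
```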

The Launch

So you’ve got these four stanzas in a docker-compose.yaml file. One final section needs to mention the volumes:

volumes:
  jenkins-data:
    external: true
  nginx-certs:
  nginx-vhost:
  nginx-html:

Because jenkins-data is “external” (to ensure docker-compose never, ever wipes it out), you’ll need to create that volume yourself. The nginx volumes are created for you. The private VLAN for your containers (typically a subnet from the 172.16.0.0/12 private IPv4 range) is also created for you.

  1. Confirm that your A-record (and/or AAAA) resolves to your service IP
  2. docker volume create --name=jenkins-data
  3. Ensure your tcp/2375 is protected from the unwashed masses of hack-bots on the intarwebz. I used a -j DROP rule in iptables, but use what works.
  4. (optional): docker-compose -f path/to/your/docker-compose.yaml pull to fetch the images ahead of time more verbosely, so it doesn’t seem to take forever with no feedback. If you skip this step, that’s fine.
  5. docker-compose -f path/to/your/docker-compose.yaml up -d
  6. Get the setup password: docker logs blueocean 2>&1 |grep -C 5 'Jenkins initial setup is required'
  7. start hitting your new https:// hostname to see when your service is there
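Step 7 can be a bounded poll instead of manual refreshing; a sketch, again with a hypothetical hostname:

```shell
HOST=jenkins.example.com   # hypothetical; your Jenkins hostname
up="no"
for attempt in 1 2 3; do
  # Jenkins serves /login unauthenticated once it's up.
  if curl -fsS -o /dev/null --max-time 5 "https://$HOST/login"; then
    up="yes"; break
  fi
  sleep 2
done
echo "Jenkins responding: $up"
```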

You’ll immediately be asked for the password from the logs, to ensure you have full access; I’d install the default plugins and go. You should now have full access to a Jenkins that can run pipelines for you in new docker containers.
