One-shot Turnup of HTTPS Jenkins on Docker


What’s that? You want to build using Jenkins, but it’s too much trouble? I might have a few tricks.

Our goal is to have a URL, let’s say https://jenkins.example.com/ that can turn up docker images for builds. That “turn up docker images” itself can be tricky if you read the various issues with access-control on the /var/run/docker.sock pipe. I think I’ve got that. Notice, also: no custom port, but https.

The key, first of all, is a few services:

Jenkins itself

Well, actually, I use blueocean for the possible magic pipeline capability, but it’s Jenkins inside. I’d *like* to use Configuration-as-Code (JCasC) but I’m not there yet.

You need to choose a hostname in the domain you control. For example, “jenkins.example.com”. Pick something you control, and A-record your DNS over to your new host. That has to exist before the SSL certs for HTTPS, so go ahead and do that now so the changes have time to propagate.

Go ahead, I can wait.
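While you wait, here’s one quick way to check (assuming a hostname of jenkins.example.com, as in the rest of this post):

```shell
# Succeeds once the A-record has propagated to your resolver;
# "dig +short jenkins.example.com" works too, if you have bind-utils.
HOST=jenkins.example.com
getent hosts "$HOST" || echo "not resolving yet; give DNS time to propagate"
```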

Good. Now, install docker-compose; on the off-chance it doesn’t declare proper dependencies on the components it needs, I installed docker-ce, docker-ce-cli, and containerd.io before docker-compose. Your OS may call them different things.

Your docker-compose.yaml should look like this:

---
version: '3'
services:
  blueocean:
    container_name: blueocean
    image: "chickenandporn/blueocean:latest"
    depends_on:
      - dockersock
    environment:
      - DOCKER_HOST=tcp://dockersock:2375
      - LETSENCRYPT_EMAIL=your-email@example.com
      - LETSENCRYPT_HOST=jenkins.example.com
      - VIRTUAL_HOST=jenkins.example.com
      - VIRTUAL_PORT=8080
    ports:
      - "10088:8080"
      - "32768:50000"
    restart: unless-stopped
    volumes:
      - jenkins-data:/var/jenkins_home

You may notice: no privileged status, and no docker-sock mapped into the container. I’ll get to that. Make sure LETSENCRYPT_HOST and VIRTUAL_HOST are set to the same value: the hostname you chose. It’s OK that you don’t have access to the docker-socket; that’ll get resolved. The ports don’t matter: nginx is going to proxy tcp/80 and tcp/443 for you.

The custom chickenandporn/blueocean:latest is on Docker Hub, and is identical to upstream jenkinsci/blueocean except that it symlinks the musl loader into the path glibc binaries expect, which allows the golang toolchain to work. If you don’t need golang binaries to work on a musl-based Jenkins, use the upstream one. Docker Hub lists the Dockerfile, so you can see the difference, along with the upstream image data.

No need to run it yet.

Docker.sock Proxy

Access to the docker pipe, to command the docker engine, has been a recurring issue. Plenty of workarounds are searchable: setting group IDs, loosening permissions. All of them leave your system open, and none worked for me: when the GID numbers change, or the group ownership of the pipe differs (Synology uses root, for example), there’s no consistent solution.

So proxy it from TCP.

socat is a lightweight proxy; you can tell it to receive connections on a socket and bidirectionally map them to a pipe. This offers the same security risk as setting your jenkins to run privileged, but works across slightly different environments.

I also used iptables to ensure I don’t get hit a second time with a possibly open docker.sock. On a known port, even. Dumb. Protect yourself.
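The DROP rule might look like the following (a sketch, not my exact rule: note that traffic to a docker-published port traverses the DOCKER-USER/FORWARD chains rather than INPUT, and 172.16.0.0/12 is Docker’s default address pool):

```shell
# Hypothetical firewall rule: refuse tcp/2375 unless the source is on
# Docker's own networks. Printed for review; run the output as root.
RULE="DOCKER-USER -p tcp --dport 2375 ! -s 172.16.0.0/12 -j DROP"
CMD="iptables -I $RULE"
echo "$CMD"
```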

The docker-compose stanza looks like this:

  dockersock:
    container_name: "dockersock"
    entrypoint:
      - socat
      - tcp-listen:2375,fork,reuseaddr
      - unix-connect:/var/run/docker.sock
    image: "alpine/socat:latest"
    ports:
      - "2375:2375"
    restart: always
    user: root:root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw

This is where the docker pipe is contacted. The DOCKER_HOST=tcp://dockersock:2375 environment variable tells jenkins to connect to this container, and socat proxies to the docker.sock. One magic trick at play here: on its user-defined networks, Docker runs an embedded DNS service on 127.0.0.11 that resolves container names to IPs. This lets you effortlessly resolve dockersock to whatever IP it gets offered, allowing less manual config and more versatility.
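A quick way to confirm the proxy answers, sketched as a function (run it on the docker host; the /version endpoint is part of the Docker Engine API):

```shell
# Queries the Docker Engine API through the socat proxy published above.
check_dockersock() {
  curl -s http://localhost:2375/version
}
```

If that returns the engine’s version JSON, jenkins can reach the daemon the same way.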

This works on a basic CentOS 7 docker engine, a Synology NAS docker, anywhere I’ve tried it. … and it works from the first shot, on freshly-installed bare metal, reducing the hand-config one-offs that are needed to bring up the service when your .. ahem… server needs to be wiped and reloaded.

How about that Nginx?

  nginx-proxy:
    container_name: "nginx-proxy"
    image: "jwilder/nginx-proxy:latest"
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro

If you google for jwilder (ie https://github.com/jwilder/nginx-proxy) you’ll see many examples of this proxy, and some more documentation. Essentially, it listens on the docker sock for new containers being instantiated, checks for VIRTUAL_HOST and VIRTUAL_PORT environment variables, and creates reverse-proxy entries for them. This is how your Jenkins can share tcp/80 and tcp/443 with other containers.

The other trick here is to share volumes with something that populates your SSL certs for HTTPS.

And the Crypto Certificates

The last step is the certificates needed for the TLS protecting HTTPS connections. Luckily, letsencrypt has a bot for automated generation and storage of certificates, and it’s been containerized.

  nginx-proxy-letsencrypt:
    container_name: "nginx-proxy-letsencrypt"
    depends_on:
      - nginx-proxy
    environment:
      - NGINX_PROXY_CONTAINER=nginx-proxy
    image: "jrcs/letsencrypt-nginx-proxy-companion"
    volumes:
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

Similar to the nginx-proxy container, this one listens on the docker socket for new containers, and reacts to that. In this case, it looks for LETSENCRYPT_HOST and LETSENCRYPT_EMAIL, uses those to generate certificates, stores them in the shared volume, and connects to the container given in NGINX_PROXY_CONTAINER=nginx-proxy to finish assigning the certificates to the service. It’s all very automatic, but the service does need to be responding on tcp/80 and tcp/443 to generate a certificate.

If your LETSENCRYPT_HOST mismatches your VIRTUAL_HOST you’ll get certificates that are invalid for the service offering them. It would be better to configure both from one environment variable, but this isn’t a huge hardship, and the variable prefix acts like a namespace.

The Launch

So you’ve got these four stanzas in a docker-compose.yaml file. One final section needs to mention the volumes:

volumes:
  jenkins-data:
    external: true
  nginx-certs:
  nginx-vhost:
  nginx-html:

Because jenkins-data is “external” (to ensure docker-compose never ever wipes it out) you’ll need to create that volume yourself. The nginx volumes are created for you. The private network for your images herein (typically carved from the 172.16.0.0/12 IPv4 range) will also be created for you.

  1. Confirm that your A-record (and/or AAAA) resolves to your jenkins.example.com service IP
  2. docker volume create --name=jenkins-data
  3. Ensure your tcp/2375 is protected from the unwashed masses of hack-bots on the intarwebz. I used a -j DROP rule in iptables, but use what works.
  4. (optional): docker-compose -f path/to/your/docker-compose.yaml pull to fetch the images ahead of time more verbosely, so it doesn’t seem to take forever with no feedback. If you skip this step, that’s fine.
  5. docker-compose -f path/to/your/docker-compose.yaml up -d
  6. Get the setup password: docker logs blueocean 2>&1 |grep -C 5 'Jenkins initial setup is required'
  7. start hitting https://jenkins.example.com/ to see when your service is there
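Steps 2 and 4–6 can be collected into one function, sketched below (steps 1 and 3 stay manual; the compose path is the placeholder from the steps above):

```shell
# One-shot turnup: create the external volume, pull, launch, and fish the
# initial admin password out of the logs.
turnup_jenkins() {
  COMPOSE=path/to/your/docker-compose.yaml
  docker volume create --name=jenkins-data
  docker-compose -f "$COMPOSE" pull     # optional: fetch images with feedback
  docker-compose -f "$COMPOSE" up -d
  docker logs blueocean 2>&1 | grep -C 5 'Jenkins initial setup is required'
}
```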

You’ll immediately be asked for the password from the logs, to ensure you have full access; I’d install the default plugins and go. You should now have full access to a Jenkins that can run pipelines for you in new docker containers.


Synology and UPnP and UBNT


Synology is a fairly good NAS product, but they occasionally seem to drop a ball (*cough* *cough* python3 update, ever? *cough*). UPnP is tonight’s frustration, but a comment by user scyto on UBNT’s community support from May 2016 is what helped.

Plagiarizing scyto’s response (in case it gets “updated” to the new and “improved” forum):

It seems synology built natpmp into their units but never turned it on – here is how.

  1. ensure you already configured your synology with the router wizard in non-password mode
  2. enable SSH access to the synology
  3. login with an SSH client
  4. sudo vi /etc/portforward/router.conf to edit the file 
  5. change the following lines in the file:
    1. support_change_port=yes
    2. support_router_upnp=yes
    3. support_router_natpmp=yes
    4. router_type=natpmp
  6. leave all other lines as-is

What scyto doesn’t mention is that after saving /etc/portforward/router.conf there’s no need to SIGHUP anything or force a reload: it just works the next time you hit [SAVE] on the “External Access” config screen.
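Steps 4–5 can also be scripted with sed. Demonstrated here against a scratch copy (the starting values are made up for the demo); on the Synology, point CONF at /etc/portforward/router.conf and run with sudo:

```shell
# Build a sample router.conf, then flip the four settings in place.
CONF=$(mktemp)
printf '%s\n' support_change_port=no support_router_upnp=no \
  support_router_natpmp=no router_type=upnp > "$CONF"
sed -i -e 's/^support_change_port=.*/support_change_port=yes/' \
       -e 's/^support_router_upnp=.*/support_router_upnp=yes/' \
       -e 's/^support_router_natpmp=.*/support_router_natpmp=yes/' \
       -e 's/^router_type=.*/router_type=natpmp/' "$CONF"
cat "$CONF"
```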

Docker Jenkins golang tool “Not Found”?


There I was, setting up a Dockerized Jenkins (well, BlueOcean) to auto populate a golang tool for Jenkins to properly build the converted-to-golang “ouizone” code (more on that later). It was gonna be awesome and unblock remote upgrade of a physical server.

go: No such file or directory

Wait, what? It’s a static-linked binary (which means mostly static: it still needs libc) and it’s not interpreted, so there’s no missing interpreter. This happens regardless of the version of go that I tried (notable mention: thanks, Google, for changing the path: $TOOL/bin/go -> $TOOL/go/bin/go)

Checking, the key part is that it’s mostly static: libc is still needed. The docker container from jenkinsci is a musl-based system, but the go toolchain is built against glibc. Musl is small, but not multilib, and in the ldd output of the go binary, go depends on /lib64/ld-linux-x86-64.so.2 rather than the container’s /lib/ld-musl-x86_64.so.1

So… how to fix this? Following the suggestion in Trion’s jenkins-docker-client:

sudo docker exec -u root -it blueocean bash
bash-4.4# ln -s /lib /lib64 
bash-4.4# ln -s /lib/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

Codifying that by burning it into a Dockerfile that extends jenkinsci/blueocean should make it permanent. …another day. Today, it works, and I’ve got a bit of work to do.
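That Dockerfile might look something like this (a sketch of the idea, not a copy of any published Dockerfile; it mirrors the two ln -s commands above so glibc-linked go binaries find a loader):

```dockerfile
FROM jenkinsci/blueocean:latest
USER root
# /lib64 becomes an alias for /lib, then the glibc loader path points at musl
RUN ln -s /lib /lib64 \
 && ln -s /lib/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2
USER jenkins
```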

Neato Vacuum and Li Ion Cells


The Neato finally started acting as though I’d replaced its battery with different cells.

TL;DR:

  1. plug in a USB to a Mac
  2. screen /dev/tty.usbmodem1411
  3. GetVersion to see that we’re talking
    • …BatteryType,1,NIMH, (or similar)
  4. SetConfig BatteryType 3
    • BatteryType,3,LIION_4CELL,

I’ve been using a Neato-XV for a while, and after the first year, the battery wouldn’t hold a charge as well. Since I was replacing them, I decided to go with a Lithium-Ion stack.

Li-Ion on Amazon was very quick, and there was a vendor selling an exact drop-in. Arrived in good shape, half-charged as they should be, dropped them in, charged and ran like expected.

Now we’re a few years out, and either the Li-Ion has degraded, or has finally started acting like Li-Ion cells: the batteries’ Protection Circuit Module drops all connection when it’s fully charged. This is after it’s been charging all night as it usually does, but it seems the charge has gotten high enough to trigger the PCM cutting out.

This one change, mentioned in Neato XV 21 and Lithium battery converts the charging logic to expect a complete cut-out of power when charged rather than a slight drop — or at least accept the cut-out as a better sign.

Connecting to the Neato is described in the first page of the Neato Robotics Programmer’s Manual, but not so much in terms of a Mac, which has a serial connection client by default. I found that watching the device list before and after plugging in allowed me to see that the /dev/tty.usbmodem1411 device was being created on connection, so screen /dev/tty.usbmodem1411 is the way in. The first attempt failed, but starting up the Neato with the USB already connected may have resolved that.
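Spotting the device node can be sketched as a one-liner (the exact “usbmodem1411” name varies by machine and port):

```shell
# Lists whatever usbmodem device appears when the Neato is plugged in.
neato_tty() {
  ls /dev/tty.usbmodem* 2>/dev/null \
    || echo "no usbmodem device; power the Neato on with USB connected"
}
```

Then screen "$(neato_tty)" drops you onto the robot’s console.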

Vagrant on MacOSX-10.10 and Later


If your vagrant installation isn’t working in MacOSX-10.10 (“Yosemite”) or 10.11 (“El Capitan”), add the following to your ${HOME}/.profile or ${HOME}/.bashrc

export PATH=${PATH}:/opt/vagrant/bin

Single-Language Internationalization: Spellcheck Basis


Even if a project has only one language — ie has not yet been considered for internationalization — an internationalization message catalog can give benefits such as sanity-checking the text that is not subject to compiler cross-check. I’d like to look at the effort to do this in my own work.

I’m a big fan of things that can be automated, or that enable other capabilities without much effort. For example, I tend to recommend checking for a compatible standard rather than willy-nilly inventing a new one on the off-chance that accidental compatibility is reached (“hey, they use Dublin-Core, and we use Dublin-Core, we can use their text-manipulation tools with our outputs! We can work together without a code change!”)

By extracting the visible strings of text from an application, it’s possible to consider them en masse even before translation. Messages can be more consistent (tense, tone, dialect). Additionally, it may be possible to spellcheck.

Case-in-point:

[image: “Spelling anyone?”, a misspelled “Yosemite”]
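That spellcheck pass over an extracted catalog can be sketched as follows (an assumption on tooling: GNU xgettext and aspell, with C sources as the example input):

```shell
# Extract msgid strings from the sources, then list unrecognized words.
spellcheck_strings() {
  xgettext --output=- "$@" \
    | sed -n 's/^msgid "\(.*\)"$/\1/p' \
    | aspell list --lang=en | sort -u
}
```

spellcheck_strings src/*.c prints the words aspell doesn’t recognize, one per line.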

Rsync Over FTP, read-write, on a Mac or BSD Client


Recently I was discussing with someone the need to simplify the sync of a folder into an FTP server. The goal is that at set intervals, any change in a local folder is pushed to a remote folder: changes changed, new files created, removed files removed. Similar to yesterday’s article, except this is a read-write access to the FTP server, allowing either direction of sync.

This is how to do it using curlftpfs and rsync.
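The gist, sketched as a function (hostname, credentials, and paths are placeholders; curlftpfs is in most package managers, or MacFUSE-based on a Mac):

```shell
# Mount the FTP server as a filesystem, rsync into it, then unmount.
ftp_push() {
  MNT=$(mktemp -d)
  curlftpfs ftp://user:password@ftp.example.com/ "$MNT"
  rsync -av --delete-after ~/Documents/ToShare/ "$MNT/ToShare/"
  umount "$MNT"
  rmdir "$MNT"
}
```

Run ftp_push at your interval (cron, launchd); swap the rsync source and destination to sync the other direction.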


Rsync Over FTP, on a Mac or BSD Client


Recently I was discussing with someone the need to simplify the sync of a folder into an FTP server. The goal is that at set intervals, any change on a remote server is pulled to a local folder: changes changed, new files created, removed files removed. This is the kind of thing that should be easier, but it’s mixing an old technology (rsync) with a very, very old technology (FTP).

This is how to do it using mount_ftp and rsync.


Java getOutputStream() surprises


As a note to my future self: apparently you need to open the connection before setting doOutput:

URLConnection connection = url.openConnection();
connection.setDoOutput(true);

return connection.getOutputStream();

It’s a good thing that’s poorly documented and non-obvious, and that it fails in misleading ways.

Scheduling Cyclic Jobs in MacOSX


Many of us UNIX old-timers are quite accustomed to cronjobs, but MacOSX has a centralized “LaunchDaemon” called launchd — to leverage it to run cronjobs gives an OS-specific, perhaps OS-preferred, method of doing so.

The TL;DR:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.rsync.SyncTheRepos</string>
  <key>Program</key><string>/usr/bin/rsync</string>
  <key>ProgramArguments</key>
  <array>
    <string>-avr</string>
    <string>--delete-after</string>
    <string>rsync.example.com::repos</string>
    <string>~/Documents/Repos</string>
  </array>
  <key>EnableGlobbing</key><true/>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>3</integer>
    <key>Minute</key><integer>14</integer>
  </dict>
  <key>ProcessType</key><string>Background</string>
</dict>
</plist>

In general, there is a lot of flexibility in setting up a launchd plist — the config info on https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/launchd.plist.5.html plus the various examples on the internet should help, but I generally take this example and re-use it.

Once this plist is saved as a local text file in ~/Library/LaunchAgents/ (for example, I’ll save mine as ~/Library/LaunchAgents/rsync-repos.plist), I activate it using:

launchctl load -w ~/Library/LaunchAgents/rsync-repos.plist

If I want to disable the job, I use:

launchctl unload -w ~/Library/LaunchAgents/rsync-repos.plist

The files are not modified in either case.
