Document Accumulation and Indexing

Many software engineers — myself included — prefer to independently search out solutions to their problems.  Maybe this is a case of Imposter Syndrome: “If I ask how, they’ll KNOW I’m an idiot”.  For me, it’s also a case of wanting to solve a problem without waiting for the ~10:00 crowd to wander into the office to start their day.  Impatience is a motivator.

The problem is: how do we make documentation searchable?  It’s simple enough to say “htdig”, but that requires a view of the entire landscape to index.  Moreover, the documentation to be included comes from various different sources — the world isn’t always a monorepo.  Even setting aside the mix of code-extracted documentation (ie Doxygen) and user-curated content (Sphinx, unversioned/outdated blogs, un-version-controlled wikis), the generated HTTP content all needs to end up in the same place.  The more that has to be done manually, the more likely it won’t be done.

There are two options I’m considering: pushing PRs to a common repo, or git-submoduling:

GIT-SUBMODULING

The process of updating a git submodule with documentation resources such as generated Doxygen XML might seem simpler from the repository side, but I worry about relying on developers to do additional steps.  The secret to ensuring that certain steps are done every time is to script them behind automation.  Automation that needs a developer to pull the trigger will eventually fail: anyone in a rush, or not good at repetitive steps, won’t always activate it.

Populating a git submodule of documentation requires developer-side activities done every time, and might be automatable using a pre-commit hook (a rough sketch follows).  This path only makes sense if you want to keep documentation linked back to a repository, or you want to enforce certain levels of documentation at the commit stage (ie block commits that push documentation coverage below a certain amount). Otherwise, it just won’t get done.
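
As a rough sketch of what that hook might look like (assuming a docs/ submodule checkout and a Doxyfile whose XML_OUTPUT points at docs/xml; both names are mine, not a standard), a .git/hooks/pre-commit could regenerate and stage the output:

#!/bin/sh
# pre-commit sketch: regenerate Doxygen XML into the docs submodule and stage it
set -e

doxygen Doxyfile                       # re-extract XML from the source markup

cd docs
git add xml
git commit -m "regenerate Doxygen XML" --no-verify || true   # no-op if nothing changed
cd ..

git add docs                           # record the new submodule commit in this commit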

PUSHING DOCUMENTATION PRs

If we can arrive at a common workable (not perfect) format, it’s possible to push documentation updates to a single repository.  The benefit there is that repo managers such as GitHub (via workflows) or CI/CD tools such as Jenkins can be configured to rebuild whenever there is a change to the master branch.

Jenkins itself can be configured to check out a repository and copy things into place.  For this to work, we’d need the following to happen:

  1. A contributing repository would need to include user-generated documentation in a compatible format and a known subdir
  2. A contributing source-code repository would need to mark up documentation in an extractable format
  3. A CI/CD such as Jenkins would need to harvest that generated content on a successful build of a master or production branch and push it to the document-accumulating repository (a rough sketch of this step follows the list)
    1. If the push is a direct commit to the master branch, this is fairly straightforward
    2. If the push is a PR against the accumulating repository, bot-based accept-and-merge (ie mergebot) needs to be configured
  4. The document-accumulating repository needs to merge and index the documentation on every push to master
  5. The document-accumulating repository needs to generate new containers of indexed content to federate around an organization
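
A minimal sketch of step 3, assuming the accumulating repository is called docs-hub, each project gets a subdirectory there, and the build left its Doxygen XML in build/xml (all of those names are placeholders, not an agreed format):

# Tail end of a CI job on a successful master build: harvest and push the docs.
git clone git@example.com:docs/docs-hub.git
mkdir -p docs-hub/projects/${JOB_NAME}
rsync -a --delete build/xml/ docs-hub/projects/${JOB_NAME}/xml/
cd docs-hub
git add projects/${JOB_NAME}
git commit -m "docs: refresh ${JOB_NAME} from build ${BUILD_NUMBER}"
git push origin master    # or push a branch and open a PR for the mergebot to accept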

This might be quite possible if the accumulating repository is a consistent format from which organizations can fork.  What about the format?

This approach is discussed, and somewhat functional, at https://indico.cern.ch/event/716909/contributions/2946822/attachments/1821901/2980311/The_sphinx-breathe-doxygen_combo.pdf

What’s the Magic Step for Go?

Make a Go parser that writes Doxygen XML 🙂

OK, that seems crazy on the surface; hold your nose and stick with me a sec:

I see some efforts to make a “my vanity documenter” for Go, which (based on the low KLOC for the entire project) implies that much of the code parsing/introspection is pre-existing, perhaps already leveraged in go vet and go lint. These various small Doxygen-like extractors suggest (perhaps misleadingly) that the lift is relatively small compared to the gains (the bang-for-buck ratio gives big gains for acceptable effort).

This would give Go documentation an onramp to join documentation in other formats/languages. The greater goal, a single indexing and search mechanism that finds all the documentation across languages, would be a compelling win.

 * https://exhale.readthedocs.io/en/latest/

 * https://breathe.readthedocs.io/en/latest/markups.html

 * https://github.com/m00re/jenkins-slave-sphinx

HomeAssistant via Hass.io with LetsEncrypt

One of the problems I have in running hass.io on my Synology NAS (which makes homeassistant really easy to run) is that I can’t set the environment variables to configure the nginx/letsencrypt containers to trivially get certificates.

So I cheated, just like in https://tech.chickenandporn.com/2019/10/20/unifi-with-letsencrypt/ with the following config:

proxyhass:
  container_name: "proxyhass"
  entrypoint:
    - socat
    - tcp-listen:443,fork,reuseaddr
    - tcp:192.168.0.4:8123
  environment:
    - LETSENCRYPT_EMAIL=chickenandporn@gmail.com
    - LETSENCRYPT_HOST=hass.example.com
    - VIRTUAL_HOST=hass.example.com
  image: "alpine/socat:latest"
  networks:
    - proxy
  ports:
    - "443"
  restart: always

Yes, this looks very similar to the other cheating episode, with a few changes to the TCP ports. Home Assistant under hass.io runs with network mode=host, so its port is exposed on the host machine itself; my NAS at 192.168.0.4 (bogus, not really this IP) answers on tcp/8123 for the hass.io-based HomeAssistant.

The second change to look out for is that hass.example.com is obviously not my actual FQDN, and you’d want to use your own here. Although that is my real gmail address, you’ll want to use your own.

Third change: my nginx proxy is actually on a subnet so that random containers can’t go internetting on me unsupervised. … so I run this socat proxy in the network “proxy” which is served by nginx.

When you start this up, there’s a minute or two that the proxy and the letsencrypt companion need to get a certificate; after that, you are free and clear to both use your own homeassistant from the unwashed internet (it has a login prompt) and consider the Alexa Manual Setup for awesome Alexa goodness. This is, like, 2 of the 4 totally-not-self-redundant requirements.

Unifi with LetsEncrypt

One problem I ran into with Unifi: I didn’t want to go through the difficulty/effort/burden of getting a TLS cert.

We all know that LetsEncrypt is a trivial way to get certificates, if you can line things up correctly. For the nginx-proxy-letsencrypt, it’s a case of setting environment variables. We don’t always have the option to do that.

So I cheated.

I’m using the “How about that Nginx?” and “And the Crypto Certificates” sections from https://tech.chickenandporn.com/2019/03/01/one-shot-turnup-of-https-jenkins-on-docker/ and actually setting up what they need to function.

I created a bonehead-simple proxy so I could assign the environment variables. With my unifi server running an https interface at 192.168.0.4:8443, my docker-compose.yml for this little part looks like the following:

proxyunifi:
  container_name: "proxyunifi"
  entrypoint:
    - socat
    - tcp-listen:8443,fork,reuseaddr
    - tcp:192.168.0.4:8443
  environment:
    - LETSENCRYPT_EMAIL=chickenandporn@gmail.com
    - LETSENCRYPT_HOST=unifi.example.com
    - VIRTUAL_HOST=unifi.example.com
  image: "alpine/socat:latest"
  networks:
    - proxy
  ports:
    - "8443"
  restart: always

The only real complexity here is that I’m listening on tcp/8443 and forwarding to the HTTPS port. If you have problems with that, try tcp-listen:8080,fork,reuseaddr for the entrypoint item, but this is what works for me. A VIRTUAL_PROTO=https can help ensure that it works as well.
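
If you do go the VIRTUAL_PROTO route, it’s just one more line alongside the other environment entries above (a sketch of the addition):

  environment:
    - VIRTUAL_PROTO=https    # tell nginx-proxy to speak https to the proxied backend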

I would have forwarded the connection to a name instead of an IP address, but my home network has moved away from dnsmasq, where I trivially had names for the DHCP addresses it leased out.

One-shot Turnup of HTTPS Jenkins on Docker

What’s that? You want to build using Jenkins, but it’s too much trouble? I might have a few tricks.

Our goal is to have a URL, let’s say https://jenkins.example.com/, that can turn up docker images for builds. That “turn up docker images” part can itself be tricky if you read the various issues with access control on the /var/run/docker.sock pipe. I think I’ve got that. Notice, also: no custom port, but https.

The key, first of all, is a few services:

Jenkins itself

Well, actually, I use blueocean for the possible magic pipeline capability, but it’s Jenkins inside. I’d *like* to use Config-as-Code (JCasC) but I’m not there yet.

You need to choose a hostname in the domain you control. For example, “jenkins.example.com”. Pick something you control, and A-record your DNS over to your new host. That has to exist before the SSL certs for HTTPS, so go ahead and do that now so the changes have time to propagate.

Go ahead, I can wait.

Good. Now, install docker-compose; on the off-chance it doesn’t have proper dependency info for the components you’ll also need, I installed docker-ce, docker-ce-cli, and containerd.io before docker-compose. Your OS may call them different things.

Your docker-compose.yaml should look like this:

---
version: '3'
services:
  blueocean:
    container_name: blueocean
    image: "chickenandporn/blueocean:latest"
    depends_on:
      - dockersock
    environment:
      - DOCKER_HOST=tcp://dockersock:2375
      - LETSENCRYPT_EMAIL=your-email@example.com
      - LETSENCRYPT_HOST=jenkins.example.com
      - VIRTUAL_HOST=jenkins.example.com
      - VIRTUAL_PORT=8080
    ports:
      - "10088:8080"
      - "32768:50000"
    restart: unless-stopped
    volumes:
      - jenkins-data:/var/jenkins_home

You may notice: no privileged status, and no docker-sock mapped into the container. I’ll get to that. Make sure LETSENCRYPT_HOST and VIRTUAL_HOST are set to the same value: the name you A-recorded for your docker host. It’s OK that you don’t have access to the docker socket; that’ll get resolved. The ports don’t matter: nginx is going to proxy tcp/80 and tcp/443 for you.

The custom chickenandporn/blueocean:latest is on Docker Hub, and is identical to upstream jenkinsci/blueocean except that it softlinks the glibc loader path onto musl — this allows the golang toolchain to work. If you don’t need glibc-built golang binaries to work on a musl-based Jenkins, use the upstream image; Docker Hub has the Dockerfile listed, so you can see the difference and the upstream image data.

You don’t need to run it yet.

Docker.sock Proxy

Access to the docker pipe, to command the docker instance, has been a recurring issue. Lots of searchable workarounds try to resolve this (setting group IDs, setting permissions), and they all leave your system open; I could do neither anyway: when the GID numbers change and the group ownership of the pipe changes (Synology uses root, for example), there’s no consistent solution.

So proxy it from TCP.

socat is a lightweight proxy; you can tell it to receive connections on a socket and bidirectionally map them to a pipe. This offers the same security risk as setting your jenkins to run privileged, but works across slightly different environments.

I also used iptables to ensure I don’t get hit a second time with a possibly open docker.sock. On a known port, even. Dumb. Protect yourself.
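
For reference, a hedged sketch of the kind of rule I mean, assuming a Docker recent enough to honour the DOCKER-USER chain and the usual 172.16.0.0/12 internal range; adapt it to your own network before trusting it:

# DOCKER-USER is the chain Docker reserves for user rules on published-port traffic;
# drop anything aimed at tcp/2375 that didn't originate from the internal container range.
sudo iptables -I DOCKER-USER -p tcp --dport 2375 ! -s 172.16.0.0/12 -j DROP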

The docker-compose stanza looks like this:

  dockersock:
   container_name: "dockersock"
   entrypoint:
     - socat
     - tcp-listen:2375,fork,reuseaddr
     - unix-connect:/var/run/docker.sock
   image: "alpine/socat:latest"
   ports:
     - "2375:2375"
   restart: always
   user: root:root
   volumes:
     - /var/run/docker.sock:/var/run/docker.sock:rw

This is where the docker pipe is contacted. The DOCKER_HOST=tcp://dockersock:2375 environment variable tells Jenkins to connect to this container, and socat proxies that to the docker.sock. One magic trick at play here is that on a private VLAN, Docker runs a special DNS service on 127.0.0.11 that resolves container names to IPs — this lets you effortlessly resolve dockersock to the VLAN IP it gets offered, allowing less manual config and more versatility.
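
To see that resolver in action (a quick check, using the container names from these stanzas):

# The embedded resolver sits at 127.0.0.11 inside the container...
sudo docker exec -it blueocean cat /etc/resolv.conf

# ...and "dockersock" resolves to whatever IP the compose network handed that container
sudo docker exec -it blueocean nslookup dockersock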

This works on a basic CentOS 7 docker engine, a Synology NAS docker, anywhere I’ve tried it. … and it works from the first shot, on freshly-installed bare metal, reducing the hand-config one-offs needed to bring up the service when your .. ahem… server needs to be wiped and reloaded.

How about that Nginx?

  nginx-proxy:
    container_name: "nginx-proxy"
    image: "jwilder/nginx-proxy:latest"
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - nginx-certs:/etc/nginx/certs
      - nginx-vhost:/etc/nginx/vhost.d
      - nginx-html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro

If you google for jwilder (ie https://github.com/jwilder/nginx-proxy) you’ll see many examples of this proxy, and some more documentation. Essentially, it listens on the docker sock for new containers being instantiated, checks for VIRTUAL_HOST and VIRTUAL_PORT environment variables, and creates reverse-proxy entries for them. This is how your Jenkins can share tcp/80 and tcp/443 with other containers.

The other trick here is to share volumes with something that populates your SSL certs for HTTPS.

And the Crypto Certificates

The last step is the certificates needed for TLS protecting HTTPS connections. Luckily, letsencrypt has a bot for your automated generation and storage, and it’s been containerized.

  nginx-proxy-letsencrypt:
   container_name: "nginx-proxy-letsencrypt"
   depends_on:
     - nginx-proxy
   environment:
     - NGINX_PROXY_CONTAINER=nginx-proxy
   image: "jrcs/letsencrypt-nginx-proxy-companion"
   volumes:
     - nginx-certs:/etc/nginx/certs
     - nginx-vhost:/etc/nginx/vhost.d
     - nginx-html:/usr/share/nginx/html
     - /var/run/docker.sock:/var/run/docker.sock:ro

Similar to the nginx-proxy container, this one listens on the docker socket for new containers, and reacts to that. In this case, it looks for LETSENCRYPT_HOST and LETSENCRYPT_EMAIL, uses those to generate certificates, stores them in the shared volumes, and connects to the container given in NGINX_PROXY_CONTAINER=nginx-proxy to finish assigning the certificates to the service. It’s all very automatic, but the service does need to be responding on tcp/80 and tcp/443 to generate a certificate.

If your LETSENCRYPT_HOST mismatches your VIRTUAL_HOST you’ll get certificates that are invalid for the service offering them. It would be better to configure both from one environment variable, but this isn’t a huge hardship, and the variable prefix acts like a namespace.

The Launch

So you’ve got these four stanzas in a docker-compose.yaml file. One final section needs to mention the volumes:

volumes:
  jenkins-data:
    external: true
  nginx-certs:
  nginx-vhost:
  nginx-html:

Because jenkins-data is “external” (to ensure docker-compose never ever wipes it out) you’ll need to create that yourself. The nginx volumes are created for you. The private VLAN for your images herein (typically a /16 carved out of the 172.16.0.0/12 private range) will also be created for you.

  1. Confirm that your A-record (and/or AAAA) resolves to your jenkins.example.com service IP
  2. docker volume create --name=jenkins-data
  3. Ensure your tcp/2375 is protected from the unwashed masses of hack-bots on the intarwebz. I used a -j DROP rule in iptables, but use what works.
  4. (optional): docker-compose -f path/to/your/docker-compose.yaml pull to fetch the images ahead of time more verbosely, so it doesn’t seem to take forever with no feedback. If you skip this step, that’s fine.
  5. docker-compose -f path/to/your/docker-compose.yaml up -d
  6. Get the setup password: docker logs blueocean 2>&1 |grep -C 5 'Jenkins initial setup is required'
  7. start hitting https://jenkins.example.com/ to see when your service is there

You’ll immediately be asked for your password from the logs to ensure you have full access; I’d install the default plugins and go. You should now have full access to a Jenkins that can run pipelines for you in new docker-containers.


Synology and UPnP and UBNT

Synology is a fairly good NAS product, but they occasionally seem to drop a ball (*cough* *cough* python3 update, ever? *cough*). UPnP is tonight’s frustration, but a comment by user scyto on UBNT’s community support from May 2016 is what helped.

Plagiarizing scyto’s response (in case it gets “updated” away in the new and “improved” forum):

It seems Synology built natpmp into their units but never turned it on – here is how.

  1. ensure you have already configured your Synology with the router wizard in non-password mode
  2. enable SSH access to the synology
  3. login with an SSH client
  4. sudo vi /etc/portforward/router.conf to edit the file 
  5. change the following lines in the file:
    1. support_change_port=yes
    2. support_router_upnp=yes
    3. support_router_natpmp=yes
    4. router_type=natpmp
  6. leave all other lines as-is

What scyto doesn’t mention is that after saving /etc/portforward/router.conf there’s no need to sigHUP anything or force a reload — it just works the next time you hit [SAVE] on the “External Access” config screen.
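
If you’d rather script the edit than hand-drive vi, a hedged one-liner version of step 5 (it assumes those four keys already exist in the file; back it up first):

# Back up, then flip the four settings in place
sudo cp /etc/portforward/router.conf /etc/portforward/router.conf.bak
sudo sed -i -e 's/^support_change_port=.*/support_change_port=yes/' \
            -e 's/^support_router_upnp=.*/support_router_upnp=yes/' \
            -e 's/^support_router_natpmp=.*/support_router_natpmp=yes/' \
            -e 's/^router_type=.*/router_type=natpmp/' \
            /etc/portforward/router.conf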

Docker Jenkins golang tool “Not Found”?

There I was, setting up a Dockerized Jenkins (well, BlueOcean) to auto populate a golang tool for Jenkins to properly build the converted-to-golang “ouizone” code (more on that later). It was gonna be awesome and unblock remote upgrade of a physical server.

go: No such file or directory

Wait, what? It’s a statically-linked binary (which means mostly static: it still needs libc) and it’s not interpreted, so there’s no missing interpreter. This happens regardless of the version of go that I tried (notable mention: thanks, Google, for changing the path: $TOOL/bin/go -> $TOOL/go/bin/go).

Checking, the key part is that it’s mostly static: libc is still needed. The docker container from jenkinsci is a musl-based system, and the go toolchain expects glibc. Musl is not multilib, but it is indeed small, and in the ldd output of the go binary, go depends on /lib64/ld-linux-x86-64.so.2 rather than the container’s /lib/ld-musl-x86_64.so.1.

So… how to fix this? Following the suggestion in Trion’s jenkins-docker-client:

sudo docker exec -u root -it blueocean bash
bash-4.4# ln -s /lib /lib64 
bash-4.4# ln -s /lib/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2

Codifying that by burning it into a Dockerfile that extends jenkinsci/blueocean should make it permanent. …another day. Today, it works, and I’ve got a bit of work to do.
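
For when that day comes, a minimal sketch of such a Dockerfile (it just bakes in the two symlinks above; untested here):

FROM jenkinsci/blueocean:latest

# Shim the glibc loader path onto musl so glibc-linked go binaries can start.
USER root
RUN ln -s /lib /lib64 \
 && ln -s /lib/ld-musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2
USER jenkins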

Neato Vacuum and Li Ion Cells

The Neato finally started acting as though I had replaced its pack with different cells (which, a few years back, I had).

TL;DR:

  1. plug in a USB to a Mac
  2. screen /dev/tty.usbmodem1411
  3. GetVersion to see that we’re talking
    • …BatteryType,1,NIMH, (or similar)
  4. SetConfig BatteryType 3
    • BatteryType,3,LIION_4CELL,

I’ve been using a Neato-XV for a while, and after the first year, the battery wouldn’t hold a charge as well. Since I was replacing them, I decided to go with a Lithium-Ion stack.

Li-Ion on Amazon was very quick, and there was a vendor selling an exact drop-in. Arrived in good shape, half-charged as they should be, dropped them in, charged and ran like expected.

Now we’re a few years out, and either the Li-Ion has degraded, or has finally started acting like Li-Ion cells: the batteries’ Protection Circuit Module drops all connection when it’s fully charged. This is after it’s been charging all night as it usually does, but it seems the charge has gotten high enough to trigger the PCM cutting out.

This one change, mentioned in “Neato XV 21 and Lithium battery”, converts the charging logic to expect a complete cut-out of power when charged rather than a slight drop — or at least to accept the cut-out as a better sign.

Connecting to the Neato is described in the first page of the Neato Robotics Programmer’s Manual, but not so much in terms of a Mac, which has a serial-connection client by default. I found that watching the difference before/after allowed me to see that the /dev/tty.usbmodem1411 device was being created on connection, so screen /dev/tty.usbmodem1411 is the way in. The first attempt failed, but starting up the Neato with the USB already connected may have resolved that.

Why is Allan so Nuts?

This blog is normally about what I did to solve problems I had; it might not be the best reading, I’m not the best writer, but I like to share so that the research I did for something can go further.

I am obsessed by details. Why? Why am I so nuts?

In my early career, I went from the military, to telephony, to USL Unix, to mobile phones: from errors costing lives, to errors costing 911 calls, to errors costing millions of dollars in retail revenue and downed servers, to an environment (mobile) where it must be correct from the outset or you risk 911 calls and being able to fix it only via a costly field-upgrade.

Recalling a handset can ruin a company, but not being able to place a call can risk a life.

When I helped NASDAQ field-upgrade, we saved them millions and millions for 2700 servers in an environment where if too many servers don’t come back, all trading would be suspended for fairness/access reasons. When I helped McDonalds, they needed 100% uptime without fatigue on flash or RAM. K-Mart needed around 2500 servers done in one-shot. The FAA needed 100% accuracy and a one-entry form for install information (so consider installing Windows and VW with only one piece of data — three letters and a number — as the only user entry). RadioShack and SDM (Canada’s largest retailer) needed a “hit enter” as the only install command. At every step, I’ve had to keep track of every little detail.

Errors are avoided at the design stage: When it has to work the same every time as delivered, interpreted and late-bound languages are not the best choice (especially when their lack of internationalization causes additional issues — I’m looking at you, Perl). Re-use tested stuff rather than making it up — it’s out there if you look — or discuss and indicate why the existing is so unsuitable, so that others know you did and can learn from your research. Documented design choices avoid second-guessing by latecomers. If no clear choice is possible and one of many equal options must be picked, document the “Arbitrary choice” and move on, able to refactor if a later detail shows a design issue for correction. Good design should avoid Epochs and variants, but since you cannot see everything yourself, document, discuss, and debate as much as your timeline allows.

Failure scenarios should be graceful, and verbose for forensic analysis. Errors and error codes should be unique and easy to shout over a phone from a noisy datacenter, and should be easy to google and/or research at 3am rather than tracking down an engineer to define/dereference. Why did you add complexity? Can you explain it in one breath? In general, the myriad little details should be handled by the software, letting the broader scope and macroscopic config choice be handled by the meatware (the person/people).

This is why I see details. This is why I discuss unique error codes (and love Oracle for it). This is why I discuss parsable logs and precise configs.

My memory doesn’t work like other people’s, but it works for these things.

This is not the blog where I’m quite critical about things; this is where I hope others will find solutions. I wanted to explain why those solutions tend to be specifically different from others’ solutions.

Single-Language Internationalization: Spellcheck Basis

Even if a project has only one language — ie has not yet been considered for internationalization — an internationalization message catalog can give benefits such as sanity-checking the text that is not subject to compiler cross-check. I’d like to look at the effort to do this in my own work.

I’m a big fan of things that can be automated, or that enable other capabilities without much effort. For example, I tend to recommend checking for a compatible standard rather than willy-nilly inventing a new one on the off-chance that accidental compatibility is reached (“hey, they use Dublin-Core, and we use Dublin-Core, we can use their text-manipulation tools with our outputs! We can work together without a code change!”)

By extracting the visible strings of text from an application, it’s possible to consider them en masse even before translation. Messages can be more consistent (tense, tone, dialect). Additionally, it may be possible to spellcheck them.

Case-in-point:

Spelling anyone?

“Yosemite”
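
As a rough sketch of how that could look with gettext-style tooling (assuming C-ish sources marked up with _() and aspell installed; other stacks have equivalents):

# Extract the user-visible strings into a catalog...
xgettext --keyword=_ --output=messages.pot src/*.c

# ...then pull the msgid text back out and let aspell flag the misspellings.
sed -n 's/^msgid "\(.*\)"$/\1/p' messages.pot | aspell list --lang=en | sort -u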

Rsync Over FTP, on a Mac or BSD Client

Recently I was discussing with someone the need to simplify the sync of a folder with an FTP server.  The goal is that at set intervals, any change on the remote server is pulled to a local folder: changed files changed, new files created, removed files removed.  This is the kind of thing that should be easier, but it’s mixing an old technology (rsync) with a very, very old technology (FTP).

This is how to do it using mount_ftp and rsync.

The general idea is to use the mount_ftp almost like FUSE-mounting a remote resource, then using rsync on that mounted filesystem. If we wrap it around mktemp to work relatively portably in a temp folder, we’d have something like this:

  • local folder: ${HOME}/contrib
  • remote server: ftp.example.com
  • remote folder: ./Scott/ABC
  • remote user: scott
  • remote pass: tiger
# create a temporary/random mountpoint
TEMPFILE=$(mktemp -d -t ftprsync)

# mount the remote space; no output, but the return code matches errno.h values
sudo mount_ftp -o rw ftp://scott:tiger@ftp.example.com/Scott/ABC ${TEMPFILE} ; echo $?

# sadly, despite best efforts (and "-o rw"), this is only a read-only, so only good
# for syncing FTP content out to the local system

# sync
rsync -avr --delete-after ${TEMPFILE}/* ${HOME}/contrib

#shut it down
sudo umount ${TEMPFILE}
rm -fr ${TEMPFILE}
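
To get the “set intervals” part, I’d drop those commands into a script and cron it. A sketch, assuming the script lands at /usr/local/bin/ftp-pull.sh and runs from root’s crontab so the mount/umount don’t prompt for a sudo password:

# root's crontab (sudo crontab -e): pull from the FTP server at the top of every hour
0 * * * * /usr/local/bin/ftp-pull.sh >> /var/log/ftp-pull.log 2>&1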