If you're like me, you want to stay current with Docker. Maybe you're looking to use new features as they come out, maybe you just want compatibility across the various platforms you use. Unfortunately, Docker does not create official packages for the Pi, and other public sources update infrequently. What a pain!
Building it yourself is frustrating, too. The Docker build process requires Docker, and the version included with Raspbian (and Debian jessie) is too old for the job. You can get around this by installing one of those infrequently updated (but much newer than the default repo) packages, which brings you to the next problem: building Docker requires more memory than the Pi has! You'll need a sizable swap file (or partition) and a higher swappiness setting (sysctl vm.swappiness=70) to get through the build.
The instructions go roughly like this:
. If you don't have a swap partition, create a swap file: sudo dd if=/dev/zero bs=1M count=512 of=/swapfile && sudo mkswap /swapfile && sudo swapon /swapfile
. Raise the swappiness parameter: sudo sysctl vm.swappiness=70
. Install prerequisites: sudo apt-get install build-essential git
. Install a relatively recent Docker package, either the one I provide below, or you can get one from http://blog.hypriot.com/downloads/
. You will almost certainly need to change your storage driver to overlay (or, if you're using Docker 1.12 or above, overlay2). Edit /lib/systemd/system/docker.service and change ExecStart to ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay2 (there's a quick sanity check for this after the list)
. Download the Docker git tree: git clone https://github.com/docker/docker.git && cd docker
. Begin the build process: make deb
. Go get lunch. In another city.
. Provided no issues popped up during the build, you'll find your new package in bundles/latest/build-deb/debian-jessie/
. Before you install your newly built package, remove the old one you used for the build. If you used my package: sudo apt-get remove docker-engine && sudo dpkg -P docker-engine
. If you used the Hypriot package, sudo apt-get remove docker-hypriot && sudo dpkg -P docker-hypriot
. You should also remove the previous Docker data, or you'll get errors from the old version's leftover state: sudo rm -rf /var/lib/docker
. Now you can install the newly created package: sudo dpkg -i bundles/latest/build-deb/debian-jessie/docker-engine*deb
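With the new package installed, it's worth a quick sanity check that the daemon started and picked up the storage driver you configured earlier. Assuming the systemd unit edited above, something like this should do it:

sudo systemctl daemon-reload
sudo systemctl restart docker
docker version
docker info | grep -i storage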
If you'd like to use my Docker package, I've made it available here: https://brokedown.net/docker-engine_1.12.0~dev~git20160726.003227.0.a4634cd-0~jessie_armhf.deb md5sum: 4e446917dbd59155c0dd4719e201000a
Happy Dockering!
Docker is an excellent way to manage and separate your infrastructure concerns. You get most of the advantages of splitting workloads into virtual machines while avoiding most of the disadvantages that go along with them. As Docker continues to mature, its developers have added a pile of functionality to make your life easier from an engineering and operations standpoint.
One of the recent features finally allows simple remote access to your Docker daemon, with strong security through TLS and Client Certificates.
TLS is Transport Layer Security, basically a standard way of enabling encryption on any type of socket. TLS is the successor to SSL, and is improved in every practical way. One of the features of TLS is the ability to verify both sides (client and server) of the connection, rather than just one side (server) as typically used for things like HTTPS.
Both your Client and Server certificates must be signed by a "known" certificate authority (CA). In this case, your CA acts as the arbiter of trust, and any client with a signed certificate will be permitted access. Docker (currently) has no mechanisms for users or passwords, and does not currently support CRLs (Certificate Revocation Lists), so you need to protect those certificates and be prepared to rebuild your TLS infrastructure from scratch (with new keys for every server and client) if a Client Certificate is compromised.
Now, most of us aren't OpenSSL experts that can sign certificates as easily as ordering a coffee. With this in mind, I created DockerCertManager, which simplifies the creation and management of your keys and certificates. For the sake of brevity, I will assume that you are using DockerCertManager.
Your first step is to initialize your Certificate Authority. You should only need to do this once, and DockerCertManager will try to prevent you from overwriting an existing CA.
./DockerCertManager initca
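For the curious: under the hood, an initca step is roughly equivalent to the following openssl commands (a sketch only; DockerCertManager's actual parameters may differ):

# generate a passphrase-protected CA private key, then self-sign a CA certificate
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -sha256 -key ca-key.pem -out ca.pem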
You will now have a Certificate Authority. Be very careful to protect ca-key.pem, as any certificate signed by it will be trusted by your clients and servers. The ca.pem is your public certificate and can be safely distributed. Next, you need a Server certificate. For this example, we'll use docker01.example.com.
./DockerCertManager server docker01.example.com
This will create your private key and certificate file, and recommend how to name and install them. I suggest copying them to /etc/docker on your server with the names key.pem, cert.pem, and ca.pem, and chmod them to 400. Next, you'll need to add the configuration to your Docker startup environment file. On Ubuntu it's /etc/default/docker, and you'll edit your DOCKER_OPTS to look like so:
DOCKER_OPTS="--dns 192.168.1.1 --tlsverify -H=unix:///var/run/docker.sock -H=0.0.0.0:4243 --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/cert.pem --tlskey=/etc/docker/key.pem"
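Applying the configuration requires a restart of the Docker daemon; be aware this will shut down any containers you have running. On Ubuntu with the stock init script, that's something like:

sudo service docker restart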
Once the daemon is back up, you can create a client certificate. Your Client Certificate identifies a user or account, but please note that as of today Docker does not include any concept of users, so all certificate holders are equal. For our example, we'll set up our friend Joe User, who uses the username joeuser.
./DockerCertManager client joeuser
Your client certificates will now be created, along with a suggestion on how to install them. Docker will simplify things by looking for specific filenames in ~/.docker, so I suggest you follow the suggestion. Copy your client certificate, key, and CA public key into ~/.docker as key.pem, cert.pem and ca.pem, and chmod them to 400.
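Assuming DockerCertManager wrote out files named joeuser-key.pem and joeuser-cert.pem (hypothetical names; use whatever it actually produced), the install looks something like:

mkdir -p ~/.docker
cp joeuser-key.pem ~/.docker/key.pem
cp joeuser-cert.pem ~/.docker/cert.pem
cp ca.pem ~/.docker/ca.pem
chmod 400 ~/.docker/key.pem ~/.docker/cert.pem ~/.docker/ca.pem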
You should now be ready to test your TLS connection to your server. In our example, you can issue commands to the server by adding --tlsverify and -H hostname:4243 to the docker command, like so:
docker --tlsverify -H docker01.example.com:4243 version
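If that fails, you can take Docker out of the picture and test the TLS handshake directly with openssl's s_client (a debugging sketch, pointing at the client certs installed above):

openssl s_client -connect docker01.example.com:4243 -cert ~/.docker/cert.pem -key ~/.docker/key.pem -CAfile ~/.docker/ca.pem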
You can also use environment variables to specify that you always want --tlsverify and/or a specific host:
export DOCKER_HOST=tcp://docker01.example.com:4243
export DOCKER_TLS_VERIFY=1
docker version
So there you have it! You should have a secure, encrypted channel to interact directly with your Docker daemons on remote hosts! It doesn't (currently) get any better than this!
With all that done, on to the list of caveats...
First, Docker authorization is all-or-nothing: there are no access levels. Anyone with a signed certificate is a fully trusted administrator of your Docker hosts. You have to keep that CA key private.
Second, Docker doesn't support certificate revocation lists. If a client certificate gets leaked, or an employee leaves, or whatever, you can't just remove that key's access. Your only option is to build your certificates from the ground up, starting with a new Certificate Authority.
Third, to use Docker's built-in automatic filenames, you're restricted to a single set of certificates. That means that if you have multiple environments (think dev/test/prod), you're either using the same certificate for all of them or you're changing your docker config per-environment. In this case, I recommend you use shell aliases:
alias dockerprod="docker --tlsverify --tlscacert=/home/joeuser/.docker/prod-ca.pem --tlscert=/home/joeuser/.docker/prod-cert.pem --tlskey=/home/joeuser/.docker/prod-key.pem"
... and similarly for your other environments.
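For instance, assuming you generated a second set of certificates with a dev- prefix (your naming may differ), the matching dev alias follows the same pattern:

alias dockerdev="docker --tlsverify --tlscacert=/home/joeuser/.docker/dev-ca.pem --tlscert=/home/joeuser/.docker/dev-cert.pem --tlskey=/home/joeuser/.docker/dev-key.pem"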
The first issue I ran into with Kubuntu 15.10 came very early in the process. Booting from a flash drive would take me to the bootloader (Unetbootin), let me select the "Start Kubuntu" option, flash the Kubuntu logo for a few minutes, and finally drop back to a busybox prompt saying it was not able to find a live filesystem.
My motherboard isn't exotic; my USB controllers are Intel 9 series, which have been well supported for a very long time. Of course, I tried moving to different ports, both USB2 and USB3, and so on.
With a little debugging (use the `dmesg` command to see the kernel's log), it turned out the USB controller was trying to reset my USB stick for some reason and failing, which left the kernel not recognizing that the stick existed at all.
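If you want to watch this happen live, you can follow the kernel log while plugging the stick in (dmesg -w needs a reasonably recent util-linux; tail -f /var/log/kern.log works too):

dmesg -w | grep -i usb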
The "fix" I came up with was:
. Reboot to the Unetbootin boot menu
. Scroll to the Start Kubuntu option
. Hit Tab to edit the boot options
. Remove the "quiet" and "splash" options
. Hit Enter to boot
. Wait for kernel messages about resetting the USB controller
. Unplug USB stick, plug into a different port
. The installer finds the stick and finishes loading.
If you're having the same issue, hopefully this helps you get another step farther in the process!
The new 10.04 release of Ubuntu/Kubuntu brings a lot of improvements over previous versions. Unfortunately, it also brings a little bit of suck along with it. This page will document the problems I've run into, along with what I've done to fix them, and I will update it as new issues are found.
We'll start with Thunderbird issues and move on to KDE indexing. Shall we begin?
Here's a new one. Apache wouldn't restart on a server, complaining thus:
"No space left on device: mod_rewrite: could not create rewrite_log_lock"
After some digging around with strace, I saw that it was failing to create a semaphore. Evidently, Apache can leave semaphores behind when it dies, and these leftovers can cause this mystery problem.
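You can verify that leftover semaphores are the culprit by listing the ones owned by the Apache user (here assuming the processes run as "apache"; on Debian-family systems it may be "www-data"):

ipcs -s | grep apache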
So how do you fix it?
Like this:
for IPC in $(ipcs -s | grep apache | awk '{print $2}'); do ipcrm -s $IPC; done
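The loop pulls the ID of each apache-owned semaphore out of ipcs and removes it with ipcrm. Once they're cleared, Apache should start normally again, e.g.:

sudo apachectl start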