“9’s don’t matter if users aren’t happy” — Charity Majors
I wrote this up quickly for a friend who was asking about it. It’s not rocket science, and I’m sure this restates what many have said before, but this chart should make it simple to understand your list of priorities for different service or application failure modes. If you like it, leave a clap or a positive comment. If not, I appreciate constructive feedback. All the best. j-
Adventures in bare-metal homelab Kubernetes administration.
Kubernetes deprecated dockershim as of version 1.20, which is exactly what my homelab is currently running. So I thought: how hard can it be to make the switch to containerd? It turns out, not that bad for the kubelets. I’m still working out how to convert my single-node control plane in a non-destructive way. Here’s the process I used to convert all kubelets from Docker to containerd.
(Note: Part 2, which deals with the single-node control plane, is out if you’d rather start there.)
My cluster is running on Ubuntu 20.04, so…
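One concrete piece of the runtime switch on a kubeadm-built node, shown here as a minimal sketch rather than the exact steps from the post: each Node object carries an annotation recording which CRI socket the kubelet talks to, and after installing containerd it needs to point at the containerd socket instead of dockershim. The node name below is a placeholder.

apiVersion: v1
kind: Node
metadata:
  name: worker-01   # placeholder node name
  annotations:
    # kubeadm records the container runtime socket here; after the switch it
    # should reference containerd rather than the dockershim socket
    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock

The kubelet’s own runtime endpoint also has to be updated and the service restarted, with the node drained beforehand and uncordoned afterward so workloads move out of the way.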
To complement my previous article about migrating kubelets to containerd, this one is about migrating a single-node control plane cluster from Docker to containerd. My homelab is built from kubeadm init running on Ubuntu 20.04. Backups are stored on the Kubernetes control node itself, but syncing them to a NAS or a cloud bucket would not be a bad idea. Keeping those backups safe is left as an exercise for the reader.
One thing to note: my homelab cluster doesn’t use any special flags. The move to containerd and the systemd cgroup driver requires a modified kubelet config according to…
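For reference, the relevant piece of the kubelet config is small. On a stock kubeadm layout it lives in /var/lib/kubelet/config.yaml, and the cgroup driver line is the part that has to agree with how containerd is configured; this is an excerpt under that assumption, not the full file.

# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# must match the cgroup driver containerd is configured to use
cgroupDriver: systemd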
In a previous update, I talked about setting up a service-specific NFS mount path using a Synology DiskStation, and left getting shared storage for another day. Well, another day came, and I now have a common pool of storage for all my Kubernetes applications.
This follow-up is intended to be a simple how-to for replicating those results.
Set up the NFS Shared Persistent Volume…
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-storage-nfs
  labels:
    bucket: shared
spec:
  capacity:
    storage: 30T
  volumeMode: Filesystem
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs_server_ip_addr>
    path: "/path/to/kube-storage"
    readOnly: false
The important parts in this manifest…
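To make the binding side concrete, a claim against this volume could look like the following sketch; the claim name is made up here, and the storageClassName and label selector are what tie it back to the PersistentVolume above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kube-storage-claim        # placeholder name
spec:
  storageClassName: slow          # must match the PV's storageClassName
  accessModes:
    - ReadWriteMany
  selector:
    matchLabels:
      bucket: shared              # matches the label on the PV
  resources:
    requests:
      storage: 30T

Once bound, any pod in the claim’s namespace can mount it read-write alongside other pods, which is the point of ReadWriteMany over NFS.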
This how-to will walk through the process of setting up an application with a persistent volume claim backed by a Synology NFS shared folder.
Using a centralized NFS share and mounting subdirectories for individual services requires a specialized NFS provisioner. Maybe I’ll look into that one day. For now, there’s a shared folder for each application for which you want to mount a persistent volume.
(Update 2018-04-14: Shared NFS storage doesn’t require a special provisioner. An updated how-to is here: https://www.j03.co/2018/04/14/shared-nfs-and-subpaths-in-kubernetes/)
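As a rough illustration of the subpath approach from the linked update, a pod can carve out its own directory on the shared claim sketched earlier; the pod name, image, and subPath value below are placeholders, not taken from the original post.

apiVersion: v1
kind: Pod
metadata:
  name: demo-app                   # placeholder name
spec:
  containers:
    - name: app
      image: nginx:stable          # placeholder image
      volumeMounts:
        - name: shared-nfs
          mountPath: /data
          subPath: demo-app        # per-application directory on the shared volume
  volumes:
    - name: shared-nfs
      persistentVolumeClaim:
        claimName: kube-storage-claim   # the claim sketched earlier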
My home network has the usual things: a cable modem, a wireless router/firewall combo, and a bunch of Internet of Things devices. I also have a few services exposed through port forwarding, and since this is the internet, they are almost always under some sort of attack. One device in particular gives a loud warning beep when under attack, which is great, but it also acts as a pager at 3am, which frankly sucks (especially if I’m, say, sleeping).
So rather than getting woken up at odd hours, I decided to do a couple of things. First, I…
“After forty years of research and development of military personnel selection practices, it is now abundantly clear that there is no satisfactory and reliable technique for locating personnel with leadership potentials. Only the selection of specialists for particular technical jobs seems feasible.” — Morris Janowitz, The Professional Soldier, 1964
A friend posted this quote on Facebook along with the assertion that hiring for leadership is almost pointless. The implication is, of course, that there’s no way to determine future success based on interview performance alone. …
I'm an engineering leader who is passionate about reliability engineering and building sustainable/scalable teams.