Persistent Volume Claims with Synology NAS on a Raspberry Pi Kubernetes cluster
This how-to will walk through the process of setting up an application with a persistent volume claim backed by a Synology NFS shared folder.
Mounting subdirectories of a single centralized NFS share for individual services requires a specialized NFS provisioner. Maybe I’ll look into that one day. For now, there’s a separate shared folder for each application that needs a persistent volume.
(Update 2018–04–14: Shared NFS storage doesn’t require a special provisioner. An updated how-to is here: https://www.j03.co/2018/04/14/shared-nfs-and-subpaths-in-kubernetes/)
Set up the Synology NFS share
- Control Panel/Shared Folder/Create
- Fill out the info: name “prometheus-data”, description “Prometheus NFS share”.
- Uncheck “Enable Recycle Bin”.
- Click “Next”; on the encryption screen click “Next”; on the advanced settings screen click “Next”; confirm your settings and click “Apply”.
- The “Edit Shared Folder” window should pop up; select the “NFS Permissions” tab and click “Create”.
- In “Hostname or IP” enter “*”, or, if your kube cluster sits on a specific network, enter that subnet instead. Wildcard matching is insecure and shouldn’t be used for more than testing.
- Make sure the following are all checked: “Enable asynchronous”, “Allow connections from non-privileged ports”, and “Allow users to access mounted subfolders”. Click OK, and then click OK again. (You can verify the export from a cluster node as shown below.)
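Optionally, before moving on to Kubernetes, you can sanity-check the export from one of the Pi nodes. A minimal check, assuming nfs-common is already installed on that node (see the note at the end of this post) and that the share landed under a path like /volume1/prometheus-data (yours may differ):

# run on any cluster node; replace <your_ip_addr> with the Synology's address
showmount -e <your_ip_addr>
# expect output along the lines of:
#   Export list for <your_ip_addr>:
#   /volume1/prometheus-data *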
Set up the Kubernetes persistent volume
Create a file called prometheus-nfs.yaml and add the following content.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-nfs
spec:
  capacity:
    storage: 10Gi
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  nfs:
    server: <your_ip_addr>
    path: "/path_to/prometheus-data"
This will create a persistent volume called prometheus-nfs when you run this command (persistent volumes are cluster-scoped, so no namespace flag is needed): kubectl apply -f prometheus-nfs.yaml
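To confirm it registered, list the volume (the exact columns vary a bit by kubectl version):

kubectl get pv prometheus-nfs
# STATUS shows Available until a claim binds to it, then Bound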
Set up the application persistent volume claim
Create a file called prometheus-storage.yaml with the following contents.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-data
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
This will create a persistent volume claim of 10Gi against the prometheus-nfs persistent volume created a minute ago when you run the command: kubectl -n <your_namespace> apply -f prometheus-storage.yaml (the manifest above already sets namespace: monitoring, so the -n flag has to match it, or change the metadata to suit your cluster).
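You can check on the claim the same way; this assumes you kept the monitoring namespace from the manifest above:

kubectl -n monitoring get pvc prometheus-data
# STATUS should read Bound, with prometheus-nfs in the VOLUME column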
At this point you should have a persistent volume claim registered against the NFS share you’ve set up. If everything worked correctly, you should see the persistent volume in the Kubernetes dashboard under “Cluster / Persistent Volumes”, with your persistent volume claim in the “Claim” column and a status of “Bound”. If that’s all good, the real fun can begin.
Configure your application deployment
I won’t get into all the gory details of crafting your deployment config, but there are two parts that matter for mounting the storage claim into your container: the volumeMounts section of the container spec, and the volumes section of the pod template spec. Here’s a shortened version of prometheus-deploy.yaml as an example.
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  ...
  template:
    metadata:
      name: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          ...
          volumeMounts:
            - name: prometheus-data
              mountPath: /prometheus
      volumes:
        - name: prometheus-data
          persistentVolumeClaim:
            claimName: prometheus-data
In this case the volumeMounts block configures the container to mount the persistent volume claim at the designated mount point, and the volumes block below it maps the persistent volume claim into the pod as part of the pod template spec. The end result is a chain from the persistence claim created earlier, through the volumes entry in the deployment file, to the volumeMount described as part of the container config.
You can deploy and test by running the command:
kubectl -n <your_namespace> apply -f prometheus-deploy.yaml
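To confirm the pod came up and the volume mounted without opening the dashboard, you can check from the CLI (the pod name below is a placeholder; copy the real one from the get pods output):

kubectl -n <your_namespace> get pods
kubectl -n <your_namespace> describe pod <prometheus_pod_name>
# the Volumes and Events sections of the describe output show the claim and any mount errors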
Validate you’re writing to the Synology
Once your deployment has scheduled the containers as expected, you can go back to the Synology web interface and open “File Station” to validate you’re seeing data as expected. If you find your persistent volumes or persistent volume claims stuck in a Pending state, or not binding correctly, something went wrong: they will prevent your deployment from completing, and you’ll see failed container starts with messages like “Couldn’t mount”.
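When things do get stuck, describe usually tells you why (wrong export path, unreachable server, mismatched storageClassName, and so on):

kubectl describe pv prometheus-nfs
kubectl -n monitoring describe pvc prometheus-data
# check the Events section at the bottom of each for the reason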
Some other tips and tricks: if you’re using HypriotOS as a Raspberry Pi distro, you might have to apt-get install nfs-common on each node, or your kubelet nodes won’t know what to do with the NFS mount commands they’re getting from your configs.
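For reference, that’s just (run on each node, over SSH for example):

sudo apt-get update
sudo apt-get install -y nfs-common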
As always, GL;HF.