Shared NFS and SubPaths in Kubernetes

Joseph Bironas
2 min read · Apr 28, 2018

In a previous update, I talked about setting up a service-specific NFS mount path using a Synology DiskStation, and left getting shared storage for another day. Well, another day came, and I now have a common pool of storage for all my Kubernetes applications.

This follow-up is a simple how-to for replicating the results.

Set up the NFS Shared Persistent Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kube-storage-nfs
  labels:
    bucket: shared
spec:
  capacity:
    storage: 30T
  volumeMode: Filesystem
  storageClassName: slow
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs_server_ip_addr>
    path: "/path/to/kube-storage"
    readOnly: false

The important parts of this manifest are the storage capacity, the `bucket` label (you can use other labels as needed), the access modes, and the storageClassName. These all act as matching criteria: if your persistent volume claim doesn't match them, it won't bind. You also want the data to survive if a pod dies and gets rescheduled or the claim is ever deleted, so setting the persistentVolumeReclaimPolicy to `Retain` makes sure the volume and its data aren't reclaimed out from under your application.
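
Once the manifests are applied (below), the `bucket` label also gives you a quick way to list every volume in the shared pool. The label key itself is just the one chosen here, nothing special to Kubernetes:

kubectl get pv -l bucket=shared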

Set up the NFS Shared Persistent Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  resources:
    requests:
      storage: 30T
  selector:
    matchLabels:
      bucket: shared

The important bits of this file, beyond the fields mentioned above, are the labels in the selector block, which must match the labels on the persistent volume.

Apply those to your cluster

kubectl apply -f kube-storage-pv.yaml
kubectl apply -f kube-storage-pvc.yaml

Again, if these selectors don't match between your persistent volume and its claim, the claim will never bind and any pod that uses it will sit in Pending. If it seems to be taking a long time, run `kubectl describe pvc shared-data` and look at the `Events` block for more information.
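
Assuming the labels and storage class line up, both objects should report a `Bound` status shortly after being applied. A quick way to check:

kubectl get pv kube-storage-nfs
kubectl get pvc shared-data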

Finally, create and apply a deployment for the application you'd like to give access to the shared storage.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deploy
  labels:
    app: myapp
spec:
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest  # replace with your application's image
          volumeMounts:
            - name: myapp-storage
              mountPath: /path/to/data
              subPath: myapp
      volumes:
        - name: myapp-storage
          persistentVolumeClaim:
            claimName: shared-data

In this case, the `volumes` entry creates an application-specific binding for the `shared-data` persistent volume claim. It maps to `myapp-storage`, which the pod template spec then references as a volume mount. Note the use of `subPath`: it tells Kubernetes to mount a subdirectory of the volume rather than its root. This will mount the path `/path/to/kube-storage/myapp` to `/path/to/data` inside the container.
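
If you want to double-check the mount from inside a running pod, something along these lines should list the contents of the application's subdirectory (substitute a real pod name from the first command):

kubectl get pods -l app=myapp
kubectl exec -it <myapp_pod_name> -- ls /path/to/data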

You can reuse this same pattern in as many deployment manifests as you'd like. Giving each application its own `subPath` keeps application data reasonably well isolated.
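
For example, a second, hypothetical application (called otherapp here) can reference the exact same `shared-data` claim and simply pick its own subdirectory:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: otherapp-deploy
  labels:
    app: otherapp
spec:
  template:
    metadata:
      labels:
        app: otherapp
    spec:
      containers:
        - name: otherapp
          image: otherapp:latest  # hypothetical second application image
          volumeMounts:
            - name: otherapp-storage
              mountPath: /path/to/data
              subPath: otherapp  # lands at /path/to/kube-storage/otherapp on the NFS share
      volumes:
        - name: otherapp-storage
          persistentVolumeClaim:
            claimName: shared-data  # same claim as myapp, different subPath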

All in all, this process is fairly simple, and very useful if you have a NAS or large storage system you’d like to put to good use.

